
jfrog-cli-core's Introduction

jfrog-cli-core

Scanned by Frogbot

Branch Status
  • master: Tests, Static Analysis
  • dev: Tests, Static Analysis

General

jfrog-cli-core is a Go module that contains the core code components used by the JFrog CLI source code.

Pull Requests

We welcome pull requests from the community.

Guidelines

  • If the existing tests do not already cover your changes, please add tests.
  • Pull requests should be created on the dev branch.
  • Please use gofmt for formatting the code before submitting the pull request.

Tests

To run the tests, execute the following command from within the root directory of the project:

go test -v github.com/jfrog/jfrog-cli-core/v2/tests -timeout 0

jfrog-cli-core's People

Contributors

alexeivainshtein, asaf-federman, asafambar, asafgabai, attiasas, barbelity, broekema41, dependabot[bot], dimanevelev, eranturgeman, eyalb4doc, eyalbe4, eyaldelarea, freddy33, gailazar300, galusben, github-actions[bot], itaraviv, kowalczykp, liron-shalom, omerzi, or-geva, orto17, robinino, sarao1310, sverdlov93, talarian1, tamirhadad, yahavi, yoav


jfrog-cli-core's Issues

.NET Project dependencies not found when project is not located under solution directory

Describe the bug
If the .NET project file is not located in a directory below the solution file, no dependencies are found.

To Reproduce
Create a project with the following directory layout:

  • tmp\solution.sln
  • project1\project1.csproj

Run the JFrog CLI to record the dependencies when packages are restored: "jfrog dotnet restore tmp\solution.sln --build-name test --build-number 1".
project1 is reported with "Project dependencies was not found for project"

Expected behavior
I expect that dependencies are found.

Versions

  • JFrog CLI version (if applicable): 2.15.1

log message _Deleting 0xc27ae0 files_ with `rt git-lfs-clean`

Describe the bug

A strange log message is presented to the user when running rt git-lfs-clean.

Current behavior

The user sees a message starting with "Deleting 0xc27ae0 files from"; note the hexadecimal notation.

Reproduction steps

  1. Run JFROG_CLI_LOG_LEVEL=INFO jf rt git-lfs-clean with sensible parameters in a sensible environment
  2. Among other messages, find in the log:

Deleting 0xc27ae0 files

Expected behavior

A sensible message, e.g.:

Deleting 1 files
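For reference, a hexadecimal value like 0xc27ae0 is what Go prints when a pointer (or a similar reference) is handed to the logger instead of the number it points to. A minimal hypothetical sketch of the mistake and the fix (not the actual jfrog-cli-core code):

package main

import "fmt"

func main() {
	files := []string{"objects/abc123"}
	count := len(files)

	// Mistake: passing a pointer prints its address, e.g. "Deleting 0xc0000140a0 files".
	fmt.Printf("Deleting %v files\n", &count)

	// Fix: pass the value itself.
	fmt.Printf("Deleting %d files\n", count)
}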

JFrog CLI-Core version

2.23.0

JFrog CLI version (if applicable)

2.28.0

Operating system type and version

Debian Linux 11

JFrog Artifactory version

No response

JFrog Xray version

No response

Logs in 'oc start-build' might be messy

Describe the bug
When running the 'oc start-build' command with --follow flag, both stdout and stderr of the OpenShift CLI are printed to the JFrog CLI's stderr.
OpenShift CLI's stderr is written directly to JFrog CLI's stderr, but OpenShift CLI's stdout is piped and copied to JFrog CLI's stdout.
We did that because we need to read the first line of stdout, but this inconsistency might cause stdout and stderr to be interleaved and printed in a different order.
Consider using AsyncMultiWriter (which is in build-info-go).
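For context, a simplified sketch of the pattern described above (hypothetical code, not the actual jfrog-cli-core implementation): stderr goes straight to the parent's stderr, while stdout is piped so its first line can be inspected, which is why ordering between the two streams is not guaranteed.

package main

import (
	"bufio"
	"fmt"
	"io"
	"os"
	"os/exec"
)

func main() {
	// "example" is a placeholder build config name.
	cmd := exec.Command("oc", "start-build", "example", "--follow")
	cmd.Stderr = os.Stderr // written directly to our stderr

	stdout, err := cmd.StdoutPipe()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if err := cmd.Start(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}

	reader := bufio.NewReader(stdout)
	firstLine, _ := reader.ReadString('\n') // the line the CLI needs to read
	fmt.Print(firstLine)

	// The rest of stdout is copied here, while stderr bypasses this copy,
	// so the relative order of the two streams is not preserved.
	_, _ = io.Copy(os.Stdout, reader)
	_ = cmd.Wait()
}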

Expected behavior
Find a way to ensure the logs from both stdout and stderr are printed in the right order.

Allow `jf poetry publish` to use the -r/--repository option to set the deploy repo

Is your feature request related to a problem? Please describe.
Currently a user can't use a virtual pypi repository as a resolver while specifying a different repo for publishing the built artifact.

Describe the solution you'd like to see
jf poetry publish should not use the resolving repository as a deploy repository if the -r/--repository option is present.
A user should be able to have one resolving repository, such as a virtual pypi repo without a default deploy repo configured, and instead use a repo configured in poetry.

Describe alternatives you've considered
Not using jf cli and instead using poetry directly.

Additional context
Many jf poetry commands are unusable as build/publish helpers in CI.
See #835 for additional issues.

exclude-test-deps flag in auditing Gradle projects is not working

Describe the bug

Following PR #719, it seems like the --exclude-test-deps flag is not working anymore.
It looks like this flag's value is read, but not used.

Current behavior

The flag does nothing.

Reproduction steps

No response

Expected behavior

The flag should work according to its description in the documentation:

--exclude-test-deps: [Default: false] [Gradle] Set to true if you'd like to exclude Gradle test dependencies from Xray scanning.

JFrog CLI-Core version

2.43.3

JFrog CLI version (if applicable)

No response

Operating system type and version

Mac

JFrog Artifactory version

No response

JFrog Xray version

No response

Using the library silently upgrades local configuration to version 3, which makes it unusable for "jfrog" command

Describe the bug
I have a working JFrog CLI configuration in $HOME/.jfrog/jfrog-cli.conf on our CI/CD server. We are making heavy use of 'jfrog' command in lots of pipelines. As soon as one single pipeline uses the new 'jfrog-cli-core' library instead of relying on an external command, the library silently converts the configuration from version 1 to version 3, making it useless for 'jfrog':

[Error] unexpected end of JSON input

To Reproduce

  • create a local configuration $HOME/.jfrog/jfrog-cli.conf
  • run 'jfrog rt ping' to make sure it is working
  • run any program that makes use of jfrog-cli-core
  • run 'jfrog rt ping' again

Expected behavior
The library should not break the existing configuration. That is what the 'Version' field in jfrog-cli.conf is for, no?


Versions

  • JFrog CLI core version: 1.0.2
  • JFrog CLI version (if applicable): 1.36.0
  • Artifactory version: 6.11.7 rev 61107900


npm-publish does not support the "prepack" lifecycle script

Describe the bug
https://github.com/jfrog/jfrog-cli-core/blob/master/artifactory/utils/npm/pack.go#L16

The above line expects the output of npm pack to be only the filename of the tarball. However, when using the prepack lifecycle script, the output includes information about that script. Because packageFileName is incorrect, the CLI cannot deploy the tarball and throws an error akin to the following:

open /usr/src/app/foo/> @foo/[email protected] prepack /usr/src/app/foo
  > npm run build-lib
  > @foo/[email protected] build-lib /usr/src/app/foo
  > tsc --p tsconfig.lib.json
  foo-bar-9.90.0-alpha.5.tgz: no such file or directory
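For illustration, a hedged sketch of one possible approach, assuming the tarball filename is the last non-empty line that npm pack prints (this is not necessarily how jfrog-cli-core addressed it):

package main

import (
	"fmt"
	"strings"
)

// lastNonEmptyLine returns the final non-empty line of the npm pack output,
// which is where the tarball filename appears even when lifecycle scripts
// such as "prepack" write extra lines before it.
func lastNonEmptyLine(output string) string {
	lines := strings.Split(strings.TrimSpace(output), "\n")
	for i := len(lines) - 1; i >= 0; i-- {
		if line := strings.TrimSpace(lines[i]); line != "" {
			return line
		}
	}
	return ""
}

func main() {
	out := "> @foo/pkg prepack /usr/src/app/foo\n> npm run build-lib\nfoo-bar-9.90.0-alpha.5.tgz\n"
	fmt.Println(lastNonEmptyLine(out)) // foo-bar-9.90.0-alpha.5.tgz
}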

To Reproduce
Attempt to deploy an npm package via the JFrog CLI npm-publish command with any prepack lifecycle script in package.json.

Expected behavior
The prepack script should run, preparing files for deployment, and the tarball that npm created should be published.

Versions

  • JFrog CLI version (if applicable): 1.50.2

jf docker scan --format=table does not hint on scan_id if no issues are found

Describe the bug

Our workflow depends on the scan_id for several purposes.

With JFrog CLI 2.51.1 there is one change coming with

https://github.com/jfrog/jfrog-cli-core/pull/994/files#diff-0cb3bf0da9f42f148ec1c1a33204b6506de9f1a92bc273c03c9aa9fafb22b15eR135

so that for any call to

jf docker scan --format=table ...

that returns 0 vulnerabilities, we no longer have access to the scan_id information as we used to have for the table format. (Using the json format, we always get this information directly.)

Now we would need to set JFROG_CLI_LOG_LEVEL=DEBUG and parse stderr to retrieve this information, as this is the
only hint for this piece of information.

See Reproduction steps:

sf-user@sf-dev-tga:~$ grep '/xray/api/v1/scan/graph/' stderr
07:49:49 [Debug] Sending HTTP GET request to: http://192.168.2.7:8082/xray/api/v1/scan/graph/c452b793-255f-434b-5c33-71fd58cd8be4?include_vulnerabilities=true
07:49:54 [Debug] Sending HTTP GET request to: http://192.168.2.7:8082/xray/api/v1/scan/graph/c452b793-255f-434b-5c33-71fd58cd8be4?include_vulnerabilities=true

For several reasons we would prefer not to use this DEBUG hack but get the essential information either in the same way as before or maybe even in a better way.

Current behavior

See Reproduction steps:

sf-user@sf-dev-tga:~$ cat stdout

Vulnerable Components
+-------------------------------------+
| No vulnerable components were found |
+-------------------------------------+

Reproduction steps

sf-user@sf-dev-tga:~$ docker image list
REPOSITORY TAG IMAGE ID CREATED SIZE

sf-user@sf-dev-tga:~$ docker pull hello-world
Using default tag: latest
latest: Pulling from library/hello-world
719385e32844: Pull complete
Digest: sha256:88ec0acaa3ec199d3b7eaf73588f4518c25f9d34f58ce9a0df68429c5af48e8d
Status: Downloaded newer image for hello-world:latest
docker.io/library/hello-world:latest

sf-user@sf-dev-tga:~$ docker image list
REPOSITORY TAG IMAGE ID CREATED SIZE
hello-world latest 9c7a54a9a43c 6 months ago 13.3kB

sf-user@sf-dev-tga:~$ JFROG_CLI_LOG_LEVEL=DEBUG jf docker scan 9c7a54a9a43c --format=table > stdout 2> stderr

FYI: For testing an image including vulnerabilities (showing the expected behavior), I have been using

docker pull jenkins/jenkins

Expected behavior

sf-user@sf-dev-tga:~$ cat stdout
The full scan results are available here: /tmp/jfrog.cli.temp.-1699604631-2298637964

Vulnerable Components
+-------------------------------------+
| No vulnerable components were found |
+-------------------------------------+

and some existing file containing the scan_id:

sf-user@sf-dev-tga:~$ grep scan_id /tmp/jfrog.cli.temp.-1699604631-2298637964
"scan_id": "029b9813-bc10-4c86-4b3c-f3df156db27f",

Ideally we would love to see something like this:

sf-user@sf-dev-tga:~$ cat stdout
The full scan results are available here: /tmp/jfrog.cli.temp.-1699604631-2298637964

Vulnerable Components
scan_id: 029b9813-bc10-4c86-4b3c-f3df156db27f
+-------------------------------------+
| No vulnerable components were found |
+-------------------------------------+

JFrog CLI-Core version

v2.46.0

JFrog CLI version (if applicable)

jf version 2.51.1

Operating system type and version

Linux sf-dev-tga 5.10.0-26-cloud-amd64 #1 SMP Debian 5.10.197-1 (2023-09-29) x86_64 GNU/Linux

JFrog Artifactory version

7.46.10

JFrog Xray version

3.61.5

Bump gradle-dep-tree dependency to 3.0.0

Describe the bug

v2.x of the gradle-dep-tree plugin does not handle transitive circular dependencies well and causes a Heap Space error. v3.0.0 fixes this.

Current behavior

When running against a project with transitive circular dependencies, jf audit fails.

Reproduction steps

No response

Expected behavior

No response

JFrog CLI-Core version

2.37.0

JFrog CLI version (if applicable)

2.42.0

Operating system type and version

MacOS Ventura

JFrog Artifactory version

No response

JFrog Xray version

No response

Improvements for Yarn

Is your feature request related to a problem? Please describe.
There is very similar code in npm-install and yarn for handling the backup (and restore) process of the config files (.npmrc and .yarnrc.yml).

Describe the solution you'd like to see
Handle these backup files in one place.
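A minimal sketch of what such a shared helper could look like (hypothetical names and behavior, not jfrog-cli-core's actual API):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// backupFile copies a config file (e.g. .npmrc or .yarnrc.yml) aside and
// returns a restore function that puts the original content back.
func backupFile(path, backupSuffix string) (restore func() error, err error) {
	data, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		// Nothing to back up: restoring just removes whatever was written meanwhile.
		return func() error {
			if rmErr := os.Remove(path); rmErr != nil && !os.IsNotExist(rmErr) {
				return rmErr
			}
			return nil
		}, nil
	}
	if err != nil {
		return nil, err
	}
	backupPath := filepath.Clean(path) + backupSuffix
	if err := os.WriteFile(backupPath, data, 0o600); err != nil {
		return nil, err
	}
	return func() error {
		if err := os.WriteFile(path, data, 0o600); err != nil {
			return err
		}
		return os.Remove(backupPath)
	}, nil
}

func main() {
	restore, err := backupFile(".npmrc", ".jfrog.backup")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer func() { _ = restore() }()
	// ... temporarily rewrite .npmrc for the duration of the command ...
}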

Additional context
Also, another small thing that could be improved: when getting npmAuth using an access token, the token is concatenated into the npmAuth string and later extracted from it. This process can be avoided.

audit: Vulnerabilities without a CVE-ID are not scanned by Contextual Analysis

Describe the bug

When running jf audit, the JFrog CLI shows an Undetermined result for vulnerabilities that do not have a CVE ID (only an XRAY ID), even if contextual scanning of these vulnerabilities is supported when they are identified by their respective XRAY ID.

Current behavior

The CLI only adds CVE IDs to the generated configuration YAML that is passed to applicabilityScanConfig. If a detected vulnerability only has an XRAY ID (no CVE ID), then it is not passed to the contextual analysis scan.

Reproduction steps

  1. Download jackson-rce-via-spel.zip

  2. Run -

mkdir jackson_test
cd jackson_test
unzip ../jackson-rce-via-spel.zip
jf audit --extended-table
  3. Note that the following vulnerabilities have an "Undetermined" contextual analysis -
  • XRAY-122085
  • XRAY-122084
  • XRAY-138371

Expected behavior

The CLI should add XRAY-IDs (when required) to the generated configuration YAML that is passed to applicabilityScanConfig. Specifically the relevant fields are CveWhitelist and IndirectCveWhitelist.

For example -

cve-whitelist:
  - CVE-2020-11619
  ...
  - XRAY-122085
  - XRAY-122084
  - XRAY-138371

When the XRAY IDs are passed, the applicability manager will know to return the correct response.

In the example above, the following XRAY IDs should show up as "Not Applicable" (instead of "Undetermined") -
- XRAY-122085
- XRAY-122084
- XRAY-138371

JFrog CLI-Core version

2.47.3

JFrog CLI version (if applicable)

2.52.2

Operating system type and version

Linux - Ubuntu 22.04

JFrog Artifactory version

No response

JFrog Xray version

No response

Gradle Args with spaces

Describe the bug

This issue is related to jfrog/jfrog-cli#921 (Mvn args with spaces) and is needed to pass multiple tags containing spaces with the Gradle command. For example:
-Dkarate.options="--tags=@smoke --tags=~@regression"

#830 is the PR created to address this issue. That PR also depends on
jfrog/jfrog-cli#2029 being moved forward.

Current behavior

This command fails when passed to Gradle via the JFrog CLI like so:
'jfrog rt gradle clean assemble myTest -Dkarate.options="--tags=@smoke --tags=~@regression"'

It gives an error that '--tags' is not a command line argument: the first --tags option is recognized, but not the second, due to the space.
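For illustration, a quote-aware splitting sketch (illustrative only; not jfrog-cli-core's actual argument parsing) that keeps the quoted section together as a single argument:

package main

import (
	"fmt"
	"strings"
	"unicode"
)

// splitArgs splits a command line into arguments, keeping quoted sections
// (e.g. -Dkarate.options="--tags=@smoke --tags=~@regression") together.
func splitArgs(s string) []string {
	var args []string
	var current strings.Builder
	inQuotes := false
	for _, r := range s {
		switch {
		case r == '"':
			inQuotes = !inQuotes
		case unicode.IsSpace(r) && !inQuotes:
			if current.Len() > 0 {
				args = append(args, current.String())
				current.Reset()
			}
		default:
			current.WriteRune(r)
		}
	}
	if current.Len() > 0 {
		args = append(args, current.String())
	}
	return args
}

func main() {
	cmd := `clean assemble myTest -Dkarate.options="--tags=@smoke --tags=~@regression"`
	fmt.Printf("%q\n", splitArgs(cmd))
	// ["clean" "assemble" "myTest" "-Dkarate.options=--tags=@smoke --tags=~@regression"]
}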

Reproduction steps

No response

Expected behavior

The Gradle command should accept arguments containing spaces on the command line, so multiple tags can be passed:
'jfrog rt gradle clean assemble myTest -Dkarate.options="--tags=@smoke --tags=~@regression"'

JFrog CLI-Core version

1.51.1

JFrog CLI version (if applicable)

No response

Operating system type and version

linux

JFrog Artifactory version

No response

JFrog Xray version

No response

SIGSEGV in jfrog-cli-core/artifactory/commands/generic/upload.go

Problem
Panic when trying to upload a buffer to Artifactory (equivalent to jfrog rt upload).

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x9006f9]

goroutine 1 [running]:
github.com/jfrog/jfrog-cli-core/artifactory/commands/generic.(*UploadCommand).upload(0xc00010fda8, 0xc0003da560, 0x40)
        ~/go/pkg/mod/github.com/jfrog/[email protected]/artifactory/commands/generic/upload.go:73 +0x79
github.com/jfrog/jfrog-cli-core/artifactory/commands/generic.(*UploadCommand).Run(...)
        ~/go/pkg/mod/github.com/jfrog/[email protected]/artifactory/commands/generic/upload.go:54

Reproduce

Run this code:

        cmd := generic.NewUploadCommand()
        if err := cmd.Run(); err != nil {
                log.Fatal(err)
        }

Expected behavior
The compiler should refuse to build incomplete upload commands. Second best, users of the library should not have to guess what 'GoBeans' / 'POGO' must be used:

func (uc *UploadCommand) SetBuildConfiguration(buildConfiguration *utils.BuildConfiguration) *UploadCommand {
	uc.buildConfiguration = buildConfiguration
	return uc
}

func (uc *UploadCommand) UploadConfiguration() *utils.UploadConfiguration {
	return uc.uploadConfiguration
}
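One way to satisfy this expectation, sketched with hypothetical simplified types (not the library's actual API), is to validate required fields at the start of Run and return an error instead of dereferencing nil:

package main

import (
	"errors"
	"fmt"
)

// Hypothetical, simplified stand-ins for the real command structs.
type uploadConfiguration struct{}
type buildConfiguration struct{}

type UploadCommand struct {
	uploadConfiguration *uploadConfiguration
	buildConfiguration  *buildConfiguration
}

// Run validates required fields up front and returns an error instead of
// panicking on a nil dereference. Sketch only; not the library's actual code.
func (uc *UploadCommand) Run() error {
	if uc.uploadConfiguration == nil {
		return errors.New("upload configuration is not set")
	}
	if uc.buildConfiguration == nil {
		return errors.New("build configuration is not set")
	}
	// ... perform the upload ...
	return nil
}

func main() {
	cmd := &UploadCommand{}
	if err := cmd.Run(); err != nil {
		fmt.Println(err) // upload configuration is not set
	}
}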

Additional context
Maybe I am missing documentation or examples somewhere?

panic when running build-docker-create command if image name does not include slash or colon

Describe the bug

return path.Join(image.tag[indexOfFirstSlash:], "latest")

This line will panic if image.tag does not contain any slashes or colons.
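For illustration, a minimal sketch of the failure mode and a guarded alternative (hypothetical code, not the exact function from image.go):

package main

import (
	"fmt"
	"path"
	"strings"
)

// latestPath mimics the problematic pattern: slicing from the first slash
// panics with an out-of-range index when the tag contains no slash at all.
func latestPath(tag string) (string, error) {
	indexOfFirstSlash := strings.Index(tag, "/")
	if indexOfFirstSlash == -1 {
		// Guarded version: report the invalid input instead of panicking.
		return "", fmt.Errorf("invalid image tag %q: expected <registry>/<repo>[:<tag>]", tag)
	}
	return path.Join(tag[indexOfFirstSlash:], "latest"), nil
}

func main() {
	if _, err := latestPath("myimage"); err != nil {
		fmt.Println(err)
	}
}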

To Reproduce
Steps to reproduce the behavior

Run the jfrog rt bdc command with an image file where the name doesn't include a slash.

Expected behavior

It should give an error about the invalid input, but should not crash.


Versions

  • JFrog CLI core version: 1.4.0
  • JFrog CLI version (if applicable): 1.45.0
  • Artifactory version: n/a


Inclusion of maven-dep-tree has indirectly increased minimum Maven version requirement for jfrog-cli

Describe the bug

When using jf audit with the JFrog CLI, versions newer than 2.51.1 require a minimum Maven version of 3.6.3 due to the inclusion of maven-dep-tree. This results in the following error when attempting to run the jf audit command using an older version of Maven with a JFrog CLI version newer than 2.51.1.

The plugin com.jfrog:maven-dep-tree:1.0.2 requires Maven 3.6.3
The plugin com.jfrog:maven-dep-tree:1.0.10 requires Maven 3.6.3

It appears this impacts any JFrog CLI version released after November 19th, 2023, when PR #1023 was merged. That is, any jfrog-cli version newer than 2.51.1, as it includes the breaking change. Based on this, the first impacted version of jfrog-cli is 2.52.0.

The dependency version was also bumped in PR #1097 from 1.0.2 to 1.0.10.

The maven.min.version definition in the pom.xml file that specifies Maven 3.6.3 is in the plugin repository here
https://github.com/jfrog/maven-dep-tree/blob/main/pom.xml#L19

Current behavior

jf audit produces the following error when running on a version of Maven older than 3.6.3 (two different examples, as the dependency version has been bumped):

The plugin com.jfrog:maven-dep-tree:1.0.2 requires Maven 3.6.3
The plugin com.jfrog:maven-dep-tree:1.0.10 requires Maven 3.6.3

Reproduction steps

  1. Install JFrog CLI newer than 2.51.1 on a system with Maven older than 3.6.3 (eg. Red Hat Enterprise Linux 8)
  2. Execute the JFrog CLI jf audit command with correct options/parameters
  3. Command will fail due to Maven not being at required 3.6.3 version for maven-dep-tree dependency

Expected behavior

Expected behaviour and potential actions to resolve the issue:

  1. That the command executes correctly on older versions of Maven.
    Although the official Maven support policy states that versions older than 3.6.3 are now out of support, there may be enterprise customers using RHEL and derivatives which still ship with OS-included 3.5.4 that is actively supported via backports by the OS vendor. It may also be unfeasible to support versions this old, which could be documented.

  2. That the version requirement in maven-dep-tree is determined to be higher than technically necessary, and it is lowered to match the core JFrog CLI components so that it doesn't increase the minimum Maven requirement, and new versions of JFrog CLI will continue to work on older Maven versions until there is a technical requirement pushing the Maven version up.

  3. That the requirement for minimum version of Maven 3.6.3 is documented and defined in the JFrog CLI dependencies so that it doesn't surface to the end user through a plugin install error, but instead presents as a requirement for JFrog CLI at installation/execution time.

JFrog CLI-Core version

Version included in JFrog CLI > 2.51.1

JFrog CLI version (if applicable)

> 2.51.1

Operating system type and version

Red Hat Enterprise Linux

JFrog Artifactory version

N/A

JFrog Xray version

N/A

Missing "ForceNugetAuthentication" in repository template

Describe the bug
It is not possible to create a NuGet local or remote repository with the "ForceNugetAuthentication" option enabled.
When running jfrog rt rpt, this option is not offered in the "wizard".

To Reproduce

  1. Run jfrog rt rtc nuget-template.json
  2. Select "Create" template
  3. Select "local" or "remote"
  4. Select "nuget"

Expected behavior
Being able to create a NuGet repository with all the options available in the UI.


Versions

  • JFrog CLI core version: 2.8.3
  • JFrog CLI version (if applicable): 2.11.0
  • Artifactory version: 7.31.13

Additional context
After investigating, it might be related to a typo here: shouldn't it be "ForceNugetAuthentication" instead of "ForceMavenAuthentication"?

Exclude replications when excluding repositories

Is your feature request related to a problem? Please describe.
The transfer-config command allows skipping repositories via the --exclude-repos option. However, it doesn't skip the replications set on those excluded repositories (local and remote), which breaks the import on the target server and ends up with the following error:
[ERROR] Failed system import: ConfigurationException: Found replication for missing remote repo <MY_REMOTE_REPO_KEY>

Describe the solution you'd like to see
The CLI should be smart enough to exclude replications related to excluded repositories

Describe alternatives you've considered
NA

Additional context
NA

Improve the error log on build-publish with project when build is not connected to project

When uploading a file with a build flag and without a project flag, for example:
jfrog rt u web3.xml RepoName --build-name="buildName" --build-number=1
and then creating a build-publish command with a project flag:
jfrog rt bp buildName 1 --project="myproj"
The command fails and the following error log is shown on the screen:
[Error] open /var/folders/9s/jyd3129n247dt9d007btkrxw0000gn/T/jfrog/builds/bWljQmxkM185X21pY1Rlc3Q=/partials/details: no such file or directory
The reason behind the failure is that the CLI is trying to find the partials dir with a name generated from the project name, in the createBuildInfoFromPartials function.

I think the error log in the CLI should be much clearer and more explanatory.

The jf c show command should include the actual token ID

Is your feature request related to a problem? Please describe.
The jf c show command should include the actual token ID in the "Access token:..." output.
Currently it is identical regardless of which token is in effect at the time.

Describe the solution you'd like to see
jf c show command should include the actual token ID

Describe alternatives you've considered
NA

Additional context
NA

jf poetry install runs poetry update

Describe the bug

I am trying to use the jf CLI to install the Python dependencies. When I run jf poetry install, it internally runs poetry update, which is not intended, since it is used in the pipeline and I don't want my lock file to be updated.

What I see is that there is always a call chain as follows:
Run --> SetPypiRepoUrlWithCredentials --> ConfigPoetryRepo --> addRepoToPyprojectFile, which runs a poetry update command every time.

Current behavior

👾 install:ci | jf poetry-config --repo-resolve $JFROG_PLATFORM_PYPI_REPO
16:32:19 [Debug] JFrog CLI version: 2.38.4
16:32:19 [Debug] OS/Arch: linux/amd64
16:32:19 [Info] poetry build config successfully created.
👾 install:ci | jf poetry install --sync
16:32:19 [Debug] JFrog CLI version: 2.38.4
16:32:19 [Debug] OS/Arch: linux/amd64
16:32:19 [Debug] Preparing to read the config file /builds/test/folder/solution-teams/the-awesome-team/folder-awscdk-python-app-poetry2/.jfrog/projects/poetry.yaml
16:32:19 [Debug] Found resolver in the config file /builds/test/folder/solution-teams/the-awesome-team/folder-awscdk-python-app-poetry2/.jfrog/projects/poetry.yaml
16:32:19 [Info] Running Poetry install.
16:32:19 [Debug] Preparing build prerequisites...
16:32:19 [Debug] Saving build general details at: /tmp/jfrog/builds/1339a8cb9483d833fdde458e1b4402202949ab77a6328d843980fd78485f1d55/partials
16:32:19 [Info] Running Poetry config repositories.jfrog-server https://url-of-the-artifactory/artifactory/api/pypi/tat-pypi/simple
16:32:19 [Debug] Usage Report: Sending info...
16:32:19 [Debug] Sending HTTP GET request to: https://url-of-the-artifactory/artifactory/api/system/version
16:32:19 [Debug] Artifactory response: 200
16:32:19 [Debug] JFrog Artifactory version is: 7.59.9
16:32:19 [Debug] Sending HTTP POST request to: https://url-of-the-artifactory/artifactory/api/system/usage
16:32:19 [Debug] Usage Report: Usage info sent successfully. Artifactory response: 200
16:32:19 [Info] Running Poetry config ***
Using a plaintext file to store credentials
16:32:20 [Info] Added tool.poetry.source name:"jfrog-server" url:"https://url-of-the-artifactory/artifactory/api/pypi/tat-pypi/simple"
16:32:20 [Info] Running Poetry update
Updating dependencies
Resolving dependencies...

Writing lock file

Package operations: 22 installs, 0 updates, 0 removals

• Installing attrs (23.1.0)
• Installing exceptiongroup (1.1.1)
• Installing six (1.16.0)
• Installing cattrs (22.2.0)
• Installing importlib-resources (5.12.0)
• Installing python-dateutil (2.8.2)
• Installing typeguard (2.13.3)
• Installing publication (0.0.3)
• Installing typing-extensions (4.6.3)
• Installing iniconfig (2.0.0)
• Installing jsii (1.83.0)
• Installing packaging (23.1)
• Installing pluggy (1.0.0)
• Installing tomli (2.0.1)
• Installing aws-cdk-asset-awscli-v1 (2.2.189)
• Installing aws-cdk-asset-kubectl-v20 (2.1.1)
• Installing aws-cdk-asset-node-proxy-agent-v5 (2.0.163)
• Installing constructs (10.2.52)
• Installing pytest (7.3.1)
• Installing coverage (7.2.7)
• Installing aws-cdk-lib (2.83.1)
• Installing pytest-cov (4.1.0)
/root/.cache/pypoetry/virtualenvs/folder-awscdk-python-app-poetry2-2HAYbF5F-py3.10
Installing dependencies from lock file

Finding the necessary packages for the current system

Package operations: 0 installs, 0 updates, 0 removals, 22 skipped

• Installing attrs (23.1.0): Skipped for the following reason: Already installed
• Installing aws-cdk-asset-awscli-v1 (2.2.189): Skipped for the following reason: Already installed
• Installing aws-cdk-asset-kubectl-v20 (2.1.1): Skipped for the following reason: Already installed
• Installing aws-cdk-asset-node-proxy-agent-v5 (2.0.163): Skipped for the following reason: Already installed
• Installing aws-cdk-lib (2.83.1): Skipped for the following reason: Already installed
• Installing cattrs (22.2.0): Skipped for the following reason: Already installed
• Installing constructs (10.2.52): Skipped for the following reason: Already installed
• Installing exceptiongroup (1.1.1): Skipped for the following reason: Already installed
• Installing coverage (7.2.7): Skipped for the following reason: Already installed
• Installing importlib-resources (5.12.0): Skipped for the following reason: Already installed
• Installing pytest (7.3.1): Skipped for the following reason: Already installed
• Installing pluggy (1.0.0): Skipped for the following reason: Already installed
• Installing iniconfig (2.0.0): Skipped for the following reason: Already installed
• Installing pytest-cov (4.1.0): Skipped for the following reason: Already installed
• Installing tomli (2.0.1): Skipped for the following reason: Already installed
• Installing packaging (23.1): Skipped for the following reason: Already installed
• Installing typing-extensions (4.6.3): Skipped for the following reason: Already installed
• Installing six (1.16.0): Skipped for the following reason: Already installed
• Installing publication (0.0.3): Skipped for the following reason: Already installed
• Installing python-dateutil (2.8.2): Skipped for the following reason: Already installed
• Installing typeguard (2.13.3): Skipped for the following reason: Already installed
• Installing jsii (1.83.0): Skipped for the following reason: Already installed

Installing the current project: folder-awscdk-python-app-poetry2 (0.0.0)

Reproduction steps

No response

Expected behavior

No response

JFrog CLI-Core version

2.34.7

JFrog CLI version (if applicable)

2.38.4

Operating system type and version

mac, linux

JFrog Artifactory version

7.59.9

JFrog Xray version

No response

improvement for transfer-config : specify a target path for the generated export

Is your feature request related to a problem? Please describe.
The transfer-config command generates a system export in the OS temp folder (see here). This can be troublesome when:

  • artifactory is running in a container
  • I can't install / am not authorized to install anything in the container

Describe the solution you'd like to see
When running the command, it would be nice to specify the target path where the generated export is saved. That path could then be a Docker volume mounted on the host machine and so accessible outside the container.

Describe alternatives you've considered
we could also use the JFROG_CLI_TEMP_DIR env variable like here

Additional context
It would be very useful for Kubernetes installations which use the CLI in a sidecar container.

jfrog rt dotnet restore inconsistently fails with nuget.org connection timeout

Describe the bug
Full transcript at jfrog/jfrog-cli#1011.
jfrog rt dotnet restore uses v2 protocol by default if config is not explicitly supplied. This issue can be seen in the artifactory-service logs with packages being pulled from the v2 nuget endpoint.

Contacted jfrog support who confirmed that v2 is unreliable, and v3 should be used.

Confirmed that using the dotnet CLI or NuGet CLI correctly uses the v3 endpoint.

When configuring the temp NuGet.config file, the JFrog CLI omits the required protocolVersion="3" attribute.

In func (dc *DotnetCommand) InitNewConfig(configDirPath string) (configFile *os.File, err error), the code builds the source URL, setting the v3 (default) or v2 NuGet feed depending on the flag. This part works well, but an additional attribute protocolVersion="3" in NuGet.config is required for a v3 feed. It isn't enough just to set the correct sourceUrl.

The difficulty here is that the native NuGet CLI (first branch in the code below) doesn't support specifying protocolVersion; the config file would have to be edited after the fact. As such, it may be better to remove the 'nuget cli' branch and inject the protocolVersion using a formatted string: modify the constant dotnet.ConfigFileFormat to explicitly include the protocol version string, then set it to 2 or 3 depending on the dc.useNugetV2 value.

	if dc.useNugetAddSource {
		err = dc.AddNugetAuthToConfig(dc.toolchainType, configFile, sourceUrl, user, password)
	} else {
		_, err = fmt.Fprintf(configFile, dotnet.ConfigFileFormat, sourceUrl, user, password)
	}
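As a hedged sketch of the formatted-string idea above (hypothetical template and names; the real dotnet.ConfigFileFormat constant differs), the protocol version could be injected into the generated NuGet.config like this:

package main

import "fmt"

// Hypothetical config template; the attribute that matters here is
// protocolVersion, which NuGet needs in order to treat the source as a v3
// feed (v2 is the default when the attribute is omitted).
const configFileFormat = `<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="JFrogCli" value="%s" protocolVersion="%d" />
  </packageSources>
  <packageSourceCredentials>
    <JFrogCli>
      <add key="Username" value="%s" />
      <add key="ClearTextPassword" value="%s" />
    </JFrogCli>
  </packageSourceCredentials>
</configuration>
`

func main() {
	sourceURL := "https://example.jfrog.io/artifactory/api/nuget/v3/nuget-virtual"
	useNugetV2 := false
	protocolVersion := 3
	if useNugetV2 {
		protocolVersion = 2
	}
	fmt.Printf(configFileFormat, sourceURL, protocolVersion, "user", "password")
}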

To Reproduce
Unable to provide code to reproduce as we experience this as part of our Cloud Artifactory setup. Our configuration is:

  • A local nuget repository - default configuration
  • A remote nuget repository - default configuration pointing to nuget.org
  • A virtual nuget repository that includes the above two repositories
  • All repos set for private (auth) access

When running the following in GHA

jfrog rt dotnet-config --repo-resolve "${{ env.NUGET_JFROG_REPO }}" --server-id-resolve "${{ secrets.JFROG_SERVER }}"
jfrog rt dotnet restore $SLN_NAME

Consistently fails with the following error (although not always the same package)

Error : The feed 'JFrogCli [https://***.jfrog.io/***/api/nuget/***]' lists package 'Microsoft.AspNetCore.Hosting.2.2.7' but multiple attempts to download the nupkg have failed. The feed is either invalid or required packages were removed while the current operation was in progress. Verify the package exists on the feed and try again. 

Artifactory Service Log shows (sample - multiple timeout exceptions present):
2021-03-16T08:26:10.648Z [jfrt ] [WARN ] [e6edd6ff677f9b80] [o.a.r.RemoteRepoBase:505 ] [p-nio-8081-exec-5788] - nuget-remote: Error in getting information for 'System.Reflection.Metadata.1.6.0.nupkg' (Failed retrieving resource from https://www.nuget.org/api/v2/package/System.Reflection.Metadata/1.6.0: Connect to globalcdn.nuget.org:443 [globalcdn.nuget.org/152.199.23.209] failed: connect timed out).

Artifactory Request Log shows 404:
2021-03-16T08:26:10.649Z|***|20.186.104.***|***-user|GET|/api/nuget/***/Download/System.Reflection.Metadata/1.6.0|404|-1|0|30842|NuGet .NET Core MSBuild Task/5.7.0 (Linux 5.4.0-1040-azure #42-Ubuntu SMP Fri Feb 5 15:39:06 UTC 2021)

Expected behavior
Artifactory Service Log should show that v3 endpoint is being consumed.
jfrog rt dotnet restore should succeed and experience no timeout issues when hitting nuget.org remote repo.



Additional context

  • JFrog CLI version: 1.39.7 (from jfrog/setup-jfrog-cli@v1)
  • JFrog CLI operating system: Ubuntu 20.04.2 LTS
  • Artifactory Version: Cloud / SaaS Artifactory

jf mvn ignores .mvn/jvm.config, .mvn/settings.xml

Describe the bug

Like jfrog/jenkins-artifactory-plugin#704 but for jf mvn, and it's the entirety of the .mvn directory that jf mvn seems to ignore. For .mvn/jvm.config that means no in-process compiler with Error Prone or Checker Framework (except with manual configuration of JFrog); for .mvn/settings.xml it means one avenue of mirrors setup is unavailable (and I wonder if JFrog's explicit -s means that global user settings are also ignored...?).

JFrog CLI-Core version

2

Operating system type and version

File-Locking for ~/.jfrog/jfrog-cli.conf.v5 is unreliable

Describe the bug
When concurrently executing configuration commands from jfrog cli, the config file in ~/.jfrog/jfrog-cli.conf.v5 may become corrupted. This causes all jfrog cli commands that are executed later to fail with the error message "[Error] invalid character 't' after top-level value" (See corrupted json below under 'Screenshots')

To Reproduce
The relevant operations are:

  1. lock.currentTime = time.Now().UnixNano()
  2. file, err := os.OpenFile(lock.fileName, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0666)
  3. filesList, err := fileutils.ListFiles(filepath.Dir(lock.fileName), false)

The files in the lockfile directory are sorted according to the timestamp embedded in the filename. The oldest file containing a PID that is still running "wins" the lock and may go ahead to change the config file.
A possible cause for the observed behavior is the following sequence of events where two processes P1 and P2 want to access the config file concurrently:

  • P1: time.now()
  • P2: time.now()
  • P2: os.OpenFile()
  • P2: fileutils.ListFiles() -> yay I'm the oldest one -> goes ahead to change the config file
  • P1: os.OpenFile()
  • P1: fileutils.ListFiles() -> yay I'm the oldes... ☠️

Expected behavior
The lockfile mechanism works reliably.

Screenshots

$ cat ~/.jfrog/jfrog-cli.conf.v5
{
  "servers": [],
  "version": "5"
}toryUrl": "*redacted*",
      "user": "*redacted*",
      "password": "*redacted*",
      "serverId": "*redacted*",
      "isDefault": true
    }
  ],
  "version": "5"
}

Versions

  • JFrog CLI core version: master
  • JFrog CLI version (if applicable): 2.12.1
  • Artifactory version: 7.33.12

Additional context
We have multiple Azure-Pipeline Agents running on the same self-hosted Ubuntu 20.04.3 machine, executing tasks from the Jfrog-Azure-Devops Plugin in parallel. The plugin uses jfrog-cli internally.

Possible Solution
Use file-creation or file-modification time from the filesystem as the sorting key. Since this might have lower resolution, a collision of identical timestamps has a higher probability. When a process recognizes that two files have the same timestamp, it can remove its own file, wait a random amount of time (this is important to prevent two processes going back and forth forever), and retry the operation.
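A rough sketch of that retry-on-collision idea, heavily simplified (hypothetical code, not the project's lock implementation):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"os"
	"path/filepath"
	"time"
)

// acquireLock creates a lock file, then checks whether it is the oldest one
// by file modification time; on a timestamp collision it removes its own
// file, backs off for a random duration and retries.
func acquireLock(lockDir string) (string, error) {
	own := filepath.Join(lockDir, fmt.Sprintf("lock.%d", os.Getpid()))
	for attempt := 0; attempt < 10; attempt++ {
		if err := os.WriteFile(own, nil, 0o600); err != nil {
			return "", err
		}
		ownInfo, err := os.Stat(own)
		if err != nil {
			return "", err
		}
		entries, err := os.ReadDir(lockDir)
		if err != nil {
			return "", err
		}
		oldest, collision := true, false
		for _, e := range entries {
			if e.Name() == filepath.Base(own) {
				continue
			}
			info, err := e.Info()
			if err != nil {
				continue
			}
			if info.ModTime().Equal(ownInfo.ModTime()) {
				collision = true
			} else if info.ModTime().Before(ownInfo.ModTime()) {
				oldest = false
			}
		}
		if oldest && !collision {
			return own, nil // we hold the lock
		}
		// Collision, or another process is older: drop our file, back off, retry.
		_ = os.Remove(own)
		time.Sleep(time.Duration(rand.Intn(200)) * time.Millisecond)
	}
	return "", errors.New("could not acquire config lock")
}

func main() {
	dir, err := os.MkdirTemp("", "locks")
	if err != nil {
		fmt.Println(err)
		return
	}
	lock, err := acquireLock(dir)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer os.Remove(lock)
	fmt.Println("acquired", lock)
}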

Add correct module type for remote dependencies added by build-add-dependencies command

Is your feature request related to a problem? Please describe.
#325 added a Generic module type to all modules added by the command, for both local and remote dependencies. The module type was missing up until that point.

Describe the solution you'd like to see
For remote dependencies, the module type can be determined by the type of the repository from which the dependency was resolved.

Docker - Optimize Docker collect build info from remote repository

The process of collecting build info when pulling an image from a remote repository may encounter a fat manifest.
One step in the whole process involves using the docker manifest inspect command, which is an experimental Docker command.
To avoid this, the docker manifest inspect command must be replaced with the same logic that is implemented in the Artifactory Jenkins Plugin.
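As an illustration of that direction (not the actual implementation in either project), a manifest list (fat manifest) can be requested directly from the registry API by setting the appropriate Accept header; the URL below is hypothetical:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Hypothetical registry URL and image path, for illustration only.
	url := "https://example.jfrog.io/v2/docker-local/hello-world/manifests/latest"
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		panic(err)
	}
	// Ask for a manifest list (fat manifest); single-platform manifests use
	// application/vnd.docker.distribution.manifest.v2+json instead.
	req.Header.Set("Accept", "application/vnd.docker.distribution.manifest.list.v2+json")
	// Real code would also set authentication headers.
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}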

Using 'docker push' to overwrite a tag fails to collect correct build-infos

Describe the bug

Hello JFrog Team,

we just noticed an interesting change in behavior of the JFrog CLI that appeared with the upgrade of the CLI from major version 1.x to 2.x. For us this causes a regression when we try to promote a docker image tag that already existed and that was overwritten in a recent push. This now leads to a warning and no build-info metadata being attached to the image, which causes the image layers not to be promoted correctly.

Our setup is as following, we have three repositories:

  1. A virtual repo (i.e., docker).
  2. Two local repos (i.e, docker-dev, docker-stable).
  3. The virtual repo references the local repos in the order stable, then dev. Thus stable images will shadow dev images; this could be part of the issue we are seeing.

We push an image with a tag such as foobar:latest which will first appear in docker-dev and then we want to promote it to docker-stable. Then later we run another pipeline, push, and try to promote to stable again, overwriting the older tag. But after the pipeline completes, we are still seeing the original image tag in the stable repository, while the new tag remains in the dev repository.

This used to work fine with CLI version 1.x. However, the docker push command accepted a repository name in the 1.x branch, but as far as I can see it no longer allows this argument in the 2.x branch. So we removed this argument, and now the following happens:

  1. When we call jf docker push ... the image is pushed and a warning is emitted:

    12:41:11 [Warn] Failed to collect build-info, couldn't find image ".../foobar:latest" in Artifactory

    This warning seems to be caused by the following code: d45734b#diff-e2f77381b07f59446c81eef9ee262f955593314bd6328e0ed5fa4f8e350c0079R55

  2. The docker image is correctly pushed, but the build-info contains no modules. When we then try to promote the build to the stable stage, the image layers remain in the dev stage.

I am not sure how we should fix this, or if this is even intentional behavior of the JFrog CLI. The problem, as far as I understand, seems to come from the fact that the repository is now determined using a call to the REST API, and this API seems to return the docker-stable repository, since that one appears first in the order of the virtual repository and already contains an older image tag, but the hash of that image does not match the one we just pushed. The result is that we are not attaching any build-info to the newly pushed image tag.

Current behavior

As described above, the build-info metadata is not correctly attached to the image tag and it is not correctly promoted to the stable docker repository stage as a result.

Reproduction steps

No response

Expected behavior

The image correctly receives the build-info metadata.

JFrog CLI-Core version

2.28.0

JFrog CLI version (if applicable)

2.28.1

Operating system type and version

Linux

JFrog Artifactory version

7.49.10

JFrog Xray version

No response

Support for "docker buildx bake" command with 'jf rt build-docker-create'

While you can use docker buildx build, specify a tag on the command line, push the image, and then use the jf rt build-docker-create command to register the resulting image, this does not work with the build-metadata file generated by the docker buildx bake command.

Simply create a docker-bake.hcl file like the following:

group "linux" {
  targets = [
    "java-builder-ubuntu-jammy-java-11",
    "java-builder-ubuntu-jammy-java-17",
  ]
}

variable "REGISTRY" {
  default = "INSERT YOUR ARTIFACTORY URL HERE"
}

variable "REPOSITORY" {
  default = "INSERT YOUR REPOSITORY HERE"
}

target "ubuntu-jammy-java-11" {
  dockerfile = "Dockerfile"
  args = {
     JAVA_VERSION="11"
     VERSION="jammy"
  }
}

target "ubuntu-jammy-java-17" {
  dockerfile = "Dockerfile"
  args = {
     JAVA_VERSION="17"
     VERSION="jammy"
  }
}

target "java-builder-ubuntu-jammy-java-11" {
  dockerfile = "Dockerfile.java"
  contexts = {
    base-image = "target:ubuntu-jammy-java-11"
  }
  tags = [
    "${REGISTRY}/${REPOSITORY}/java-builder:java-11-latest",
    "${REGISTRY}/${REPOSITORY}/java-builder:linux-11-latest",
    "${REGISTRY}/${REPOSITORY}/java-builder:ubuntu-java-11-latest",
  ]
  platforms = ["linux/amd64"]
}

target "java-builder-ubuntu-jammy-java-17" {
  dockerfile = "Dockerfile.java"
  contexts = {
    base-image = "target:ubuntu-jammy-java-17"
  }
  tags = [
    "${REGISTRY}/${REPOSITORY}/java-builder:java-17-latest",
    "${REGISTRY}/${REPOSITORY}/java-builder:linux-latest",
    "${REGISTRY}/${REPOSITORY}/java-builder:linux-17-latest",
    "${REGISTRY}/${REPOSITORY}/java-builder:ubuntu-java-latest",
    "${REGISTRY}/${REPOSITORY}/java-builder:ubuntu-java-17-latest",
    "${REGISTRY}/${REPOSITORY}/java-builder:latest"
  ]
  platforms = ["linux/amd64"]
}

Then create two Dockerfile called Dockerfile and Dockerfile.java

Dockerfile

ARG VERSION=jammy
FROM ubuntu:${VERSION}

# Get basic tools to create image
ARG DEBIAN_FRONTEND=noninteractive
RUN    apt-get update \
	&& apt-get install -y gnupg curl java-common procps tzdata locales apt-utils lsb-release \
	&& apt-get -y clean \
	&& rm -rf /var/lib/apt/lists/*

# Setup the timezone and reconfigure locale
ENV TZ=America/Los_Angeles
ENV LANG='en_US.UTF-8' LANGUAGE='en_US:en' LC_ALL='en_US.UTF-8'
RUN locale-gen en_US en_US.UTF-8
RUN dpkg-reconfigure locales 

Dockerfile.java

FROM base-image

ARG JAVA_VERSION=17
RUN curl -fsSL https://apt.corretto.aws/corretto.key | gpg --dearmor | tee /usr/share/keyrings/amazon-keyring.gpg > /dev/null \
    && echo "deb [signed-by=/usr/share/keyrings/amazon-keyring.gpg] https://apt.corretto.aws stable main" | tee /etc/apt/sources.list.d/corretto.list > /dev/null \
	&& apt-get update \
	&& apt-get install -y java-$JAVA_VERSION-amazon-corretto-jdk \
	&& apt-get -y clean \
	&& rm -rf /var/lib/apt/lists/* /usr/share/keyrings/amazon-keyring.gpg /etc/apt/sources.list.d/corretto.list

ENV JAVA_HOME="/usr/lib/jvm/java-${JAVA_VERSION}-amazon-corretto"
ENV PATH="${JAVA_HOME}/bin:${PATH}"

Now run the docker bake command:
docker buildx bake --file docker-bake.hcl --metadata-file=build-metadata --push linux

Trying to run jf rt build-docker-create <repository> --server-id=<server-id> --image-file build-metadata --build-name MyBuild --build-number 1 fails, complaining about the expected image sha256 sum format.

Looking a bit further, the error message
unexpected file format "build-metadata". The file should include one line in the following format: image-tag@sha256

appears in the jfrog-cli-core project in artifactory/utils/container/buildinfo.go, line 165. My guess is that this command doesn't take into account that the build-metadata can be nested another sub-level deep, e.g. wrapped with the "target" { ... } of the image. And even if it did, the list of tags inside the image.name property is a comma-separated list.

When we build multiple containers, docker buildx bake offers a contextual "inheritance", so to speak, where you can use FROM base-image inside a secondary Dockerfile to model an "extends" without requiring that image to be tagged first. Also, being able to create multiple containers in one go is extremely useful to avoid the overhead of building each container by itself.

Maybe in the short term some magic using jq could help create multiple files on the fly by massaging the content of the metadata file and then adding it, but it isn't apparent that multiple sequential executions of build-docker-create will append to each other, so who knows... Rewriting my entire Jenkinsfile to support this seems silly at this point.
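For illustration only, a rough sketch (based on the structure described above: targets wrapping an image.name comma-separated tag list plus a containerimage.digest) of flattening such a metadata file into one image-tag@sha256 line per tag, the single-image format the command currently expects:

package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// flatten turns a buildx bake metadata document into one "tag@sha256:..."
// line per tag. Hypothetical helper, not part of jfrog-cli-core.
func flatten(metadata []byte) ([]string, error) {
	var targets map[string]struct {
		ImageName string `json:"image.name"`
		Digest    string `json:"containerimage.digest"`
	}
	if err := json.Unmarshal(metadata, &targets); err != nil {
		return nil, err
	}
	var lines []string
	for _, t := range targets {
		for _, tag := range strings.Split(t.ImageName, ",") {
			if tag = strings.TrimSpace(tag); tag != "" {
				lines = append(lines, tag+"@"+t.Digest)
			}
		}
	}
	return lines, nil
}

func main() {
	// Trimmed, hypothetical metadata content with only the relevant fields.
	metadata := []byte(`{
	  "java-builder-ubuntu-jammy-java-11": {
	    "image.name": "registry.example.com/repo/java-builder:java-11-latest,registry.example.com/repo/java-builder:linux-11-latest",
	    "containerimage.digest": "sha256:0000000000000000000000000000000000000000000000000000000000000000"
	  }
	}`)
	lines, err := flatten(metadata)
	if err != nil {
		panic(err)
	}
	fmt.Println(strings.Join(lines, "\n"))
}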

Please consider this feature request soon.

Deploying Npm Package - (jf-cli 2.55.0) Extracting info from npm package fails due to wrong path supplied

Describe the bug

Summary

We currently experience some issues with the "Extracting info" step in the npm publish use case.

At the step where "Extracting info from npm package" debug messages occurs.
The file-path to the npm tarball is supplied is in a strange format (not the path to the tarball), which results in the whole npm publish operation failing.

jf-cli 2.37.3: (Last known working state)
12:49:30 [Debug] Extracting info from npm package: /home/vsts/work/1/s/my-scope-my-package-1.0.1.tgz

jf-cli 2.55.0: (Tested with latest available)
12:43:32 [Debug] Extracting info from npm package: /home/vsts/work/1/s/> @my-scope/[email protected] prepack

Just downgrading the JFrog CLI tool to 2.37.3 temporarily addressed our issue; no other adjustments were made in our build environment.

Background:

The repository/project is based on a common CLI framework called oclif (https://oclif.io/).

Additional info:

Node: /opt/hostedtoolcache/node/20.12.1/x64/bin

package.json

"scripts": {
...
  "prepack": "oclif manifest && oclif readme",
  "postpack": "shx rm -f oclif.manifest.json",
...
},

Let me know if you need any further details to narrow down the root cause.

Current behavior

Environment

CI/CD: Azure DevOps
Build Agent: https://github.com/actions/runner-images/blob/main/images/ubuntu/Ubuntu2204-Readme.md

Logs (Using jf-cli 2.55.0)

Starting: Pack and Publish
==============================================================================
Task         : JFrog npm
Description  : Install, pack and publish npm packages from and to Artifactory while allowing to collect build-info. The collected build-info can be later published to Artifactory by the "JFrog Publish Build Info" task.
Version      : 1.9.4
Author       : JFrog
Help         : [More Information](https://github.com/jfrog/jfrog-azure-devops-extension#JFrog-npm-Task)
==============================================================================
Found tool in cache: jf 2.55.0 x64
Running jfrog-cli from /opt/hostedtoolcache/jf/2.55.0/x64/jf
JFrog CLI version: 2.55.0
Executing JFrog CLI Command:

/opt/hostedtoolcache/jf/2.55.0/x64/jf c add "my-package_20240410.10_npmpack and publish_deployer_1712753009208" --artifactory-url="https://myinstance.jfrog.io/artifactory" --interactive=false --access-token-stdi
12:43:29 [Debug] JFrog CLI version: 2.55.0
12:43:29 [Debug] OS/Arch: linux/amd64
12:43:29 [Debug] Using access-token provided via Stdin
12:43:29 [Debug] Locking config file to run config AddOrEdit command.
12:43:29 [Debug] Creating lock in: /home/vsts/.jfrog/locks/config
12:43:29 [Debug] Releasing lock: /home/vsts/.jfrog/locks/config/jfrog-cli.conf.lck.2771.1712753009219756408
12:43:29 [Debug] Config AddOrEdit command completed successfully. config file is released.
Executing JFrog CLI Command:
/opt/hostedtoolcache/jf/2.55.0/x64/jf c use "my-package_20240410.10_npmpack and publish_deployer_1712753009208"
12:43:29 [Debug] JFrog CLI version: 2.55.0
12:43:29 [Debug] OS/Arch: linux/amd64
12:43:29 [Debug] Locking config file to run config Use command.
12:43:29 [Debug] Creating lock in: /home/vsts/.jfrog/locks/config
12:43:29 [Info] Using server ID 'my-package_20240410.10_npmpack and publish_deployer_1712753009208' (https://myinstance.jfrog.io/artifactory/)
12:43:29 [Debug] Releasing lock: /home/vsts/.jfrog/locks/config/jfrog-cli.conf.lck.2778.1712753009233109224
12:43:29 [Debug] Config Use command completed successfully. config file is released.
Executing JFrog CLI Command:
/opt/hostedtoolcache/jf/2.55.0/x64/jf npmc --server-id-deploy="my-package_20240410.10_npmpack and publish_deployer_1712753009208" --repo-deploy="npm-local"
12:43:29 [Debug] JFrog CLI version: 2.55.0
12:43:29 [Debug] OS/Arch: linux/amd64
12:43:29 [Info] npm build config successfully created.
Executing JFrog CLI Command:
/opt/hostedtoolcache/jf/2.55.0/x64/jf npm p
12:43:29 [Debug] JFrog CLI version: 2.55.0
12:43:29 [Debug] OS/Arch: linux/amd64
12:43:29 [Debug] Using npm executable: /opt/hostedtoolcache/node/20.12.1/x64/bin/npm
12:43:29 [Debug] Running 'npm --version' command.
12:43:29 [Debug] npm '--version' standard output is:
10.5.0
12:43:29 [Debug] Preparing to read the config file /home/vsts/work/1/s/.jfrog/projects/npm.yaml
12:43:29 [Debug] Found deployer in the config file /home/vsts/work/1/s/.jfrog/projects/npm.yaml
12:43:29 [Info] Running npm Publish
12:43:29 [Debug] Working directory set to: /home/vsts/work/1/s
12:43:29 [Debug] Reading Package Json.
12:43:29 [Debug] Usage Report: Sending info...
12:43:29 [Debug] Sending HTTP GET request to: https://myinstance.jfrog.io/artifactory/api/repositories/npm-local
12:43:29 [Debug] Sending HTTP GET request to: https://myinstance.jfrog.io/artifactory/api/system/version
12:43:29 [Debug] Setting Package Info.
12:43:29 [Debug] Creating npm package.
12:43:29 [Debug] Artifactory response: 200
12:43:29 [Debug] JFrog Artifactory version is: 7.83.1
12:43:29 [Debug] Sending HTTP POST request to: https://myinstance.jfrog.io/artifactory/api/system/usage
npm notice
npm notice 📦  @my-scope/[email protected]
npm notice === Tarball Contents ===
...
npm notice 1.9kB package.json
npm notice === Tarball Details ===
npm notice name:          @my-scope/my-package
npm notice version:       1.0.1
npm notice filename:      my-scope-my-package-1.0.1.tgz
npm notice package size:  4.1 kB
npm notice unpacked size: 13.5 kB
npm notice shasum:        a191d3bbade3dc56b8127c44afdfa7a1a4748241
npm notice integrity:     sha512-R/CLREn4N9nnJ[...]0WQdnFc1pDrcA==
npm notice total files:   26
npm notice
12:43:32 [Debug] Deploying npm package.
12:43:32 [Debug] Extracting info from npm package: /home/vsts/work/1/s/> @my-scope/[email protected] prepack
12:43:32 [Error] open /home/vsts/work/1/s/> @my-scope/[email protected] prepack: no such file or directory
{
  "status": "failure",
  "totals": {
    "success": 0,
    "failure": 0
  }
}
remove /home/vsts/work/1/s/> @my-scope/[email protected] prepack: no such file or directory
##[error]Error: Command failed: /opt/hostedtoolcache/jf/2.55.0/x64/jf npm p
Executing JFrog CLI Command:
/opt/hostedtoolcache/jf/2.55.0/x64/jf c remove "my-package_20240410.10_npmpack and publish_deployer_1712753009208" --quiet
12:43:32 [Debug] JFrog CLI version: 2.55.0
12:43:32 [Debug] OS/Arch: linux/amd64
12:43:32 [Debug] Locking config file to run config Delete command.
12:43:32 [Debug] Creating lock in: /home/vsts/.jfrog/locks/config
12:43:32 [Debug] Releasing lock: /home/vsts/.jfrog/locks/config/jfrog-cli.conf.lck.2867.1712753012030718308
12:43:32 [Debug] Config Delete command completed successfully. config file is released.

Reproduction steps

No response

Expected behavior

No response

JFrog CLI-Core version

2.50.0

JFrog CLI version (if applicable)

2.55.0

Operating system type and version

linux/amd64

JFrog Artifactory version

7.83.1

JFrog Xray version

No response

When running docker scan in a folder containing .jfrog/jfrog-apps-config.yml the wrong folder is passed to JAS scanner

Describe the bug

Stumbled upon this when working on #1035.

The file was added here and broke our IDEs when working on the repo: https://github.com/jfrog/jfrog-cli/blob/dev/.jfrog/jfrog-apps-config.yml

The issue is that if you run docker scan in a folder that has .jfrog/jfrog-apps-config.yml, the folder is passed to the JAS scan instead of the docker container.

Current behavior

Take a look at:
https://github.com/jfrog/jfrog-cli-core/blame/dev/xray/commands/audit/jas/common.go#L73

func createJFrogAppsConfig(workingDirs []string) (*jfrogappsconfig.JFrogAppsConfig, error) {
	if jfrogAppsConfig, err := jfrogappsconfig.LoadConfigIfExist(); err != nil {
		return nil, errorutils.CheckError(err)
	} else if jfrogAppsConfig != nil {
		// jfrog-apps-config.yml exist in the workspace
		return jfrogAppsConfig, nil // RETURN WITHOUT TAKING IN TO ACCOUNT workingDirs IN DOCKER SCAN
	}

	// jfrog-apps-config.yml does not exist in the workspace
	fullPathsWorkingDirs, err := coreutils.GetFullPathsWorkingDirs(workingDirs)
	if err != nil {
		return nil, err
	}
	jfrogAppsConfig := new(jfrogappsconfig.JFrogAppsConfig)
	for _, workingDir := range fullPathsWorkingDirs {
		jfrogAppsConfig.Modules = append(jfrogAppsConfig.Modules, jfrogappsconfig.Module{SourceRoot: workingDir})
	}
	return jfrogAppsConfig, nil
}

workingDirs is not taken into account if the config file exists. In the case of docker scan, it is not the current directory that is passed but a docker .tar file.

Reproduction steps

Run jf docker scan [container] in the jfrog-cli project

Expected behavior

The container should be scanned, i.e. the YAML config passed on should include the docker tar file.

JFrog CLI-Core version

dev

JFrog CLI version (if applicable)

dev

Operating system type and version

OS X 14

JFrog Artifactory version

No response

JFrog Xray version

No response

Workaround

Run in a different folder the docker scan

error while getting docker repository name. Artifactory response: 403 Forbidden

Describe the bug

jf docker push returns 403 when a blocking policy is configured.
I guess this is an Accept header issue in https://github.com/jfrog/jfrog-cli-core/blame/39b06f70b0887855cf8fc692a4935da014e23fe8/artifactory/utils/container/image.go#L135

Current behavior

jf docker push failed.
$ jf docker push xx/docker-local/test:v1
...
v1: digest: sha256:e7f0bc939c6fdc86f737267b3d52412401e82c711251f70a02a6bc5bab509477 size: 1776
[🚨Error] error while getting docker repository name. Artifactory response: 403 Forbidden

Reproduction steps

$ create a Dockerfile
FROM nginx:1.21
RUN ls
$ docker build -t xx/docker-local/test:v1 .
$ jf c add xx
$ docker login xx
// create a policy to block download and block unscanned artifacts.
$ jf docker push xx/docker-local/test:v1
...
v1: digest: sha256:e7f0bc939c6fdc86f737267b3d52412401e82c711251f70a02a6bc5bab509477 size: 1776
[🚨Error] error while getting docker repository name. Artifactory response: 403 Forbidden

Expected behavior

jf docker push should return 200.

JFrog CLI-Core version

latest

JFrog CLI version (if applicable)

2.52.6

Operating system type and version

Mac/Linux

JFrog Artifactory version

7.x

JFrog Xray version

3.x

`git-lfs-clean` deletes objects still being referenced

Describe the bug

Running jf git-lfs-clean removes objects which are still referenced from reachable git history.

When a git-LFS-stored file is committed to git and then removed in a subsequent commit and the commits are reachable from a git ref, the referenced git-LFS object must not be deleted.

Current behavior

When running jf git-lfs-clean, the historic file is cleaned from the repository, leading to dangling git-LFS pointer(s) in the git repository, with git lfs fetch --all failing subsequently.

Reproduction steps

I use a script on our corporate network with the corporate service instance:

#!/usr/bin/env bash

set -o errexit
set -o nounset
set -o xtrace

# create "remote" git repo
mkdir --parents git-repo
(cd git-repo
        git init --bare
)

# create "local" git repo
mkdir --parents code
cd code
git init

# create a tracked local/remote branch with a root commit
git commit --allow-empty --message='Add empty commit'
git remote add origin ../git-repo
git push --set-upstream origin master

# set up git LFS
git lfs install --local
git config \
        --file .lfsconfig \
        --add lfs.url \
        'ssh://werner.rtf.siemens.net:1339/artifactory/hmi_fw_bootlair-release-lfs-egll'
git add .lfsconfig
git commit --message='Configure git LFS on JFrog Artifactory'

git lfs track '*.bin'
git add .gitattributes
git commit --message='Track binaries in git LFS'

# use a local branch which is temporary for us
git checkout -b removed-file

# add a file which is created and then removed
echo jfrog > 5.bin
git add 5.bin
git commit --message='Add 5.bin'

# push the file to both git and git LFS
git push --set-upstream origin removed-file

# remove the file locally
git rm 5.bin
git commit --message='Remove 5.bin'
git push

# delete the local branch
git checkout master
git branch -d removed-file

# get JFrog-CLI
curl \
        --location \
        --output jfrog-cli \
        'https://releases.jfrog.io/artifactory/jfrog-cli/v2-jf/\[RELEASE\]/jfrog-cli-linux-amd64/jf'
chmod +x jfrog-cli

export CI=true
export JFROG_CLI_LOG_LEVEL=DEBUG
export JFROG_CLI_REPORT_USAGE=false

./jfrog-cli rt git-lfs-clean \
        --refs 'refs/remotes/*,refs/tags/*' \
        --repo hmi_fw_bootlair-release-lfs-egll \
        --url ssh://werner.rtf.siemens.net:1339

rm --force --recursive .git/lfs/objects/*
git lfs fetch --all

The output:

❯ ./create-git-repo
+ mkdir --parents git-repo
+ cd git-repo
+ git init --bare
Initialized empty Git repository in /home/user/.local/tmp/git-lfs-issue/git-repo/
+ mkdir --parents code
+ cd code
+ git init
Initialized empty Git repository in /home/user/.local/tmp/git-lfs-issue/code/.git/
+ git commit --allow-empty '--message=Add empty commit'
[master (root-commit) 5d67c76] Add empty commit
+ git remote add origin ../git-repo
+ git push --set-upstream origin master
Enumerating objects: 2, done.
Counting objects: 100% (2/2), done.
Writing objects: 100% (2/2), 170 bytes | 170.00 KiB/s, done.
Total 2 (delta 0), reused 0 (delta 0), pack-reused 0
To ../git-repo
 * [new branch]      master -> master
Branch 'master' set up to track remote branch 'master' from 'origin'.
+ git lfs install --local
Updated Git hooks.
Git LFS initialized.
+ git config --file .lfsconfig --add lfs.url ssh://werner.rtf.siemens.net:1339/artifactory/hmi_fw_bootlair-release-lfs-egll
+ git add .lfsconfig
+ git commit '--message=Configure git LFS on JFrog Artifactory'
[master eceade3] Configure git LFS on JFrog Artifactory
 1 file changed, 2 insertions(+)
 create mode 100644 .lfsconfig
+ git lfs track '*.bin'
Tracking "*.bin"
+ git add .gitattributes
+ git commit '--message=Track binaries in git LFS'
[master 2faff37] Track binaries in git LFS
 1 file changed, 1 insertion(+)
 create mode 100644 .gitattributes
+ git checkout -b removed-file
Switched to a new branch 'removed-file'
+ echo jfrog
+ git add 5.bin
+ git commit '--message=Add 5.bin'
[removed-file 4918ea8] Add 5.bin
 1 file changed, 3 insertions(+)
 create mode 100644 5.bin
+ git push --set-upstream origin removed-file
Uploading LFS objects: 100% (1/1), 6 B | 0 B/s, done.
Enumerating objects: 10, done.
Counting objects: 100% (10/10), done.
Delta compression using up to 12 threads
Compressing objects: 100% (7/7), done.
Writing objects: 100% (9/9), 977 bytes | 325.00 KiB/s, done.
Total 9 (delta 1), reused 0 (delta 0), pack-reused 0
To ../git-repo
 * [new branch]      removed-file -> removed-file
Branch 'removed-file' set up to track remote branch 'removed-file' from 'origin'.
+ git rm 5.bin
rm '5.bin'
+ git commit '--message=Remove 5.bin'
[removed-file 76bc443] Remove 5.bin
 1 file changed, 3 deletions(-)
 delete mode 100644 5.bin
+ git push
Enumerating objects: 3, done.
Counting objects: 100% (3/3), done.
Delta compression using up to 12 threads
Compressing objects: 100% (2/2), done.
Writing objects: 100% (2/2), 222 bytes | 222.00 KiB/s, done.
Total 2 (delta 1), reused 0 (delta 0), pack-reused 0
To ../git-repo
   4918ea8..76bc443  removed-file -> removed-file
+ git checkout master
Switched to branch 'master'
Your branch is ahead of 'origin/master' by 2 commits.
  (use "git push" to publish your local commits)
+ git branch -d removed-file
warning: deleting branch 'removed-file' that has been merged to
         'refs/remotes/origin/removed-file', but not yet merged to HEAD.
Deleted branch removed-file (was 76bc443).
+ curl --location --output jfrog-cli 'https://releases.jfrog.io/artifactory/jfrog-cli/v2-jf/\[RELEASE\]/jfrog-cli-linux-amd64/jf'
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 23.2M  100 23.2M    0     0  16.7M      0  0:00:01  0:00:01 --:--:-- 78.1M
+ chmod +x jfrog-cli
+ export CI=true
+ CI=true
+ export JFROG_CLI_LOG_LEVEL=DEBUG
+ JFROG_CLI_LOG_LEVEL=DEBUG
+ export JFROG_CLI_REPORT_USAGE=false
+ JFROG_CLI_REPORT_USAGE=false
+ ./jfrog-cli rt git-lfs-clean --refs 'refs/remotes/*,refs/tags/*' --repo hmi_fw_bootlair-release-lfs-egll --url ssh://werner.rtf.siemens.net:1339
09:35:42 [Debug] JFrog CLI version: 2.28.1
09:35:42 [Debug] OS/Arch: linux/amd64
09:35:42 [Debug] Locking config file to run config Clear command.
09:35:42 [Debug] Creating lock in:  /home/user/.jfrog/locks/config
09:35:42 [Debug] Releasing lock:  /home/user/.jfrog/locks/config/jfrog-cli.conf.lck.2082515.1666683342835698050
09:35:42 [Debug] Config Clear command completed successfully. config file is released.
09:35:42 [Debug] Performing SSH authentication...
09:35:42 [Debug] Trying to authenticate via SSH-Agent...
09:35:42 [Debug] Usage info is disabled.
09:35:43 [Debug] SSH authentication successful.
09:35:43 [Info] Searching files from Artifactory repository hmi_fw_bootlair-release-lfs-egll ...
09:35:43 [Debug] Searching Artifactory using AQL query:
 items.find({"$or":[{"$and":[{"repo":"hmi_fw_bootlair-release-lfs-egll","path":{"$match":"*"},"name":{"$match":"*"}}]}]}).include("name","repo","path","actual_md5","actual_sha1","sha256","size","type","modified","created")
09:35:43 [Debug] Sending HTTP POST request to: https://werner.rtf.siemens.net/artifactory/api/search/aql
09:35:43 [Debug] Artifactory response:  200 OK
09:35:43 [Debug] Streaming data to file...
09:35:43 [Debug] Finished streaming data successfully.
09:35:43 [Info] Collecting files to preserve from Git references matching the pattern refs/remotes/*,refs/tags/* ...
09:35:43 [Debug] Opened Git repo at /home/user/.local/tmp/git-lfs-issue/code for reading
09:35:43 [Debug] Checking ref refs/heads/master
09:35:43 [Debug] Checking ref refs/remotes/origin/master
09:35:43 [Debug] Checking ref refs/remotes/origin/removed-file
09:35:43 [Info] Found 0 files to keep, and 1 to clean
09:35:43 [Info] Deleting 1 files from hmi_fw_bootlair-release-lfs-egll ...
09:35:43 [Debug] Performing SSH authentication...
09:35:43 [Debug] Trying to authenticate via SSH-Agent...
09:35:43 [Debug] SSH authentication successful.
09:35:43 [Info] [Thread 2] Deleting hmi_fw_bootlair-release-lfs-egll/objects/15/1c/151c50753fceb8d2a8a6943c1cb382ff185fb91e88e5bc3f31f00ddf9f82c153
09:35:43 [Debug] Sending HTTP DELETE request to: https://werner.rtf.siemens.net/artifactory/hmi_fw_bootlair-release-lfs-egll/objects/15/1c/151c50753fceb8d2a8a6943c1cb382ff185fb91e88e5bc3f31f00ddf9f82c153
09:35:43 [Debug] Deleted 1 artifacts.
+ rm --force --recursive .git/lfs/objects/15
+ git lfs fetch --all
fetch: 1 object found, done.
fetch: Fetching all references...
[151c50753fceb8d2a8a6943c1cb382ff185fb91e88e5bc3f31f00ddf9f82c153] Object does not exist: [404] Object does not exist
error: failed to fetch some objects from 'https://werner.rtf.siemens.net/artifactory/hmi_fw_bootlair-release-lfs-egll'

Expected behavior

Historic objects still referenced are kept during cleanup. In the example above, the object is kept.
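A minimal go-git sketch of the keep-set computation the reporter expects, assuming git-LFS pointers follow the standard v1 pointer format; the key point is walking the full ancestry of every ref rather than only the ref tips (package layout and helper name are illustrative, not the CLI's implementation):

package main

import (
	"bufio"
	"fmt"
	"strings"

	"github.com/go-git/go-git/v5"
	"github.com/go-git/go-git/v5/plumbing"
	"github.com/go-git/go-git/v5/plumbing/object"
)

// collectReachableLFSOIDs returns the set of git-LFS object IDs referenced by
// any commit reachable from any ref, not just by the ref tips.
func collectReachableLFSOIDs(path string) (map[string]bool, error) {
	repo, err := git.PlainOpen(path)
	if err != nil {
		return nil, err
	}
	keep := map[string]bool{}
	visited := map[plumbing.Hash]bool{}

	refs, err := repo.References()
	if err != nil {
		return nil, err
	}
	err = refs.ForEach(func(ref *plumbing.Reference) error {
		if ref.Type() != plumbing.HashReference {
			return nil
		}
		commit, err := repo.CommitObject(ref.Hash())
		if err != nil {
			return nil // ref does not point at a commit; annotated tags are skipped in this sketch
		}
		// Walk the whole ancestry of this ref; commits already seen from earlier refs are skipped.
		iter := object.NewCommitPreorderIter(commit, visited, nil)
		return iter.ForEach(func(c *object.Commit) error {
			visited[c.Hash] = true
			files, err := c.Files()
			if err != nil {
				return err
			}
			return files.ForEach(func(f *object.File) error {
				if f.Size > 1024 { // real LFS pointers are tiny
					return nil
				}
				contents, err := f.Contents()
				if err != nil || !strings.HasPrefix(contents, "version https://git-lfs.github.com/spec/v1") {
					return nil // not an LFS pointer
				}
				scanner := bufio.NewScanner(strings.NewReader(contents))
				for scanner.Scan() {
					line := scanner.Text()
					if strings.HasPrefix(line, "oid sha256:") {
						keep[strings.TrimPrefix(line, "oid sha256:")] = true
					}
				}
				return nil
			})
		})
	})
	return keep, err
}

func main() {
	oids, err := collectReachableLFSOIDs(".")
	if err != nil {
		panic(err)
	}
	fmt.Println(len(oids), "LFS objects must be preserved")
}

In the reproduction above, such a walk would visit the "Add 5.bin" commit reachable from refs/remotes/origin/removed-file and therefore keep object 151c50753fceb8d2a8a6943c1cb382ff185fb91e88e5bc3f31f00ddf9f82c153, which the current cleanup deletes.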

JFrog CLI-Core version

2.23.1

JFrog CLI version (if applicable)

2.28.1

Operating system type and version

Debian Linux 11

JFrog Artifactory version

No response

JFrog Xray version

No response

sarif output appears to be duplicated and invalid

Describe the bug

Output from jf build number --vuln=true --fail=true --server-id "server" --format sarif does not validate when uploaded to https://sarifweb.azurewebsites.net/Validation

The actual output is duplicated and contains invalid elements.

Current behavior

This is the output from the above command.

{
  "version": "2.1.0",
  "$schema": "https://json.schemastore.org/sarif-2.1.0-rtm.5.json",
  "runs": [
    {
      "tool": {
        "driver": {
          "informationUri": "https://jfrog.com/xray/",
          "name": "JFrog Xray",
          "rules": [
            {
              "id": "XRAY-260082",
              "shortDescription": null,
              "help": {
                "markdown": ".NET and Visual Studio Denial of Service Vulnerability. This CVE ID is unique from CVE-2022-23267, CVE-2022-29145."
              },
              "properties": {
                "security-severity": "7.5"
              }
            }
          ]
        }
      },
      "results": [
        {
          "ruleId": "XRAY-260082",
          "ruleIndex": 0,
          "message": {
            "text": "[CVE-2022-29117] Upgrade microsoft.owin:4.2.0 to [4.2.2]"
          },
          "locations": [
            {
              "physicalLocation": {
                "artifactLocation": {
                  "uri": " Package Descriptor"
                }
              }
            }
          ]
        },
        {
          "ruleId": "XRAY-260082",
          "ruleIndex": 0,
          "message": {
            "text": "[CVE-2022-29117] Upgrade microsoft.owin:4.2.0 to [4.2.2]"
          },
          "locations": [
            {
              "physicalLocation": {
                "artifactLocation": {
                  "uri": " Package Descriptor"
                }
              }
            }
          ]
        }
      ]
    }
  ]
}
{
  "version": "2.1.0",
  "$schema": "https://json.schemastore.org/sarif-2.1.0-rtm.5.json",
  "runs": [
    {
      "tool": {
        "driver": {
          "informationUri": "https://jfrog.com/xray/",
          "name": "JFrog Xray",
          "rules": [
            {
              "id": "XRAY-260082",
              "shortDescription": null,
              "help": {
                "markdown": ".NET and Visual Studio Denial of Service Vulnerability. This CVE ID is unique from CVE-2022-23267, CVE-2022-29145."
              },
              "properties": {
                "security-severity": "7.5"
              }
            },
            {
              "id": "XRAY-138885",
              "shortDescription": null,
              "help": {
                "markdown": "Newtonsoft Json.NET (Newtonsoft.Json) JSON Deserialization Nested Object Recursion Handling Stack Exhaustion DoS Weakness"
              },
              "properties": {
                "security-severity": "0.0"
              }
            }
          ]
        }
      },
      "results": [
        {
          "ruleId": "XRAY-260082",
          "ruleIndex": 0,
          "message": {
            "text": "[CVE-2022-29117] Upgrade microsoft.owin:4.2.0 to [4.2.2]"
          },
          "locations": [
            {
              "physicalLocation": {
                "artifactLocation": {
                  "uri": " Package Descriptor"
                }
              }
            }
          ]
        },
        {
          "ruleId": "XRAY-138885",
          "ruleIndex": 1,
          "message": {
            "text": "[XRAY-138885] Upgrade newtonsoft.json:12.0.1 to [13.0.1]"
          },
          "locations": [
            {
              "physicalLocation": {
                "artifactLocation": {
                  "uri": " Package Descriptor"
                }
              }
            }
          ]
        }
      ]
    }
  ]
}

No, I didn't paste twice - the output really is duplicated (such that it is not even valid JSON!).

After removing the duplication, the output still contains errors according to the online validator (a sketch addressing them follows the list).

  • runs[0].tool.driver.rules[0].shortDescription: The schema requires one of the types [Object], but a token of type 'String' was found
  • runs[0].tool.driver.rules[0].help: The required property 'text' is missing.
  • runs[0].tool.driver: The tool 'JFrog Xray' does not provide any of the version-related properties 'version', 'semanticVersion', 'dottedQuadFileVersion'. Providing version information enables the log file consumer to determine whether the file was produced by an up to date version, and to avoid accidentally comparing log files produced by different tool versions
  • runs[0].results[0].locations[0].physicalLocation.artifactLocation.uri: The string ' Package Descriptor' is not a valid URI reference. URIs must conform to RFC 3986.
  • runs[0].results[1].locations[0].physicalLocation.artifactLocation.uri: The string ' Package Descriptor' is not a valid URI reference. URIs must conform to RFC 3986.
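As a sketch of what the validator is asking for (hypothetical struct names, not the CLI's actual types): shortDescription and help rendered as message objects carrying a text property, plus a version field on the driver.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// message mirrors the SARIF message / multiformatMessageString shape.
type message struct {
	Text     string `json:"text,omitempty"`
	Markdown string `json:"markdown,omitempty"`
}

type rule struct {
	ID               string            `json:"id"`
	ShortDescription *message          `json:"shortDescription,omitempty"` // an object, never a bare string or null
	Help             *message          `json:"help,omitempty"`             // must carry "text" as well as "markdown"
	Properties       map[string]string `json:"properties,omitempty"`
}

type driver struct {
	Name           string `json:"name"`
	InformationURI string `json:"informationUri"`
	Version        string `json:"version"` // one of the version-related properties the validator expects
	Rules          []rule `json:"rules"`
}

func main() {
	d := driver{
		Name:           "JFrog Xray",
		InformationURI: "https://jfrog.com/xray/",
		Version:        "0.0.0", // placeholder
		Rules: []rule{{
			ID:               "XRAY-260082",
			ShortDescription: &message{Text: ".NET and Visual Studio Denial of Service Vulnerability"},
			Help: &message{
				Text:     ".NET and Visual Studio Denial of Service Vulnerability.",
				Markdown: ".NET and Visual Studio Denial of Service Vulnerability.",
			},
			Properties: map[string]string{"security-severity": "7.5"},
		}},
	}
	if err := json.NewEncoder(os.Stdout).Encode(&d); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}

The " Package Descriptor" location would additionally need to be a valid RFC 3986 URI reference, or be omitted.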

Reproduction steps

Ran command in description against a .NET project with a vulnerable NuGet package.

Expected behavior

Valid SARIF output.

JFrog CLI-Core version

2.34.6

JFrog CLI version (if applicable)

2.34.6

Operating system type and version

Windows 2019

JFrog Artifactory version

Current hosted version

JFrog Xray version

Current hosted version

Why is this library so poorly documented?

Not having documentation for this library makes it extremely hard to use when developing CLI plugins, since I don't know what functionality there even is to use. This means I either spend an extreme amount of time figuring out this library, or I spend that time re-implementing code that might already exist here. Saying "Feel free to explore the jfrog-cli-core code base, and use it as part of your plugin." in the CLI plugin developer guide just doesn't cut it.

So, why is there so little documentation for this module when it is meant to be reused in CLI plugins or other CLI programs that interact with JFrog products?

Transfer files panic in some cases when changing the number of working threads

Describe the bug
A panic occurs in some cases when running with 1 working thread and then changing to 1024.

To Reproduce
Run transfer files with 1 working thread. After a while change the number of working threads to 1024.
The panic occurs at the end of phase 1 and at the start of phase 2.

Screenshots
A screenshot of the panic stack trace is attached to the original issue.

Versions

  • JFrog CLI core version: 2.20.3
  • JFrog CLI version (if applicable): 2.25.0

jfrog audit scan command fails while finding existing packages

Hello Team,

We are running jfrog audit scans using the CLI. The scan fails for a package that is a transitive dependency (a dependency of a dependency) and is available in Artifactory; the CLI does not seem to be able to find that package. Could you please guide me on this?

Details:

16:53:38 [Info] Running SCA scan for yarn vulnerable dependencies in /azp/_work/1/s/CheckedOutSource directory...
16:53:38 [Info] Calculating Yarn dependencies...
16:53:39 [Warn] An error occurred while collecting dependencies info:
{"type":"warning","data":"Lockfile has incorrect entry for \"axios@^0.26.1\". Ignoring it."}
{"type":"error","data":"Couldn't find package \"axios@^0.26.1\" required by \"@nn-sls/core@^2.2.2\" on the \"npm\" registry."}

16:53:39 [Warn] An error was thrown while collecting dependencies info: exit status 1
Command output:
{"type":"info","data":"Visit https://yarnpkg.com/en/docs/cli/list for documentation about this command."}

A screenshot attached to the original issue shows the package available in our Artifactory instance.

big multiproject gradle builds die in ~30 minutes

Describe the bug

With 2.49.2, multi-project Gradle builds hit what appear to be ~30-minute timeouts, whereas with 2.48.0 we had no problems and check times of ~2 minutes. After correctly listing all dependencies we get this log:

09:29:28 [Info] Scanning 536 gradle dependencies...
09:29:28 [Debug] Sending HTTP POST request to: https://artifacts.chemaxon.com/xray/api/v1/scan/graph?watch=audit&scan_type=dependency
09:29:28 [Info] Waiting for scan to complete on JFrog Xray...
09:29:28 [Debug] Sending HTTP GET request to: https://artifacts.chemaxon.com/xray/api/v1/scan/graph/c0b5f901-c230-45e4-4840-074de915bb0b
09:29:28 [Debug] Get Dependencies Scan results... (Attempt 1)
09:29:33 [Debug] Sending HTTP GET request to: https://artifacts.chemaxon.com/xray/api/v1/scan/graph/c0b5f901-c230-45e4-4840-074de915bb0b
The ‘jf audit’ command also supports JFrog Advanced Security features, such as 'Contextual Analysis', 'Secret Detection', 'IaC Scan' and ‘SAST’.
This feature isn't enabled on your system. Read more - https://jfrog.com/xray/
/usr/bin/bash: line 309:    47 Killed                  jf audit --watches ${JFROG_WATCHES_AUDIT} --extended-table --exclude-test-deps

The second attempt dies after ~30 minutes every time.

Current behavior

The CLI cannot finish the audit.

Reproduction steps

No response

Expected behavior

No response

JFrog CLI-Core version

2.49.2

JFrog CLI version (if applicable)

No response

Operating system type and version

Ubuntu 22.04

JFrog Artifactory version

No response

JFrog Xray version

No response

Maven version validation fails when ANSI control sequences are present in the version output

Describe the bug

jfrog-cli-core v2.5.1 introduced validation of the version of Maven that is in use, which is being done by parsing the output of mvn --version. This validation fails if the version line, which is found by searching for a line in the output that starts with "Apache Maven", contains extra characters at the beginning of the line.

The version of Maven that I am using prints the Maven version line in bold, with ANSI control characters at the beginning (and end) of the line.

There is also a related, but secondary, issue where "minSupportedMvnVersion" is not printed correctly (see below).

Example:

user@localhost:~> jfrog config add --interactive=false --url=http://example.com --access-token=x
user@localhost:~> jfrog mvn-config --repo-resolve-releases=x --repo-resolve-snapshots=x --repo-deploy-releases=x --repo-deploy-snapshots=x
[Info] maven build config successfully created.
user@localhost:~> jfrog mvn clean
[Info] Running Mvn...
[Info] Could not get maven version, by running 'mvn --version' command. JFrog CLI mvn commands requires Maven version "+minSupportedMvnVersion+" or higher.
[Error] JFrog CLI mvn commands requires Maven version "+minSupportedMvnVersion+" or higher. The Current version is: 
user@localhost:~> mvn --version
Apache Maven 3.6.3 (SUSE 3.6.3-4.2.1)
Maven home: /usr/share/maven
Java version: 11.0.13, vendor: Oracle Corporation, runtime: /usr/lib64/jvm/java-11-openjdk-11
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "5.3.18-59.34-default", arch: "amd64", family: "unix"

mvn --version output with control characters visible:

^[[1mApache Maven 3.6.3 (SUSE 3.6.3-4.2.1)^[[m
Maven home: /usr/share/maven
Java version: 11.0.13, vendor: Oracle Corporation, runtime: /usr/lib64/jvm/java-11-openjdk-11
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "5.3.18-59.34-default", arch: "amd64", family: "unix"
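A sketch of a more tolerant parser (not the CLI's actual code): stripping ANSI SGR escape sequences before looking for the "Apache Maven" prefix lets the bold version line above be recognized.

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ansiEscape matches SGR sequences such as "\x1b[1m" and "\x1b[m".
var ansiEscape = regexp.MustCompile(`\x1b\[[0-9;]*m`)

// extractMavenVersion scans `mvn --version` output for the "Apache Maven" line
// and returns the version token, ignoring any terminal styling.
func extractMavenVersion(versionOutput string) (string, bool) {
	for _, line := range strings.Split(versionOutput, "\n") {
		clean := strings.TrimSpace(ansiEscape.ReplaceAllString(line, ""))
		if strings.HasPrefix(clean, "Apache Maven") {
			if fields := strings.Fields(clean); len(fields) >= 3 {
				return fields[2], true // e.g. "3.6.3"
			}
		}
	}
	return "", false
}

func main() {
	out := "\x1b[1mApache Maven 3.6.3 (SUSE 3.6.3-4.2.1)\x1b[m\nMaven home: /usr/share/maven\n"
	fmt.Println(extractMavenVersion(out)) // 3.6.3 true
}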

My workaround was to downgrade to JFrog CLI 2.6.1 (which uses jfrog-cli-core 2.5.0).

To Reproduce

Launch openSUSE Leap 15.3 using this image: https://download.opensuse.org/distribution/leap/15.3/appliances/openSUSE-Leap-15.3-JeOS.x86_64-15.3-OpenStack-Cloud-Build9.258.qcow2

# Install Maven
zypper install maven

# Add the JFrog CLI RPM repository
cat > jfrog-cli.repo <<EOF
[jfrog-cli]
name=jfrog-cli
baseurl=https://releases.jfrog.io/artifactory/jfrog-rpms
enabled=1
gpgcheck=0
EOF

zypper addrepo ./jfrog-cli.repo
zypper install jfrog-cli-v2

# Configure
jfrog config add --interactive=false --url=http://example.com --access-token=x
jfrog mvn-config --repo-resolve-releases=x --repo-resolve-snapshots=x --repo-deploy-releases=x --repo-deploy-snapshots=x

# Run the Maven command to observe the error
jfrog mvn clean


# (optional) Downgrade to JFrog CLI 2.6.1
zypper install --oldpackage jfrog-cli-v2=2.6.1-1

# Run the Maven command again, which will now progress
jfrog mvn clean

Expected behavior

JFrog CLI should recognize that I am using Maven version 3.1.0 or above.

Versions

  • JFrog CLI core version: 2.5.1
  • JFrog CLI version (if applicable): 2.6.2
  • Artifactory version: 7.27.10
  • Maven version: Apache Maven 3.6.3 (SUSE 3.6.3-4.2.1)

[transfer-Config] change users' realm when migrating to JFrog SaaS

Is your feature request related to a problem? Please describe.
When migrating to JFrog SaaS, I'd like the possibility to switch to a new authentication mechanism, for example from LDAP (Self Hosted instance) to SAML SSO (SaaS instance).
For the moment, the Artifactory/Access APIs do not allow changing a user's realm.

Describe the solution you'd like to see
As the command triggers an Artifactory system export, which includes the access export file, the code could modify that file, since it contains the description of every user:

    "username" : "mary",
    "firstName" : null,
    "lastName" : null,
    "email" : "[email protected]",
    "realm" : "ldap",
    "status" : "enabled",

New parameters could be added, such as --src-realm=ldap and --target-realm=saml, which would replace any occurrence of ldap with saml, as sketched below.
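A rough sketch of the proposed rewrite, assuming the access export stores users as a JSON array of objects carrying a realm field (the real export layout may differ):

package main

import (
	"encoding/json"
	"fmt"
)

// rewriteRealm replaces the realm of every user currently in srcRealm with targetRealm.
func rewriteRealm(usersJSON []byte, srcRealm, targetRealm string) ([]byte, error) {
	var users []map[string]interface{}
	if err := json.Unmarshal(usersJSON, &users); err != nil {
		return nil, err
	}
	for _, u := range users {
		if realm, ok := u["realm"].(string); ok && realm == srcRealm {
			u["realm"] = targetRealm
		}
	}
	return json.MarshalIndent(users, "", "  ")
}

func main() {
	in := []byte(`[{"username": "mary", "realm": "ldap", "status": "enabled"}]`)
	out, err := rewriteRealm(in, "ldap", "saml")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}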

Describe alternatives you've considered
The official JFrog recommendation is to ask each "ldap" user to log in to the JFrog Platform via SAML, which will either recreate their user in the target realm (e.g. saml) if the username differs from the initial (ldap) one, or override the user's realm if the login is the same for ldap and saml.

Additional context
NA
