
grype's Introduction

Grype logo


A vulnerability scanner for container images and filesystems. Easily install the binary to try it out. Works with Syft, the powerful SBOM (software bill of materials) tool for container images and filesystems.

Join our community meetings!

For commercial support options with Syft or Grype, please contact Anchore.

grype-demo

Features

  • Scan the contents of a container image or filesystem to find known vulnerabilities.
  • Find vulnerabilities for major operating system packages:
    • Alpine
    • Amazon Linux
    • BusyBox
    • CentOS
    • CBL-Mariner
    • Debian
    • Distroless
    • Oracle Linux
    • Red Hat (RHEL)
    • Ubuntu
  • Find vulnerabilities for language-specific packages:
    • Ruby (Gems)
    • Java (JAR, WAR, EAR, JPI, HPI)
    • JavaScript (NPM, Yarn)
    • Python (Egg, Wheel, Poetry, requirements.txt/setup.py files)
    • Dotnet (deps.json)
    • Golang (go.mod)
    • PHP (Composer)
    • Rust (Cargo)
  • Supports Docker, OCI and Singularity image formats.
  • OpenVEX support for filtering and augmenting scanning results.

If you encounter an issue, please let us know using the issue tracker.

Installation

Recommended

curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin

You can also choose another destination directory and release version for the installation. The destination directory doesn't need to be /usr/local/bin, it just needs to be a location found in the user's PATH and writable by the user that's installing Grype.

curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b <DESTINATION_DIR> <RELEASE_VERSION>
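For example, to install into ~/.local/bin and pin a specific release (the version tag below is illustrative; substitute any tag from the releases page):

curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b ~/.local/bin v0.74.0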

Chocolatey

The Chocolatey distribution of Grype is community-maintained and not distributed by the Anchore team.

choco install grype -y

Homebrew

brew tap anchore/grype
brew install grype

MacPorts

On macOS, Grype can additionally be installed from the community-maintained port via MacPorts:

sudo port install grype

Note: Currently, Grype is built only for macOS and Linux.

From source

See DEVELOPING.md for instructions to build and run from source.

GitHub Actions

If you're using GitHub Actions, you can simply use our Grype-based action to run vulnerability scans on your code or container images during your CI workflows.

Verifying the artifacts

Checksums are applied to all artifacts, and the resulting checksum file is signed using cosign.

You need cosign installed to verify the signature.

Verification steps are as follows:

  1. Download the files you want, and the checksums.txt, checksums.txt.pem and checksums.txt.sig files from the releases page:

  2. Verify the signature:

cosign verify-blob <path to checksums.txt> \
--certificate <path to checksums.txt.pem> \
--signature <path to checksums.txt.sig> \
--certificate-identity-regexp 'https://github\.com/anchore/grype/\.github/workflows/.+' \
--certificate-oidc-issuer "https://token.actions.githubusercontent.com"
  3. Once the signature is confirmed as valid, you can proceed to validate that the SHA256 sums align with the downloaded artifact:
sha256sum --ignore-missing -c checksums.txt

Getting started

Install the binary, and make sure that grype is available on your PATH. To scan for vulnerabilities in an image:

grype <image>

The above command scans for vulnerabilities that are visible in the container (i.e., the squashed representation of the image). To include software from all image layers in the vulnerability scan, regardless of its presence in the final image, provide --scope all-layers:

grype <image> --scope all-layers

To run grype from a Docker container so it can scan a running container, use the following command:

docker run --rm \
--volume /var/run/docker.sock:/var/run/docker.sock \
--name Grype anchore/grype:latest \
$(ImageName):$(ImageTag)

Supported sources

Grype can scan a variety of sources beyond those found in Docker.

# scan a container image archive (from the result of `docker image save ...`, `podman save ...`, or `skopeo copy` commands)
grype path/to/image.tar

# scan a Singularity Image Format (SIF) container
grype path/to/image.sif

# scan a directory
grype dir:path/to/dir

Sources can be explicitly provided with a scheme:

podman:yourrepo/yourimage:tag          use images from the Podman daemon
docker:yourrepo/yourimage:tag          use images from the Docker daemon
docker-archive:path/to/yourimage.tar   use a tarball from disk for archives created from "docker save"
oci-archive:path/to/yourimage.tar      use a tarball from disk for OCI archives (from Skopeo or otherwise)
oci-dir:path/to/yourimage              read directly from a path on disk for OCI layout directories (from Skopeo or otherwise)
singularity:path/to/yourimage.sif      read directly from a Singularity Image Format (SIF) container on disk
dir:path/to/yourproject                read directly from a path on disk (any directory)
sbom:path/to/syft.json                 read Syft JSON from path on disk
registry:yourrepo/yourimage:tag        pull image directly from a registry (no container runtime required)

If an image source is not provided and cannot be detected from the given reference, it is assumed the image should be pulled from the Docker daemon. If Docker is not present, the Podman daemon is attempted next, and finally the image registry is contacted directly.

This default behavior can be overridden with the default-image-pull-source configuration option (See Configuration for more details).
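For example, to prefer the Podman daemon for a single run, the equivalent environment variable (documented in the Configuration section below) can be set inline:

GRYPE_DEFAULT_IMAGE_PULL_SOURCE=podman grype yourrepo/yourimage:tag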

Use SBOMs for even faster vulnerability scanning in Grype:
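First generate the SBOM with Syft (a sketch; assumes a recent Syft release where -o json emits Syft JSON):

# Generate the SBOM once
syft <image> -o json > ./sbom.json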

# Then scan for new vulnerabilities as frequently as needed
grype sbom:./sbom.json

# (You can also pipe the SBOM into Grype)
cat ./sbom.json | grype

Grype supports input of Syft, SPDX, and CycloneDX SBOM formats. If Syft has generated any of these file types, they should have the appropriate information to work properly with Grype. It is also possible to use SBOMs generated by other tools with varying degrees of success. Two things that make Grype matching more successful are the inclusion of CPE and Linux distribution information. If an SBOM does not include any CPE information, it is possible to generate these based on package information using the --add-cpes-if-none flag. To specify a distribution, use the --distro <distro>:<version> flag. A full example is:

grype --add-cpes-if-none --distro alpine:3.10 sbom:some-alpine-3.10.spdx.json

Supported versions

Any version of Grype before v0.40.1 is not supported. Unsupported releases will not receive any software updates or vulnerability database updates. You can still build vulnerability databases for unsupported Grype releases by using previous releases of vunnel to gather the upstream data and grype-db to build databases for unsupported schemas.

Working with attestations

Grype supports scanning SBOMs as input via stdin. Users can use cosign to verify attestations with an SBOM as its content to scan an image for vulnerabilities:

COSIGN_EXPERIMENTAL=1 cosign verify-attestation caphill4/java-spdx-tools:latest \
| jq -r .payload \
| base64 --decode \
| jq -r .predicate.Data \
| grype

Vulnerability Summary

Basic Grype Vulnerability Data Shape

 {
  "vulnerability": {
    ...
  },
  "relatedVulnerabilities": [
    ...
  ],
  "matchDetails": [
    ...
  ],
  "artifact": {
    ...
  }
}
  • Vulnerability: All information on the specific vulnerability that was directly matched on (e.g. ID, severity, CVSS score, fix information, links for more information)
  • RelatedVulnerabilities: Information pertaining to vulnerabilities found to be related to the main reported vulnerability. Maybe the vulnerability we matched on was a GitHub Security Advisory, which has an upstream CVE (in the authoritative national vulnerability database). In these cases we list the upstream vulnerabilities here.
  • MatchDetails: This section tries to explain what we searched for while looking for a match, and exactly which details on the package and vulnerability led to a match.
  • Artifact: This is a subset of the information that we know about the package (when compared to the Syft json output, we summarize the metadata section). This has information about where within the container image or directory we found the package, what kind of package it is, licensing info, pURLs, CPEs, etc.

Excluding file paths

Grype can exclude files and paths from being scanned within a source by using glob expressions with one or more --exclude parameters:

grype <source> --exclude './out/**/*.json' --exclude /etc

Note: in the case of image scanning, since the entire filesystem is scanned it is possible to use absolute paths like /etc or /usr/**/*.txt whereas directory scans exclude files relative to the specified directory. For example: scanning /usr/foo with --exclude ./package.json would exclude /usr/foo/package.json and --exclude '**/package.json' would exclude all package.json files under /usr/foo. For directory scans, it is required to begin path expressions with ./, */, or **/, all of which will be resolved relative to the specified scan directory. Keep in mind, your shell may attempt to expand wildcards, so put those parameters in single quotes, like: '**/*.json'.

External Sources

Grype can be configured to incorporate external data sources for added fidelity in vulnerability matching. This feature is currently disabled by default. To enable this feature add the following to the grype config:

external-sources:
  enable: true
  maven:
    search-upstream-by-sha1: true
    base-url: https://search.maven.org/solrsearch/select

You can also configure the base-url if you're using another registry as your maven endpoint.

Output formats

The output format for Grype is configurable as well:

grype <image> -o <format>

Where the formats available are:

  • table: A columnar summary (default).
  • cyclonedx: An XML report conforming to the CycloneDX 1.4 specification.
  • cyclonedx-json: A JSON report conforming to the CycloneDX 1.4 specification.
  • json: Use this to get as much information out of Grype as possible!
  • sarif: Use this option to get a SARIF report (Static Analysis Results Interchange Format)
  • template: Lets the user specify the output format. See "Using templates" below.

Using templates

Grype lets you define custom output formats, using Go templates. Here's how it works:

  • Define your format as a Go template, and save this template as a file.

  • Set the output format to "template" (-o template).

  • Specify the path to the template file (-t ./path/to/custom.template).

  • Grype's template processing uses the same data models as the json output format — so if you're wondering what data is available as you author a template, you can use the output from grype <image> -o json as a reference.

Please note: Templates can access information about the system they are running on, such as environment variables. You should never run untrusted templates.

There are several example templates in the templates directory in the Grype source which can serve as a starting point for a custom output format. For example, csv.tmpl produces a vulnerability report in CSV (comma separated value) format:

"Package","Version Installed","Vulnerability ID","Severity"
"coreutils","8.30-3ubuntu2","CVE-2016-2781","Low"
"libc-bin","2.31-0ubuntu9","CVE-2016-10228","Negligible"
"libc-bin","2.31-0ubuntu9","CVE-2020-6096","Low"
...
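As a rough sketch of what such a template looks like (field names follow the grype -o json data model; verify them against your Grype version's JSON output):

"Package","Version Installed","Vulnerability ID","Severity"
{{- range .Matches}}
"{{.Artifact.Name}}","{{.Artifact.Version}}","{{.Vulnerability.ID}}","{{.Vulnerability.Severity}}"
{{- end}}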

You can also find the template for the default "table" output format in the same place.

In addition to the default Go text/template functions, Grype includes the utility templating functions from sprig, allowing users to further customize Grype's output.

Gating on severity of vulnerabilities

You can have Grype exit with an error if any vulnerabilities are reported at or above the specified severity level. This comes in handy when using Grype within a script or CI pipeline. To do this, use the --fail-on <severity> CLI flag.

For example, here's how you could trigger a CI pipeline failure if any vulnerabilities are found in the ubuntu:latest image with a severity of "medium" or higher:

grype ubuntu:latest --fail-on medium
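In a shell script or CI step, the exit code can be checked directly (a minimal sketch):

if ! grype ubuntu:latest --fail-on medium; then
  echo "vulnerabilities of medium severity or higher were found" >&2
  exit 1
fi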

Specifying matches to ignore

If you're seeing Grype report false positives or any other vulnerability matches that you just don't want to see, you can tell Grype to ignore matches by specifying one or more "ignore rules" in your Grype configuration file (e.g. ~/.grype.yaml). This causes Grype not to report any vulnerability matches that meet the criteria specified by any of your ignore rules.

Each rule can specify any combination of the following criteria:

  • vulnerability ID (e.g. "CVE-2008-4318")
  • namespace (e.g. "nvd")
  • fix state (allowed values: "fixed", "not-fixed", "wont-fix", or "unknown")
  • package name (e.g. "libcurl")
  • package version (e.g. "1.5.1")
  • package language (e.g. "python"; these values are defined here)
  • package type (e.g. "npm"; these values are defined here)
  • package location (e.g. "/usr/local/lib/node_modules/**"; supports glob patterns)

Here's an example ~/.grype.yaml that demonstrates the expected format for ignore rules:

ignore:
  # This is the full set of supported rule fields:
  - vulnerability: CVE-2008-4318
    fix-state: unknown
    # VEX fields apply when Grype reads vex data:
    vex-status: not_affected
    vex-justification: vulnerable_code_not_present
    package:
      name: libcurl
      version: 1.5.1
      type: npm
      location: "/usr/local/lib/node_modules/**"

  # We can make rules to match just by vulnerability ID:
  - vulnerability: CVE-2014-54321

  # ...or just by a single package field:
  - package:
      type: gem

Vulnerability matches will be ignored if any rules apply to the match. A rule is considered to apply to a given vulnerability match only if all fields specified in the rule apply to the vulnerability match.

When you run Grype while specifying ignore rules, the following happens to the vulnerability matches that are "ignored":

  • Ignored matches are completely hidden from Grype's output, except for when using the json or template output formats; however, in these two formats, the ignored matches are removed from the existing matches array field, and they are placed in a new ignoredMatches array field. Each listed ignored match also has an additional field, appliedIgnoreRules, which is an array of any rules that caused Grype to ignore this vulnerability match.

  • Ignored matches do not factor into Grype's exit status decision when using --fail-on <severity>. For instance, if a user specifies --fail-on critical, and all of the vulnerability matches found with a "critical" severity have been ignored, Grype will exit zero.

Note: Please continue to report any false positives you see! Even if you can reliably filter out false positives using ignore rules, it's very helpful to the Grype community if we have as much knowledge about Grype's false positives as possible. This helps us continuously improve Grype!

Showing only "fixed" vulnerabilities

If you only want Grype to report vulnerabilities that have a confirmed fix, you can use the --only-fixed flag. (This automatically adds ignore rules into Grype's configuration, such that vulnerabilities that aren't fixed will be ignored.)

For example, here's a scan of Alpine 3.10:

NAME          INSTALLED  FIXED-IN   VULNERABILITY   SEVERITY
apk-tools     2.10.6-r0  2.10.7-r0  CVE-2021-36159  Critical
libcrypto1.1  1.1.1k-r0             CVE-2021-3711   Critical
libcrypto1.1  1.1.1k-r0             CVE-2021-3712   High
libssl1.1     1.1.1k-r0             CVE-2021-3712   High
libssl1.1     1.1.1k-r0             CVE-2021-3711   Critical

...and here's the same scan, but adding the flag --only-fixed:

NAME       INSTALLED  FIXED-IN   VULNERABILITY   SEVERITY
apk-tools  2.10.6-r0  2.10.7-r0  CVE-2021-36159  Critical

If you want Grype to only report vulnerabilities that do not have a confirmed fix, you can use the --only-notfixed flag. Alternatively, you can use the --ignore-states flag to filter results for vulnerabilities with specific states such as wont-fix (see --help for a list of valid fix states). These flags automatically add ignore rules into Grype's configuration, such that vulnerabilities which are fixed, or will not be fixed, will be ignored.
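For example:

# report only vulnerabilities that do not have a confirmed fix
grype <image> --only-notfixed

# hide matches whose fix state is "wont-fix"
grype <image> --ignore-states wont-fix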

VEX Support

Grype can use VEX (Vulnerability Exploitability Exchange) data to filter false positives or provide additional context, augmenting matches. When scanning a container image, you can use the --vex flag to point to one or more OpenVEX documents.

VEX statements relate a product (a container image), a vulnerability, and a VEX status to express an assertion of the vulnerability's impact. There are four VEX statuses: not_affected, affected, fixed and under_investigation.

Here is an example of a simple OpenVEX document. (tip: use vexctl to generate your own documents).

{
  "@context": "https://openvex.dev/ns/v0.2.0",
  "@id": "https://openvex.dev/docs/public/vex-d4e9020b6d0d26f131d535e055902dd6ccf3e2088bce3079a8cd3588a4b14c78",
  "author": "A Grype User <[email protected]>",
  "timestamp": "2023-07-17T18:28:47.696004345-06:00",
  "version": 1,
  "statements": [
    {
      "vulnerability": {
        "name": "CVE-2023-1255"
      },
      "products": [
        {
          "@id": "pkg:oci/alpine@sha256%3A124c7d2707904eea7431fffe91522a01e5a861a624ee31d03372cc1d138a3126",
          "subcomponents": [
            { "@id": "pkg:apk/alpine/[email protected]" },
            { "@id": "pkg:apk/alpine/[email protected]" }
          ]
        }
      ],
      "status": "fixed"
    }
  ]
}
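To apply a document like this during a scan, pass it with --vex (adding --show-suppressed to see what was filtered), for example:

grype <image> --vex ./openvex.json --show-suppressed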

By default, Grype will use any statements in specified VEX documents with a status of not_affected or fixed to move matches to the ignore set.

Any matches ignored as a result of VEX statements are flagged when using --show-suppressed:

libcrypto3  3.0.8-r3   3.0.8-r4   apk   CVE-2023-1255  Medium (suppressed by VEX)  

Statements with an affected or under_investigation status will only be considered to augment the result set when specifically requested using the GRYPE_VEX_ADD environment variable or in a configuration file.

VEX Ignore Rules

Ignore rules can be written to control how Grype honors VEX statements. For example, to configure Grype to only act on VEX statements when the justification is vulnerable_code_not_present, you can write a rule like this:

---
ignore:
  - vex-status: not_affected
    vex-justification: vulnerable_code_not_present

See the list of justifications for details. You can mix vex-status and vex-justification with other ignore rule parameters.

Grype's database

When Grype performs a scan for vulnerabilities, it does so using a vulnerability database that's stored on your local filesystem, which is constructed by pulling data from a variety of publicly available vulnerability data sources.

By default, Grype automatically manages this database for you. Grype checks for new updates to the vulnerability database to make sure that every scan uses up-to-date vulnerability information. This behavior is configurable. For more information, see the Managing Grype's database section.

How database updates work

Grype's vulnerability database is a SQLite file, named vulnerability.db. Updates to the database are atomic: the entire database is replaced and then treated as "readonly" by Grype.

Grype's first step in a database update is discovering databases that are available for retrieval. Grype does this by requesting a "listing file" from a public endpoint:

https://toolbox-data.anchore.io/grype/databases/listing.json

The listing file contains entries for every database that's available for download.

Here's an example of an entry in the listing file:

{
  "built": "2021-10-21T08:13:41Z",
  "version": 3,
  "url": "https://toolbox-data.anchore.io/grype/databases/vulnerability-db_v3_2021-10-21T08:13:41Z.tar.gz",
  "checksum": "sha256:8c99fb4e516f10b304f026267c2a73a474e2df878a59bf688cfb0f094bfe7a91"
}

With this information, Grype can select the correct database (the most recently built database with the current schema version), download the database, and verify the database's integrity using the listed checksum value.

Managing Grype's database

Note: During normal usage, there is no need for users to manage Grype's database! Grype manages its database behind the scenes. However, for users that need more control, Grype provides options to manage the database more explicitly.

Local database cache directory

By default, the database is cached on the local filesystem in the directory $XDG_CACHE_HOME/grype/db/<SCHEMA-VERSION>/. For example, on macOS, the database would be stored in ~/Library/Caches/grype/db/3/. (For more information on XDG paths, refer to the XDG Base Directory Specification.)

You can set the cache directory path using the environment variable GRYPE_DB_CACHE_DIR.
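For example, to use an alternate cache location for a single run:

GRYPE_DB_CACHE_DIR=/tmp/grype-db grype <image>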

Data staleness

Grype needs up-to-date vulnerability information to provide accurate matches. By default, it will fail execution if the local database was not built within the last 5 days. The data staleness check is configurable via the environment variables GRYPE_DB_MAX_ALLOWED_BUILT_AGE and GRYPE_DB_VALIDATE_AGE, or the fields max-allowed-built-age and validate-age under db. The maximum age uses Go's time duration syntax. Set GRYPE_DB_VALIDATE_AGE or validate-age to false to disable the staleness check.
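For example (the values shown are illustrative):

# allow a database built within the last 10 days
GRYPE_DB_MAX_ALLOWED_BUILT_AGE=240h grype <image>

# or disable the staleness check entirely
GRYPE_DB_VALIDATE_AGE=false grype <image>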

Offline and air-gapped environments

By default, Grype checks for a new database on every run, by making a network call over the Internet. You can tell Grype not to perform this check by setting the environment variable GRYPE_DB_AUTO_UPDATE to false.

As long as you place Grype's vulnerability.db and metadata.json files in the cache directory for the expected schema version, Grype has no need to access the network. Additionally, you can get a listing of the database archives available for download from the grype db list command in an online environment, download the database archive, transfer it to your offline environment, and use grype db import <db-archive-path> to use the given database in an offline capacity.
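A minimal offline workflow might look like the following sketch (the archive URL comes from the listing shown by grype db list, and the filename is a placeholder):

# on a machine with internet access
grype db list
curl -o ./grype-db-archive.tar.gz "<archive URL from the listing>"

# after transferring the archive to the air-gapped machine
grype db import ./grype-db-archive.tar.gz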

If you would like to distribute your own Grype databases internally without needing to use db import manually, you can leverage Grype's DB update mechanism. To do this, craft your own listing.json file similar to the public one (see grype db list -o raw for an example of our public listing.json file) and change the download URL to point to an internal endpoint (e.g. a private S3 bucket, an internal file server, etc.). Any internal installation of Grype can then receive database updates automatically by configuring db.update-url (same as the GRYPE_DB_UPDATE_URL environment variable) to point to the hosted listing.json file you've crafted.
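For example, pointing Grype at an internally hosted listing file via the environment variable (the URL below is hypothetical):

GRYPE_DB_UPDATE_URL=https://artifacts.internal.example.com/grype/listing.json grype <image>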

CLI commands for database management

Grype provides database-specific CLI commands for users that want to control the database from the command line. Here are some of the useful commands provided:

grype db status — report the current status of Grype's database (such as its location, build date, and checksum)

grype db check — see if updates are available for the database

grype db update — ensure the latest database has been downloaded to the cache directory (Grype performs this operation at the beginning of every scan by default)

grype db list — download the listing file configured at db.update-url and show databases that are available for download

grype db import — provide grype with a database archive to explicitly use (useful for offline DB updates)

Find complete information on Grype's database commands by running grype db --help.

Shell completion

Grype supplies shell completion through its CLI implementation (cobra). Generate the completion code for your shell by running one of the following commands:

  • grype completion <bash|zsh|fish>
  • go run ./cmd/grype completion <bash|zsh|fish>

This will output a shell script to STDOUT, which can then be used as a completion script for Grype. Running one of the above commands with the -h or --help flags will provide instructions on how to do that for your chosen shell.
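For example, completions can be loaded into the current bash session like so (a sketch; see the per-shell instructions from --help for persistent setup):

source <(grype completion bash)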

Private Registry Authentication

Local Docker Credentials

When a container runtime is not present, grype can still utilize credentials configured in common credential sources (such as ~/.docker/config.json). It will pull images from private registries using these credentials. The config file is where your credentials are stored when authenticating with private registries via some command like docker login. For more information see the go-containerregistry documentation.

An example config.json looks something like this:

// config.json
{
  "auths": {
    "registry.example.com": {
      "username": "AzureDiamond",
      "password": "hunter2"
    }
  }
}

You can run the following command as an example. It details the mount/environment configuration a container needs to access a private registry:

docker run -v ./config.json:/config/config.json -e "DOCKER_CONFIG=/config" anchore/grype:latest <private_image>

Docker Credentials in Kubernetes

The section below shows a simple workflow for mounting this config file as a secret into a container on Kubernetes.

  1. Create a secret. The value of config.json is important. It refers to the specification detailed here. Below this section is the secret.yaml file that the pod configuration will consume as a volume. The key config.json is important. It will end up being the name of the file when mounted into the pod.
    
        apiVersion: v1
        kind: Secret
        metadata:
          name: registry-config
          namespace: grype
        data:
          config.json: <base64 encoded config.json>

        kubectl apply -f secret.yaml
    
    
  2. Create your pod running grype. The env DOCKER_CONFIG is important because it advertises where to look for the credential file. In the below example, setting DOCKER_CONFIG=/config informs grype that credentials can be found at /config/config.json. This is why we used config.json as the key for our secret. When mounted into containers the secrets' key is used as the filename. The volumeMounts section mounts our secret to /config. The volumes section names our volume and leverages the secret we created in step one.
    
        apiVersion: v1
        kind: Pod
        spec:
          containers:
            - image: anchore/grype:latest
              name: grype-private-registry-demo
              env:
                - name: DOCKER_CONFIG
                  value: /config
              volumeMounts:
              - mountPath: /config
                name: registry-config
                readOnly: true
              args:
                - <private_image>
          volumes:
          - name: registry-config
            secret:
              secretName: registry-config

        kubectl apply -f pod.yaml
    
    
  3. The user can now run kubectl logs grype-private-registry-demo. The logs should show the grype analysis for the <private_image> provided in the pod configuration.

Using the above information, users should be able to configure private registry access without having to do so in the grype or syft configuration files. They will also not be dependent on a Docker daemon (or other runtime software) for registry configuration and access.

Configuration

Default configuration search paths:

  • .grype.yaml
  • .grype/config.yaml
  • ~/.grype.yaml
  • <XDG_CONFIG_HOME>/grype/config.yaml

You can also use the --config / -c flag to provide your own configuration file/path:

grype <image> -c /path/to/config.yaml

Configuration options (example values are the default):

# enable/disable checking for application updates on startup
# same as GRYPE_CHECK_FOR_APP_UPDATE env var
check-for-app-update: true

# allows users to specify which image source should be used to generate the sbom
# valid values are: registry, docker, podman
# same as GRYPE_DEFAULT_IMAGE_PULL_SOURCE env var
default-image-pull-source: ""

# same as --name; set the name of the target being analyzed
name: ""

# upon scanning, if a severity is found at or above the given severity then the return code will be 1
# default is unset which will skip this validation (options: negligible, low, medium, high, critical)
# same as --fail-on ; GRYPE_FAIL_ON_SEVERITY env var
fail-on-severity: ""

# the output format of the vulnerability report (options: table, json, cyclonedx)
# same as -o ; GRYPE_OUTPUT env var
output: "table"

# write output report to a file (default is to write to stdout)
# same as --file; GRYPE_FILE env var
file: ""

# a list of globs to exclude from scanning, for example:
# exclude:
#   - '/etc/**'
#   - './out/**/*.json'
# same as --exclude ; GRYPE_EXCLUDE env var
exclude: []

# include matches on kernel-headers packages that are matched against upstream kernel package
# if 'false' any such matches are marked as ignored
match-upstream-kernel-headers: false

# os and/or architecture to use when referencing container images (e.g. "windows/armv6" or "arm64")
# same as --platform; GRYPE_PLATFORM env var
platform: ""

# If using SBOM input, automatically generate CPEs when packages have none
add-cpes-if-none: false

# Explicitly specify a linux distribution to use as <distro>:<version> like alpine:3.10
distro:

external-sources:
  enable: false
  maven:
    search-upstream-by-sha1: true
    base-url: https://search.maven.org/solrsearch/select

db:
  # check for database updates on execution
  # same as GRYPE_DB_AUTO_UPDATE env var
  auto-update: true

  # location to write the vulnerability database cache
  # same as GRYPE_DB_CACHE_DIR env var
  cache-dir: "$XDG_CACHE_HOME/grype/db"

  # URL of the vulnerability database
  # same as GRYPE_DB_UPDATE_URL env var
  update-url: "https://toolbox-data.anchore.io/grype/databases/listing.json"

  # ensure the DB build is no older than max-allowed-built-age
  # set to false to disable the check
  validate-age: true

  # Max allowed age for vulnerability database,
  # age being the time since it was built
  # Default max age is 120h (or five days)
  max-allowed-built-age: "120h"

search:
  # the search space to look for packages (options: all-layers, squashed)
  # same as -s ; GRYPE_SEARCH_SCOPE env var
  scope: "squashed"

  # search within archives that do contain a file index to search against (zip)
  # note: for now this only applies to the java package cataloger
  # same as GRYPE_PACKAGE_SEARCH_INDEXED_ARCHIVES env var
  indexed-archives: true

  # search within archives that do not contain a file index to search against (tar, tar.gz, tar.bz2, etc)
  # note: enabling this may result in a performance impact since all discovered compressed tars will be decompressed
  # note: for now this only applies to the java package cataloger
  # same as GRYPE_PACKAGE_SEARCH_UNINDEXED_ARCHIVES env var
  unindexed-archives: false

# options when pulling directly from a registry via the "registry:" scheme
registry:
  # skip TLS verification when communicating with the registry
  # same as GRYPE_REGISTRY_INSECURE_SKIP_TLS_VERIFY env var
  insecure-skip-tls-verify: false

  # use http instead of https when connecting to the registry
  # same as GRYPE_REGISTRY_INSECURE_USE_HTTP env var
  insecure-use-http: false

  # filepath to a CA certificate (or directory containing *.crt, *.cert, *.pem) used to generate the client certificate
  # GRYPE_REGISTRY_CA_CERT env var
  ca-cert: ""

  # credentials for specific registries
  auth:
    # the URL to the registry (e.g. "docker.io", "localhost:5000", etc.)
    # GRYPE_REGISTRY_AUTH_AUTHORITY env var
    - authority: ""

      # GRYPE_REGISTRY_AUTH_USERNAME env var
      username: ""

      # GRYPE_REGISTRY_AUTH_PASSWORD env var
      password: ""

      # note: token and username/password are mutually exclusive
      # GRYPE_REGISTRY_AUTH_TOKEN env var
      token: ""

      # filepath to the client certificate used for TLS authentication to the registry
      # GRYPE_REGISTRY_AUTH_TLS_CERT env var
      tls-cert: ""

      # filepath to the client key used for TLS authentication to the registry
      # GRYPE_REGISTRY_AUTH_TLS_KEY env var
      tls-key: ""

    # - ... # note, more credentials can be provided via config file only (not env vars)


log:
  # suppress all output (except for the vulnerability list)
  # same as -q ; GRYPE_LOG_QUIET env var
  quiet: false

  # increase verbosity
  # same as GRYPE_LOG_VERBOSITY env var
  verbosity: 0

  # the log level; note: detailed logging suppresses the ETUI
  # same as GRYPE_LOG_LEVEL env var
  # Uses logrus logging levels: https://github.com/sirupsen/logrus#level-logging
  level: "error"

  # location to write the log file (default is not to have a log file)
  # same as GRYPE_LOG_FILE env var
  file: ""

match:
  # sets the matchers below to use cpes when trying to find 
  # vulnerability matches. The stock matcher is the default
  # when no primary matcher can be identified.
  java:
    using-cpes: false
  python:
    using-cpes: false
  javascript:
    using-cpes: false
  ruby:
    using-cpes: false
  dotnet:
    using-cpes: false
  golang:
    using-cpes: false
    # even if CPE matching is disabled, make an exception when scanning for "stdlib".
    always-use-cpe-for-stdlib: true
    allow-main-module-pseudo-version-comparison: true
  stock:
    using-cpes: true

Future plans

The following areas of potential development are currently being investigated:

  • Support for allowlist, package mapping

grype's People

Contributors

anchore-actions-token-generator[bot], briankoe741, cjnosal, cpendery, dakaneye, dependabot[bot], desenna, developer-guy, devfbe, hn23, jneate, jonasagx, joycebrum, kzantow, luhring, nwl, plavy, puerco, rossturk, samj1912, seiyab, shanedell, spiffcs, testwill, tri-adam, vijay-p, wagoodman, westonsteimel, willmurphyscode, zhill


grype's Issues

Merge / Deduplicate findings

If the user selects the "AllLayers" scope, the report will show duplicate entries reported by syft (e.g., if you install a DEB in a layer, you'll have a duplicate DPKG status file, so most packages will be duplicated in the report).

For this reason, we should attempt to merge or deduplicate entries from the results. This could get solved by implementing anchore/syft#32 (depending on the approach)

Add APK matcher

Support the imgbom apk cataloger by allowing for matching of vulnerabilities from all possible sources.

logs: enable application logs to a file (always)

We currently have a mix of nice, human-readable output (like the download-and-check process when the tool starts) and application logging that comes mostly in the form of errors. For example:

[0000] ERROR failed to catalog: could not fetch image 'foobar': unable to trace image save progress: unable to inspect image: Error: No such image: index.docker.io/library/foobar:latest

One of the problems that will happen when users face issues/bugs is that they will probably not run with high verbosity, and we will end up asking them to re-run with added verbosity flags (e.g. -vvv). This will complicate the reporting of the issue because the run will most likely be long gone, and the output no longer available.

Since the file logging is not enabled by default, asking for the log artifact will also not be a possibility.

As a reporter, having to re-run while trying to reproduce, is cumbersome to say the least.

Grype should enable file logging by default, and for a while after releasing, the default should be DEBUG. This highly verbose level should lower to INFO or WARNING as the tool stabilizes.

Enabling file logging by default would also mean making a "best effort" to find a suitable location, because /var/log/grype/2020-08-04.log might not be available or writable with the user's permissions.

In addition to all of this, it would be really useful to start separating user-facing output from system logging: more developer-oriented logs go to the file, and user-friendly messages go to the terminal.

update check

Check that a new version is available and inform the user on start up.

Note: this includes the infrastructure necessary to complete this task.

Note: this affects the release process.

Note: create a ticket that encapsulates what automation is needed to be done for hard release.

Distro matchers should be guided by package type not detected distro

Currently we use the detected distro to guide the rpm, deb, and apk matchers to find vulnerabilities. This is functional; however, it would be more accurate to use the package type (rpm, deb, apk) to select the vulnerability namespace rather than the detected distro (redhat:8, ubuntu:20, alpine:3.12).

Problem: we don't know the distro version from the package type, so it is not possible to select the "correct" vulnerability namespace. This is worth thinking about nonetheless.

Add first dpkg version support

In order to complete the first matcher:

  • there should be a generic Version type which acts as a facade for the numerous version types that will eventually be implemented.
  • there should additionally be a generic Constraint interface which can be used to determine if a given version object satisfies the constraint.

In order to complete dpkg vulnerability matching specifically, the following version support will be necessary:

  • parse dpkg versions
  • compare dpkg versions
  • express and evaluate version constraints

Add node matcher

Support the imgbom node cataloger by allowing for matching of vulnerabilities from all possible sources.

  • github:npm data source (since CPE matching has not landed yet)

Add user image error handling

Currently when no image is passed by the user it results in a panic. This should be a useful error message instead.

Alpine matching issue

Given a Dockerfile:

FROM alpine:latest

RUN wget http://dl-cdn.alpinelinux.org/alpine/v3.9/main/x86_64/libvncserver-0.9.11-r3.apk
RUN apk add  libvncserver-0.9.11-r3.apk

RUN sed -i 's/V:0.9.11-r3/V:0.9.9-r0/' /lib/apk/db/installed

grype should have discovered CVE-2019-20839, but did not. After a bit of digging, it seems the DB shows no constraint for this entry for alpine:

sqlite> select * from vulnerability where id = "CVE-2019-20839";
CVE-2019-20839||libvncserver|rhel:7||rpm|null|[]
CVE-2019-20839||libvncserver|rhel:8||rpm|null|[]
CVE-2019-20839||libvncserver|debian:11|< 0.9.13+dfsg-1|dpkg|null|[]
CVE-2019-20839||libvncserver|debian:10||dpkg|null|[]
CVE-2019-20839||libvncserver|debian:9||dpkg|null|[]
CVE-2019-20839||libvncserver|debian:unstable|< 0.9.13+dfsg-1|dpkg|null|[]
CVE-2019-20839||libvncserver|debian:8|< 0.9.9+dfsg2-6.1+deb8u8|dpkg|null|[]
CVE-2019-20839||libvncserver|ubuntu:19.10||dpkg|null|[]
CVE-2019-20839||libvncserver|ubuntu:16.04|< 0.9.10+dfsg-3ubuntu0.16.04.5|dpkg|null|[]
CVE-2019-20839||libvncserver|ubuntu:18.04|< 0.9.11+dfsg-1ubuntu1.3|dpkg|null|[]
CVE-2019-20839||libvncserver|ubuntu:20.04|< 0.9.12+dfsg-9ubuntu0.2|dpkg|null|[]
CVE-2019-20839||libvncserver|nvd|< 0.9.13|unknown|["cpe:2.3:a:libvncserver_project:libvncserver:*:*:*:*:*:*:*:*"]|[]
...

even though the pull cache has a node configuration specifying vulnerable=true for < 0.9.13:

nvdv2-nvdv2:cves-147.json-  "configurations": {
nvdv2-nvdv2:cves-147.json-   "CVE_data_version": "4.0",
nvdv2-nvdv2:cves-147.json-   "nodes": [
nvdv2-nvdv2:cves-147.json-    {
nvdv2-nvdv2:cves-147.json-     "cpe_match": [
nvdv2-nvdv2:cves-147.json-      {
nvdv2-nvdv2:cves-147.json-       "cpe23Uri": "cpe:2.3:a:libvncserver_project:libvncserver:*:*:*:*:*:*:*:*",
nvdv2-nvdv2:cves-147.json-       "versionEndExcluding": "0.9.13",
nvdv2-nvdv2:cves-147.json-       "vulnerable": true
nvdv2-nvdv2:cves-147.json-      }
nvdv2-nvdv2:cves-147.json-     ],
nvdv2-nvdv2:cves-147.json-     "operator": "OR"
nvdv2-nvdv2:cves-147.json-    },
nvdv2-nvdv2:cves-147.json-    {
nvdv2-nvdv2:cves-147.json-     "cpe_match": [
nvdv2-nvdv2:cves-147.json-      {
nvdv2-nvdv2:cves-147.json-       "cpe23Uri": "cpe:2.3:o:debian:debian_linux:8.0:*:*:*:*:*:*:*",
nvdv2-nvdv2:cves-147.json-       "vulnerable": true
nvdv2-nvdv2:cves-147.json-      }
nvdv2-nvdv2:cves-147.json-     ],
nvdv2-nvdv2:cves-147.json-     "operator": "OR"
nvdv2-nvdv2:cves-147.json-    },
nvdv2-nvdv2:cves-147.json-    {
nvdv2-nvdv2:cves-147.json-     "cpe_match": [
nvdv2-nvdv2:cves-147.json-      {
nvdv2-nvdv2:cves-147.json-       "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:31:*:*:*:*:*:*:*",
nvdv2-nvdv2:cves-147.json-       "vulnerable": true
nvdv2-nvdv2:cves-147.json-      },
nvdv2-nvdv2:cves-147.json-      {
nvdv2-nvdv2:cves-147.json-       "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:32:*:*:*:*:*:*:*",
nvdv2-nvdv2:cves-147.json-       "vulnerable": true
nvdv2-nvdv2:cves-147.json-      }
nvdv2-nvdv2:cves-147.json-     ],
nvdv2-nvdv2:cves-147.json-     "operator": "OR"
nvdv2-nvdv2:cves-147.json-    },
nvdv2-nvdv2:cves-147.json-    {
nvdv2-nvdv2:cves-147.json-     "cpe_match": [
nvdv2-nvdv2:cves-147.json-      {
nvdv2-nvdv2:cves-147.json-       "cpe23Uri": "cpe:2.3:o:opensuse:leap:15.1:*:*:*:*:*:*:*",
nvdv2-nvdv2:cves-147.json-       "vulnerable": true
nvdv2-nvdv2:cves-147.json-      },
nvdv2-nvdv2:cves-147.json-      {
nvdv2-nvdv2:cves-147.json-       "cpe23Uri": "cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*",
nvdv2-nvdv2:cves-147.json-       "vulnerable": true
nvdv2-nvdv2:cves-147.json-      }
nvdv2-nvdv2:cves-147.json-     ],
nvdv2-nvdv2:cves-147.json-     "operator": "OR"
nvdv2-nvdv2:cves-147.json-    }
nvdv2-nvdv2:cves-147.json-   ]
nvdv2-nvdv2:cves-147.json-  },
nvdv2-nvdv2:cves-147.json-  "cve": {
nvdv2-nvdv2:cves-147.json-   "CVE_data_meta": {
nvdv2-nvdv2:cves-147.json-    "ASSIGNER": "[email protected]",
nvdv2-nvdv2:cves-147.json:    "ID": "CVE-2019-20839"
nvdv2-nvdv2:cves-147.json-   },

...

Write golang matcher against NVD

Currently golang does not have a matcher, however, we could use NVD as a source (@zhill I apparently lied in our previous conversation). (See below for a good example from the NVD data)

The only catch is that we need to go back into syft and make a decision if the package name should be the go module path (Name=github.com/hashicorp/terraform) or if we break out the fields accordingly (Name=terraform, keep the original go module path and all other metadata in the package.Metadata field).

{
  "configurations": {
   "CVE_data_version": "4.0",
   "nodes": [
    {
     "cpe_match": [
      {
       "cpe23Uri": "cpe:2.3:a:hashicorp:terraform:*:*:*:*:*:*:*:*",
       "versionEndExcluding": "0.12.17",
       "vulnerable": true
      }
     ],
     "operator": "OR"
    }
   ]
  },
  "cve": {
   "CVE_data_meta": {
    "ASSIGNER": "[email protected]",
    "ID": "CVE-2019-19316"
   },
   "data_format": "MITRE",
   "data_type": "CVE",
   "data_version": "4.0",
   "description": {
    "description_data": [
     {
      "lang": "en",
      "value": "When using the Azure backend with a shared access signature (SAS), Terraform versions prior to 0.12.17 may transmit the token and state snapshot using cleartext HTTP."
     }
    ]
   },
   "problemtype": {
    "problemtype_data": [
     {
      "description": [
       {
        "lang": "en",
        "value": "CWE-327"
       }
      ]
     }
    ]
   }
  },
  "cvss_v2": {
   "additional_information": {
    "ac_insuf_info": false,
    "obtain_all_privilege": false,
    "obtain_other_privilege": false,
    "obtain_user_privilege": false,
    "user_interaction_required": false
   },
   "base_metrics": {
    "access_complexity": "MEDIUM",
    "access_vector": "NETWORK",
    "authentication": "NONE",
    "availability_impact": "NONE",
    "base_score": 4.3,
    "confidentiality_impact": "PARTIAL",
    "exploitability_score": 8.6,
    "impact_score": 2.9,
    "integrity_impact": "NONE"
   },
   "severity": "Medium",
   "vector_string": "AV:N/AC:M/Au:N/C:P/I:N/A:N",
   "version": "2.0"
  },
  "cvss_v3": {
   "base_metrics": {
    "attack_complexity": "LOW",
    "attack_vector": "NETWORK",
    "availability_impact": "NONE",
    "base_score": 7.5,
    "base_severity": "High",
    "confidentiality_impact": "HIGH",
    "exploitability_score": 3.9,
    "impact_score": 3.6,
    "integrity_impact": "NONE",
    "privileges_required": "NONE",
    "scope": "UNCHANGED",
    "user_interaction": "NONE"
   },
   "vector_string": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N",
   "version": "3.1"
  },
  "external_references": [
   {
    "source": "CONFIRM",
    "tags": [
     "Third Party Advisory"
    ],
    "url": "https://github.com/hashicorp/terraform/security/advisories/GHSA-4rvg-555h-r626"
   }
  ],
  "lastModifiedDate": "2019-12-13T17:39Z",
  "publishedDate": "2019-12-02T21:15Z",
  "severity": "High"
 }

Incorporate integration testing

Introduce integration-level testing that tests:

  • imgbom to vulnscan integrations: the appropriate package catalog is obtained from an underlying image
  • vulnscan to vulnscan-db: the appropriate store can be obtained from a flat file
  • internal vulnscan matching: given a catalog and vulnerability db obtained from standard execution paths, verify select matching results (direct exact matches, indirect matches, cpe matching, various vulnerability input sources appear in the results, etc)

Add RPM matcher

Support the imgbom rpmdb cataloger by allowing for matching of vulnerabilities from all possible sources.

Add matcher for APK packages against NVD + Alpine SecDB

Vulnerability matching for APK packages against NVD data as primary source with Alpine SecDB records serving as a whitelist for backport fixes.

Expected logic:
Match apk name and version against NVD records excluding matches for which there is an explicit fix present in the Alpine secdb data.

Add basic pipeline

Should be able to support:

  • lint checks
  • unit testing
  • integration testing

Invoked on at least:

  • open PRs
  • new commits

Improve test coverage to >= 80%

Once coverage is at a good threshold, add a quality gate to the pipeline to prevent regression of coverage below a threshold.

Add progress UI + event bus

This should be similar to the patterns implemented in imgbom:internal/ui and make visible progress of application tasks, such as downloading db updates, loading the image, inventorying the image, and matching vulnerabilities.

Add "confidence indication" to vulnerability matches

Currently each match has a type (direct, indirect, fuzzy, etc), however, we can go a step further by adding a quantified number between 0-1 that indicates how "sure" we are that the match is legitimate based on a wide variety of factors (the vulnerability data source, how close the match was to package metadata, if any generated/guessed data was used to make the match, etc).

It's not quite clear how the formula for the confidence should be determined quite yet or how it would be useful for an end user. Up for thoughts, comments, and suggestions!

What's a more elegant way to handle errors?

There's a scenario where grype will show an error to the user, related to an issue parsing a package version, and then will exit 0, despite the apparent error.

This display of an error is jarring and confusing to users. We should talk through what the definition of the "correct way to handle this scenario" is.

In general, I'd suggest we not show errors and then exit cleanly. And for this specific Malformed version error, we should answer a few relevant questions:

  1. Should this Go error be occurring in the first place? If so,
  2. Is a malformed package version something that should be reported to the user? If so,
  3. If we're saying the analysis finished despite this apparent problem, should this be a WARN instead of an ERROR?

Screen Shot 2020-08-05 at 10 21 28 AM

Error text:

[0012] ERROR matcher failed for pkg=Pkg(type=wheel, name=toastedmarshmallow, version=2.15.2.post1): matcher failed to parse version pkg='toastedmarshmallow' ver='2.15.2.post1': unable to crate semver obj: Malformed version: 2.15.2.post1

Add documentation

  • Readme: how to use it, small examples, basic configuration, terse explanation
  • Contributing: (this is in another issue)
  • Issue templates?

Match packages by CPE

Add capability to match packages by CPE instead of package name and version. Additionally, this should setup for generating speculative CPEs from a package object.

Potential future ticket:

  • complexity on resolving to concrete types may be difficult for old data

Add java matcher

Support the imgbom java cataloger by allowing for matching of vulnerabilities from all possible sources.

Note: the version format is fairly unknown here (not necessarily semver).

Splitting this up into the package manager types themselves may help here with regard to the version format comparison.

Add basic distro vulnerability matcher

This will implement the first matcher (dpkg) for finding an exact package name match that fits a vulnerability version constraint for a particular distro.

Suppress "can't find matcher" errors

Currently if there is a go package found by syft, grype will complain about no matcher being found. This specific message (not just for go) should be switched to a warning log level (which needs to be added to the logger interface) and not be displayed to the ETUI.

Extend vulnerability matchers

Issue #14 describes the initial matchers needed. This ticket should be picked up once all those are in place, to extend further:

  • pacman (archlinux)
  • poetry
  • pipenv
  • yarn

Finalize json presenter schema

There are several fields that need to be finalized (e.g. cve should be generically vulnerability) in which case we should be providing a json schema for the json presenter.

Download and curate a vulnscan-db flat file

Should be able to:

  • download a new db file (single file)
  • detect stale dbs (consider versions / schema check)
  • cmd CLI features to allow users to update/delete/see the cache

Consideration (possibly in this ticket or a future ticket):

  • should consider multiple files to be curated
  • synchronizing of multiple files (detecting staleness via timestamps across multiple files)

Python version parser is needed

The semantic version parsing that Grype uses can't handle the non-semantic versions that Python allows. More specifically, it can't handle anything that comes after a formal release (e.g. 1.0.0-post1) or pre-release variations like 1.0.0-dev1.

The spec is:

[N!]N(.N)*[{a|b|rc}N][.postN][.devN]

And it is fully documented here https://www.python.org/dev/peps/pep-0440/

Fixing this issue should allow #90 to be correctly fixed for Python.

Add general release scripts and pipeline support

Similar to issue #1, a brew installer needs to be put into place that will make this tool installable on OSX.

This will require a new repository that will act as the "tap" (a.k.a. "third-party repository"), which requires the name to be fully decided on:

On GitHub, your repository must be named homebrew-something in order to use the one-argument form of brew tap. The prefix ‘homebrew-‘ is not optional. (The two-argument form doesn’t have this limitation, but it forces you to give the full URL explicitly.)

Add remaining vulnerability matchers

Note: split this into multiple tickets as needed

Should support all analyzers from the imgbom tool/lib:

  • java
  • python egg & wheel (#16)
  • gem (#15)
  • node
  • apk
  • rpmdb
  • dpkg (#3)

TODO: split this up into individual tickets

Support multiple DB distribution archive types

Currently it is assumed that all db and metadata should be packaged into a single archive for distribution. However, it may be advantageous to have multiple archive types for various purposes to be downloaded separately by clients.

logging needs name identifier

It is hard to tell apart where logging statements are coming from. For example in this output:

/vulnscan ᓆ go run main.go -v dir:///Users/alfredo/.vimrc
creating catalog
[ERROR]	path (/Users/alfredo/.vimrc/var/lib/dpkg/status) is not valid: stat /Users/alfredo/.vimrc/var/lib/dpkg/status: not a directory
[ERROR]	path (/Users/alfredo/.vimrc/var/lib/rpm/Packages) is not valid: stat /Users/alfredo/.vimrc/var/lib/rpm/Packages: not a directory

Those error log statements are actually coming from imgbom

Add support for OR fuzzy constraint operator

When running against amir20/clashleaders there are several constraint errors:

[ERROR]	matcher failed for pkg=Pkg(type=npm, name=request, version=2.88.0): matcher failed to fetch language='javascript' pkg='request': provider failed to parse language='javascript': failed to parse constraint='>=2.49.0,<2.68.0 || >=2.2.6,<2.47.0' format='UnknownFormat': could not create fuzzy constraint: '||' operator (OR) is unsupported for constraints

we should add support for the || operator for the fuzzy version type.

Create presenter for reporting matches

Similarly to imgbom, the presenter should allow different formats, but might be fine to start with these:

  • JSON

Split into a future ticket:

  • tabular (pretty)

UI mixes logging output with progress, misses newlines

A few issues from the below image that need to be fixed:

  • There is no output right after calling on a container:tag, about 4 empty newlines are added before the first useful output
  • When Scanning image... is displayed, it doesn't end with a newline, which causes the log output of [ERROR] no matchers... to be displayed right next to it
  • The "No vulnerabilities found" doesn't have a new line at the end, causing ZSH to display it with a white-block and % character

Screen Shot 2020-07-31 at 9 30 33 AM
