
trustify

Quick start

Let's call this "PM mode":

cargo run --bin trustd

That will create its own database on your local filesystem.

Running containerized UI

You can also fire up the UI using:

podman run --network="host" --pull=always \
-e TRUSTIFY_API_URL=http://localhost:8080 \
-e OIDC_CLIENT_ID=frontend \
-e OIDC_SERVER_URL=http://localhost:8090/realms/trustify \
-e ANALYTICS_ENABLED=false \
-e PORT=3000 \
-p 3000:3000 \
ghcr.io/trustification/trustify-ui:latest

Open the UI at http://localhost:3000

Repository Organization

Sources

common

Model-like bits shared between multiple contexts.

entity

Database entity models, implemented via SeaORM.

migration

SeaORM migrations for the DDL.

modules/graph

The primary graph engine and API.

modules/importer

Importers capable of adding documents into the graph.

modules/ingestor

Ingestors/readers for various formats (SPDX, CSAF, CVE, OSV, etc.)

server

The REST API server.

trustd

The server CLI tool trustd

Et Merde

etc/deploy

Deployment-related (such as compose) files.

etc/test-data

Arbitrary test-data.

Development Environment

Postgres

Unit tests and "PM mode" use an embedded instance of Postgres that is installed as required on the local filesystem. This is convenient for local development but you can also configure the app to use an external database.

Starting a containerized Postgres instance:

podman-compose -f etc/deploy/compose/compose.yaml up

Connect to PSQL:

env PGPASSWORD=eggs psql -U postgres -d trustify -h localhost -p 5432

If you don't have the psql command available, you can also use the podman-compose command:

podman-compose -f etc/deploy/compose/compose.yaml exec postgres psql -U postgres -d trustify

Point the app at an external db:

cargo run --bin trustd api --help
RUST_LOG=info cargo run --bin trustd api --db-password eggs --devmode --auth-disabled

Test failures on macOS

The concurrent Postgres installations spun up during testing can exhaust shared memory. Adjusting shared memory on macOS is not straightforward. Use this guide.

Import some data

Import data (also see: modules/importer/README.md for more options):

# SBOMs
http POST localhost:8080/api/v1/importer/redhat-sbom sbom[source]=https://access.redhat.com/security/data/sbom/beta/ sbom[keys][]=https://access.redhat.com/security/data/97f5eac4.txt#77E79ABE93673533ED09EBE2DCE3823597F5EAC4 sbom[disabled]:=false sbom[onlyPatterns][]=quarkus sbom[period]=30s sbom[v3Signatures]:=true
# CSAF documents
http POST localhost:8080/api/v1/importer/redhat-csaf csaf[source]=https://redhat.com/.well-known/csaf/provider-metadata.json csaf[disabled]:=false csaf[onlyPatterns][]="^cve-2023-" csaf[period]=30s csaf[v3Signatures]:=true

To import files from a local disk, or from a location that is not a properly formed CSAF repository, use the csaf-walker tools:

sbom scoop http://localhost:8080/api/v1/sbom /workspace/github.com/trustification/trustification/data/ds1/sbom/
csaf scoop http://localhost:8080/api/v1/advisory /workspace/github.com/trustification/trustification/data/ds1/csaf/

Authentication

By default, authentication is enabled. It can be disabled using the flag --auth-disabled when running the server. Also, by default, there is no working authentication/authorization configuration. For development purposes, one can use --devmode to rely on the Keycloak instance deployed with the compose deployment.

Also see: docs/oidc.md

HTTP requests must provide a bearer token using the Authorization header, which requires a valid access token. There are tutorials on getting such a token using curl. It is also possible to use the oidc-cli client tool:

Installation:

cargo install oidc-cli

Then, set up an initial client (this needs to be done every time the client/Keycloak instance is re-created):

oidc create confidential --name trusty --issuer http://localhost:8090/realms/chicken --client-id walker --client-secret ZVzq9AMOVUdMY1lSohpx1jI3aW56QDPS

Then one can perform HTTP requests using HTTPie like this:

http localhost:8080/purl/asdf/dependencies Authorization:$(oidc token trusty -b)

Notes on models

Package

A package exists or it does not. Represented by a pURL. No source-tracking required.

Rework into Package, VersionedPackage, QualifiedVersionedPackage, and VersionRangePackage for vulnerable references, plus appropriate junction tables.
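As a hedged illustration of those layers (the splitting logic below is a sketch, not the project's actual pURL handling), a pURL decomposes naturally into a base package, an optional version, and optional qualifiers:

```rust
// Sketch: split a pURL into the pieces backing Package (base),
// VersionedPackage (+version), and QualifiedVersionedPackage (+qualifiers).
fn split_purl(purl: &str) -> (String, Option<String>, Option<String>) {
    let body = purl.strip_prefix("pkg:").unwrap_or(purl);
    // Qualifiers follow '?', e.g. "?type=jar".
    let (rest, qualifiers) = match body.split_once('?') {
        Some((r, q)) => (r, Some(q.to_string())),
        None => (body, None),
    };
    // The version follows the last '@'.
    let (base, version) = match rest.rsplit_once('@') {
        Some((b, v)) => (b.to_string(), Some(v.to_string())),
        None => (rest.to_string(), None),
    };
    (base, version, qualifiers)
}

fn main() {
    let (base, version, qualifiers) =
        split_purl("pkg:maven/io.quarkus/quarkus-core@2.13.7?type=jar");
    assert_eq!(base, "maven/io.quarkus/quarkus-core");
    assert_eq!(version.as_deref(), Some("2.13.7"));
    assert_eq!(qualifiers.as_deref(), Some("type=jar"));
}
```

The "no source-tracking required" property follows from this: a pURL is self-describing, so a package row can be keyed entirely by its decomposed identity.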

CPE

The Platonic form of a product may have 0+ CPEs/pURLs, and 0+ known hashable artifacts.

CVE

A CVE exists or it does not. Represented by an identifier. No source-tracking required.

CWE

A CWE exists or it does not. Represented by an identifier. No source-tracking required.

Advisory

An Advisory exists or it does not. Represented by a location/hash/identifier. Source tracked from an Advisory Source.

There is probably always an advisory from NVD for every CVE.

Advisory Source

Something like GHSA, Red Hat VEX, etc. Maybe? Based on source URL? Regexp! Still unsure here.

Scanners don't exist

They should just point us toward first-order advisories to ingest. OSV just tells us to look elsewhere. They are helpers, not nouns.

Vulnerable

Package Range + Advisory + CVE.

NonVulnerable

QualifiedPackage + Advisory + CVE.

Both impl'd for pURL and CPE.

SBOM

A hashed document that makes claims about things. All package/product relationships exist only within the context of an SBOM making the claim.

Describes

CPE (Product?) and/or pURLs described by the SBOM

trustify's People

Contributors

ctron, bobmcwhirter, jcrossley3, helio-frota, chirino, gildub, dejanb, carlosthe19916, jimfuller-redhat, crsche, noodlesamachan, desmax74, mrizzi

trustify's Issues

Remove version-specific naming around CPEs

Seems there is free conversion between the two versions of CPE, so no need to maintain versioned tables (yet) for CPE. Instead, we should rename and accommodate whatever superset of columns is required.

Importer[period] should be a number and not a string

Current status

  • From the README.md file, the period can be written as 60s to express 60 seconds.
http POST localhost:8080/api/v1/importer/redhat-csaf csaf[period]=60s
  • The list of importers at http://localhost:8080/api/v1/importer can return period=1m
[
    {
        "name": "redhat-csaf",
        "configuration": {
            "csaf": {
                "disabled": false,
                "period": "1m",
                "source": "https://redhat.com/.well-known/csaf/provider-metadata.json",
                "v3Signatures": false
            }
        },
        "state": "waiting",
        "lastChange": "2024-04-16T06:32:13.941898Z",
        "lastRun": "2024-04-16T06:32:11.428871Z",
        "lastError": "Visitor error: Invalid signature: Invalid key: \"Subkey of 77E79ABE93673533ED09EBE2DCE3823597F5EAC4 not bound: No binding signature at time 2024-04-16T04:20:22Z\""
    }
]

Problem

There is no single canonical way of expressing period.
period should be a number (seconds, milliseconds, or some other fixed unit) as both input and output. Clients (e.g. the UI) and the backend should then translate that number into the format of their preference.
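A minimal sketch of the proposed normalization, assuming seconds as the canonical unit (this parser is hypothetical, not the project's code, and only handles single-letter unit suffixes):

```rust
// Normalize a human-readable period string ("60s", "1m", "2h") to a plain
// number of seconds, so clients and backend exchange one canonical form.
fn period_to_seconds(s: &str) -> Option<u64> {
    if s.len() < 2 {
        return None;
    }
    let (num, unit) = s.split_at(s.len() - 1);
    let n: u64 = num.parse().ok()?;
    match unit {
        "s" => Some(n),
        "m" => Some(n * 60),
        "h" => Some(n * 3600),
        _ => None,
    }
}

fn main() {
    // "60s" and "1m" are the same duration once normalized.
    assert_eq!(period_to_seconds("60s"), Some(60));
    assert_eq!(period_to_seconds("1m"), Some(60));
    assert_eq!(period_to_seconds("2h"), Some(7200));
}
```

With a numeric wire format, rendering "1m" vs "60s" becomes a purely presentational choice on each client.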

Reduce the number of redundant SQL statements inserting packages

Navigating through the Graph code, I see the following pattern:

  • get_foo -> if Some -> return foo
  • ingest parent of foo
  • add foo to parent
    • get_foo -> if Some -> return foo
    • insert foo -> return foo

That patterns nests into the parent structures.

Without having a concrete idea, I expect we could at least eliminate the second get. Since for a medium-sized SBOM we have ~8000 of those calls, I think it makes sense to improve the performance.
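A hedged illustration of collapsing "get, then get again, then insert" into a single lookup. On the SQL side the analogous move is a single `INSERT ... ON CONFLICT DO NOTHING RETURNING id`; in-process, a map's entry API does one hash lookup where get + insert would do two (the id allocation below is hypothetical):

```rust
use std::collections::HashMap;

// One lookup instead of "get -> miss -> get -> insert": the entry API
// either finds the existing id or inserts the new one in a single probe.
fn get_or_insert(ids: &mut HashMap<String, u64>, purl: &str) -> u64 {
    let next = ids.len() as u64; // stand-in for a real id allocator
    *ids.entry(purl.to_string()).or_insert(next)
}

fn main() {
    let mut ids = HashMap::new();
    let a = get_or_insert(&mut ids, "pkg:cargo/serde");
    let b = get_or_insert(&mut ids, "pkg:cargo/serde"); // hit, no second insert
    assert_eq!(a, b);
    assert_eq!(ids.len(), 1);
}
```

Across ~8000 calls per SBOM, halving the lookups per ingested package is a plausible first win before restructuring the schema.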

Generalize the inbound supplychain document pipeline

Yes, RHT is CSAF, but others do otherwise.
Hopefully this also helps us generalize the fetch/ingest/store pipeline across formats,
so we're not having to mirror osv_walker like csaf_walker like ghsa_walker like msftadvisory_walker, like...
Walker<CSAF, Http, S3>::new()...
Walker<OSV, FileSystem, FileSystem>::new()
or whatnot
Walker<FORMAT, SOURCE, STORAGE_DEST>
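The proposed generalization can be sketched as a single Walker parameterized over format, source, and storage destination. All trait and type names here are hypothetical, not the project's actual API:

```rust
// Hypothetical traits for the three axes of the pipeline.
trait Format { const NAME: &'static str; }
trait Source { fn fetch(&self) -> Vec<String>; }
trait Storage { fn store(&mut self, doc: String); }

struct Csaf;
impl Format for Csaf { const NAME: &'static str = "csaf"; }

// A toy filesystem source and in-memory destination.
struct FileSystem(Vec<String>);
impl Source for FileSystem {
    fn fetch(&self) -> Vec<String> { self.0.clone() }
}

struct Memory(Vec<String>);
impl Storage for Memory {
    fn store(&mut self, doc: String) { self.0.push(doc); }
}

// One generic walker replaces per-format walker clones.
struct Walker<F: Format, S: Source, D: Storage> {
    source: S,
    dest: D,
    _format: std::marker::PhantomData<F>,
}

impl<F: Format, S: Source, D: Storage> Walker<F, S, D> {
    fn new(source: S, dest: D) -> Self {
        Walker { source, dest, _format: std::marker::PhantomData }
    }
    fn run(&mut self) {
        for doc in self.source.fetch() {
            self.dest.store(format!("[{}] {}", F::NAME, doc));
        }
    }
}

fn main() {
    let mut w = Walker::<Csaf, _, _>::new(FileSystem(vec!["doc1".into()]), Memory(vec![]));
    w.run();
    assert_eq!(w.dest.0, vec!["[csaf] doc1"]);
}
```

Adding OSV or GHSA support then means implementing Format once, rather than mirroring an entire walker.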

Ensure correct error handling for str-purl conversions

          I think the answer is TryFrom instead since failure is an option.

Originally posted by @bobmcwhirter in #3 (comment)

Converting to TryFrom has rippling effects that bring into question the existence of both Purl and PackageUrl types, as well as their corresponding PurlErr and packageurl::Error types, not to mention the system::Error type.

Running frontend fails on OSX with node LTS.

Followed the instructions in the README.md; something about webpack failed and nothing is listening on port 3000.

bob@macbox trustify % cd frontend
bob@macbox frontend % ls
Dockerfile		README.md		branding		client			common			entrypoint.sh		package-lock.json	package.json		scripts			server
bob@macbox frontend % npm clean-install --ignore-scripts


npm WARN deprecated [email protected]: Modern JS already guarantees Array#sort() is a stable sort, so this library is deprecated. See the compatibility table on MDN: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/sort#browser_compatibility
npm WARN deprecated [email protected]: Use your platform's native atob() and btoa() methods instead
npm WARN deprecated [email protected]: Use your platform's native DOMException instead

added 1508 packages, and audited 1512 packages in 18s

387 packages are looking for funding
  run `npm fund` for details

3 moderate severity vulnerabilities

Some issues need review, and may require choosing
a different dependency.

Run `npm audit` for details.
npm notice
npm notice New minor version of npm available! 10.2.4 -> 10.5.0
npm notice Changelog: https://github.com/npm/cli/releases/tag/v10.5.0
npm notice Run npm install -g [email protected] to update!
npm notice
bob@macbox frontend %
bob@macbox frontend % npm run start:dev


> @trustification-ui/[email protected] start:dev
> concurrently -n port-forward,common,client -c 'white.bold.inverse,green.bold.inverse,blue.bold.inverse' 'npm:port-forward' 'npm:start:dev:common' 'npm:start:dev:client'

[common]
[common] > @trustification-ui/[email protected] start:dev:common
[common] > npm run start:dev -w common
[common]
[port-forward]
[port-forward] > @trustification-ui/[email protected] port-forward
[port-forward] > concurrently -c auto 'npm:port-forward:*'
[port-forward]
[client]
[client] > @trustification-ui/[email protected] start:dev:client
[client] > npm run start:dev -w client
[client]
[client]
[client] > @trustification-ui/[email protected] start:dev
[client] > NODE_ENV=development webpack serve --config ./config/webpack.dev.ts
[client]
[common]
[common] > @trustification-ui/[email protected] start:dev
[common] > NODE_ENV=development rollup -c --watch
[common]
[port-forward] [hub]
[port-forward] [hub] > @trustification-ui/[email protected] port-forward:hub
[port-forward] [hub] > kubectl port-forward svc/trustification-hub -n trustification 9002:8080
[port-forward] [hub]
[port-forward] [hub] sh: kubectl: command not found
[port-forward] [hub] npm run port-forward:hub exited with code 127
[port-forward] npm run port-forward exited with code 1
[common] Using branding assets from: /Users/bob/repos/trustification/trustify/frontend/branding
[common] rollup v4.12.0
[common] bundles src/index.ts → dist/index.mjs, dist/index.cjs...
[client] [webpack-cli] Failed to load '/Users/bob/repos/trustification/trustify/frontend/client/config/webpack.dev.ts' config
[client] [webpack-cli] config/webpack.dev.ts(18,8): error TS2307: Cannot find module '@trustification-ui/common' or its corresponding type declarations.
[client]
[client] npm ERR! Lifecycle script `start:dev` failed with error:
[client] npm ERR! Error: command failed
npm ERR!   in workspace: @trustification-ui/[email protected]
[client] npm ERR!   at location: /Users/bob/repos/trustification/trustify/frontend/client
[client] npm run start:dev:client exited with code 1
[common] created dist/index.mjs, dist/index.cjs in 787ms

          Because otherwise tests will fail. Having `init` will run that method twice (for two different tests) and the second one will fail. That's why the CI just failed, because the logger was already initialized by the first test.

Originally posted by @ctron in #81 (comment)

Let's use the test-log crate to obviate this, similar to what we've done in other workspaces.
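The underlying failure mode is a process-global initializer being run by more than one test. A standard-library sketch of the guard (similar in spirit to what test-log automates per test; the counter stands in for real logger setup):

```rust
use std::sync::Once;
use std::sync::atomic::{AtomicUsize, Ordering};

static INIT: Once = Once::new();
static CALLS: AtomicUsize = AtomicUsize::new(0);

// call_once guarantees the body runs exactly once per process, so a second
// test calling init_logging() is a no-op rather than a double-init panic.
fn init_logging() {
    INIT.call_once(|| {
        CALLS.fetch_add(1, Ordering::SeqCst); // stand-in for logger setup
    });
}

fn main() {
    init_logging(); // first test: initializes
    init_logging(); // second test: no-op
    assert_eq!(CALLS.load(Ordering::SeqCst), 1);
}
```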

Improve performance

Importing a single SBOM took:

[2024-04-16T11:07:11.134Z INFO  trustify_module_ingestor::service::sbom] Ingested - took 57m 28s 566ms 326us 659ns

I don't have any detailed information about which SBOM this was, as the log was full of warnings. However, I do think we need to gather some data and start working on it.

Tracking:

Hash more than just SHA256

Some day SHA256 may be broken, so let's go ahead and also hash SHA384 and whatever else seems sensible.

Consistent API paths

Some stuff is under /api/v1

Some stuff is right at the root.

We should be more consistent.

Promote `backend/*` to the root of the repo

In light of #47

Having a Cargo.toml at the root simplifies the dev cycle.

Further, let's put everything that's not a cargo workspace beneath a single directory. Do we like etc/, var/ or something else?

Importer POST /api/v1/importer/{name} should be /api/v1/importer/

At the moment we are using a path parameter to define the name of an importer. Although it works, we should follow common practices when defining our endpoints. Here is a basic example:

  • GET /personas: List of personas
  • POST /personas: Create a single persona. Any info required goes in the body of the POST request
  • GET /personas/{id}: get a single persona
  • PUT /personas/{id}: update a persona
  • DELETE /personas/{id}: delete a persona

So /api/v1/importer/{name} should be changed to /api/v1/importer/, and the name should be defined in the body of the request.

trustify/server build error

cargo test

error: failed to run custom build command for `sequoia-openpgp v1.19.0`
Caused by:
  process didn't exit successfully: `Desktop/tc/trustify/target/debug/build/sequoia-openpgp-49fc058589fa4216/build-script-build` (exit status: 1)
  --- stderr
  No cryptographic backend selected.

  Sequoia requires a cryptographic backend.  This backend is selected at compile
  time using feature flags.

  See https://crates.io/crates/sequoia-openpgp#crypto-backends

Nettle-related? See #61.

Make db::DbStrategy go away

In reality, with the exception of for_test(...) we don't use db::DbStrategy::Managed(Arc<...>), while we do use config::DbStrategy::Managed but ultimately manage it from the trustd wrapper around the DB. The Db we construct and give to Graph is External from its POV, which makes complete sense.

I suspect we should rework the for_test(...) logic to be more test-fixture-y, and less ingrained in the Db/Graph logic.

How do I see the swagger-ui?

I think this should be enough...

cargo run --bin trustd -- --auth-disabled

...to make this url functional in a browser:

http://localhost:8080/swagger-ui/openapi.json

It doesn't currently work. Let's fix either it or my expectations, thanks!

Warning about non-fully-qualified package

Some SBOMs provide a long list of messages like:

[2024-04-16T11:18:45.031Z WARN  trustify_module_graph::graph::sbom] unable to ingest relationships involving a non-fully-qualified package pkg://generic/libnestegg
[2024-04-16T11:18:45.054Z WARN  trustify_module_graph::graph::sbom] unable to ingest relationships involving a non-fully-qualified package pkg://generic/libnestegg
[2024-04-16T11:18:46.405Z WARN  trustify_module_graph::graph::sbom] unable to ingest relationships involving a non-fully-qualified package pkg://golang/sync/atomic
[2024-04-16T11:18:46.428Z WARN  trustify_module_graph::graph::sbom] unable to ingest relationships involving a non-fully-qualified package pkg://golang/sync/atomic
[2024-04-16T11:18:46.452Z WARN  trustify_module_graph::graph::sbom] unable to ingest relationships involving a non-fully-qualified package pkg://golang/sync/atomic
[2024-04-16T11:18:46.475Z WARN  trustify_module_graph::graph::sbom] unable to ingest relationships involving a non-fully-qualified package pkg://golang/sync/atomic

Each entry takes seconds to process. I wonder whether it is necessary to process those entries multiple times. And of course the questions remain: why is that so? What can a user do? How can we properly report this? (OK, more than one question.)

Explore alternative table structure for packages

Right now, ingesting SBOMs is slow (see #165). A first investigation shows that a lot of traffic happens on the tables around packages (package versions, qualified packages, …). I think it makes sense to explore alternative ways of storing this information.

Ideas:

  • flat table: package url, split up, qualifiers as JSON field (?)
  • ltree: might not work as we don't have actual "labels"
  • using values as primary key (vs auto-increments), not sure this works

One aspect of this is also the question of whether a package "belongs" to an SBOM, or is just referenced by one. The question is what should happen when an SBOM gets deleted; IMHO this should remove all packages stored in the database as well.

REST Endpoint: Advisories

We need an endpoint, e.g. GET /advisories, that returns a paginated list of advisories. The endpoint should accept as input:

  • offset, limit (for pagination)
  • sorting
  • Filtering (let's also discuss which filters we plan to support; so far I think severity is a good one)

The endpoint should return as output:

  • List of items that match the input criteria
  • Number of total items that match the input criteria

The data will be used to populate the following table:

Screenshot from 2024-04-02 15-25-54

DRY up the query code

          I've also used this pattern, and while not on your shoulders, I feel like we are repeating ourselves a lot, and perhaps @jcrossley3 could get us some more ergonomics around using search/sort.

Originally posted by @bobmcwhirter in #167 (comment)

Swagger: define importer schemas

It is not possible to know which body is required to be sent to the importer endpoints:

Screenshot from 2024-04-12 15-51-35

This affects multiple endpoints related to the importers.

AuthN/AuthZ

What's needed for authentication?

Do we need Keycloak? Can we defer (for dev at least?) to GitHub OAuth or similar?

Is it possible to simplify Database::for_test ?

trustify git:(main) rg "Database\:\:for_test"
modules/ingestor/src/service/cve/loader.rs
80:        let db = Database::for_test("ingestors_cve_loader").await?;

modules/importer/src/test.rs
30:    let db = Database::for_test("test_default").await.unwrap();
125:    let db = Database::for_test("test_oplock").await.unwrap();

modules/ingestor/src/service/advisory/osv/loader.rs
154:        let db = Database::for_test("ingestors_osv_loader").await?;

modules/ingestor/src/endpoints/advisory.rs
74:        let db = Database::for_test("upload_advisory_csaf").await?;
101:        let db = Database::for_test("upload_advisory_osv").await?;
128:        let db = Database::for_test("upload_unknown_format").await?;

modules/graph/src/graph/vulnerability/mod.rs
146:        let db = Database::for_test("ingest_cve").await?;
173:        let db = Database::for_test("get_advisories_from_cve").await?;

modules/graph/src/graph/advisory/mod.rs
469:        let db = Database::for_test("ingest_advisories").await?;
507:        let db = Database::for_test("ingest_affected_package_version_range").await?;
564:        let db = Database::for_test("ingest_fixed_package_version").await?;
624:        let db = Database::for_test("ingest_advisory_cve").await?;
651:        let db = Database::for_test("advisory_affected_vulnerability_assertions").await?;
692:        let db = Database::for_test("advisory_not_affected_vulnerability_assertions").await?;

modules/graph/src/graph/sbom/mod.rs
685:        let db = Database::for_test("ingest_sboms").await?;
710:        let db = Database::for_test("ingest_and_fetch_sboms_describing_purls").await?;
760:        let db = Database::for_test("ingest_and_locate_sboms_describing_cpes").await?;
810:        let db = Database::for_test("transitive_dependency_of").await?;
879:        let db = Database::for_test("ingest_contains_packages").await?;
951:        let db = Database::for_test("sbom_vulnerabilities").await?;

modules/graph/src/graph/package/package_version.rs
218:        let db = Database::for_test("package_version_not_affected_assertions").await?;

modules/graph/src/graph/sbom/spdx.rs
174:        let db = Database::for_test("parse_spdx_quarkus").await?;
228:        let db = Database::for_test("parse_spdx_openshift").await?;
280:        let db = Database::for_test("parse_spdx").await?;

modules/graph/src/graph/package/mod.rs
618:        let db = Database::for_test("ingest_packages").await?;
651:        let db = Database::for_test("ingest_package_versions_missing_version").await?;
676:        let db = Database::for_test("ingest_package_versions").await?;
711:        let db = Database::for_test("get_versions_paginated").await?;
768:        let db = Database::for_test("ingest_qualified_packages_transactionally").await?;
808:        let db = Database::for_test("ingest_qualified_packages").await?;
866:        let db = Database::for_test("ingest_package_version_ranges").await?;
918:        let db = Database::for_test("package_affected_assertions").await?;
1011:        let db = Database::for_test("package_not_affected_assertions").await?;
1066:        let db = Database::for_test("package_vulnerability_assertions").await?;
1130:        let db = Database::for_test("advisories_mentioning_package").await?;

modules/graph/src/graph/package/qualified_package.rs
142:        let db = Database::for_test("vulnerability_assertions").await?;

It works with the same database name

let db = Database::for_test("a").await?;

#136 (comment)

But I don't know the reason so I'm asking, thanks 👍

Documents having their own document ID need to be grouped

For documents which have their own document ID and which can change over time (like CSAF documents), there must be a way to get the latest document by its document ID. When searching, only the latest document should be searched and returned (unless we want an additional API to search through the history).
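The grouping requirement can be sketched as a reduction that keeps the newest revision per document ID (a hedged sketch: the timestamp ordering and field shapes are assumptions, not the project's schema):

```rust
use std::collections::HashMap;

// Given several revisions keyed by their own document ID, keep only the
// latest revision per ID ("latest" here = greatest timestamp).
fn latest_per_id(docs: &[(&str, u64)]) -> HashMap<String, u64> {
    let mut latest: HashMap<String, u64> = HashMap::new();
    for (id, ts) in docs {
        latest
            .entry((*id).to_string())
            .and_modify(|t| if *ts > *t { *t = *ts })
            .or_insert(*ts);
    }
    latest
}

fn main() {
    let docs = [
        ("RHSA-2024:0001", 1), // superseded revision
        ("RHSA-2024:0001", 3), // latest revision of the same document
        ("RHSA-2024:0002", 2),
    ];
    let latest = latest_per_id(&docs);
    assert_eq!(latest["RHSA-2024:0001"], 3);
    assert_eq!(latest["RHSA-2024:0002"], 2);
}
```

In SQL terms this corresponds to selecting one row per document ID ordered by revision time, so default searches only ever see the head of each group.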

Querying dates is broken

We can't consider every value a String, obviously. We probably need to use sea_query::Value instead...

[jim@fedora trustify]$ http localhost:8080/api/v1/search/advisory 'q==denial&published>2023-11-03'
HTTP/1.1 500 Internal Server Error
access-control-allow-credentials: true
access-control-expose-headers: content-type
content-encoding: gzip
content-type: application/json
date: Mon, 15 Apr 2024 15:51:47 GMT
transfer-encoding: chunked
vary: Origin, Access-Control-Request-Method, Access-Control-Request-Headers
vary: accept-encoding

{
    "error": "Internal",
    "message": "database error: Query Error: error returned from database: operator does not exist: timestamp with time zone > text"
}

Upload Advisory should only require the file, everything else should be determined by the server

Current status:

https://github.com/trustification/trustify/blob/52b2cc9dfaa325ffcfdaee30e38371e444443b1f/modules/ingestor/src/endpoints/advisory.rs#L14C1-L26C23

#[utoipa::path(
    tag = "ingestor",
    request_body = Vec <u8>,
    params(
        ("format" = String, Query, description = "Format of the submitted advisory document (`csaf`, `osv`, ...)"),
        ("location" = String, Query, description = "Source the document came from"),
    ),
    responses(
        (status = 201, description = "Upload a file"),
        (status = 400, description = "The file could not be parsed as an advisory"),
    )
)]
#[post("/advisories")]

Problem

  • Format & location query parameters are required to upload a file

This means the UI cannot simply take a file and upload it. It requires the user to fill out a form specifying the format of the file being uploaded, which is, in my opinion, error-prone and not user friendly. Imagine uploading a file using curl but having to determine its format first instead of just uploading it. The problem is for humans, probably not for other apps.

Proposed solution
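The issue leaves the proposal itself unspecified, but the title implies the server should determine the format. One hypothetical approach (a sketch, not the project's code) is sniffing format-specific top-level keys of the uploaded JSON:

```rust
// Guess the advisory/SBOM format from document content instead of requiring
// a `format` query parameter. Key probes are simplistic string checks here;
// a real implementation would parse the JSON first.
fn guess_format(doc: &str) -> &'static str {
    if doc.contains("\"csaf_version\"") {
        "csaf"
    } else if doc.contains("\"spdxVersion\"") {
        "spdx"
    } else if doc.contains("\"schema_version\"") {
        "osv"
    } else {
        "unknown"
    }
}

fn main() {
    assert_eq!(guess_format(r#"{"document":{"csaf_version":"2.0"}}"#), "csaf");
    assert_eq!(guess_format(r#"{"schema_version":"1.4.0","id":"OSV-1"}"#), "osv");
    assert_eq!(guess_format(r#"{"spdxVersion":"SPDX-2.3"}"#), "spdx");
}
```

With detection server-side, the upload endpoint would only need the file body, and curl or the UI could post it directly.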

Don't spam CI reports

I find the warnings spamming the CI reports massively annoying and distracting.

My expectation would be that if there are warnings like that, a PR won't get merged before they are resolved. Or we disable them, since we don't seem to care.

Right now, reviewing a PR and getting dozens of unrelated warnings is not helping.

Pull up infrastructure to `trustd`

Since a recent change, the infrastructure part no longer wraps the lifecycle of the application, only the HTTP part. That should be changed.

Whatever trustd runs should be wrapped with the infrastructure stuff. That would also eliminate the need to use println.

Change all the `println` to use log `log::info | debug`

Tip

We can remove cases like this: println!("abc");

api/src/system/mod.rs
121:        println!("bootstrap to {}", url);

cli/src/main.rs
29:                eprintln!("Error: {err}");
32:                        eprintln!("Caused by:");
34:                    eprintln!("\t{err}");

api/src/system/sbom/spdx.rs
41:                                    //println!("describes pkg {}", reference.reference_locator);
48:                                    //println!("describes cpe22 {}", reference.reference_locator);
67:                                        //println!("pkg_a: {}", package_a);
210:        println!("{:#?}", described_packages);
220:        println!("{}", contains.len());
226:        println!("parse {}ms", parse_time.as_millis());
227:        println!("ingest {}ms", ingest_time.as_millis());
228:        println!("query {}ms", query_time.as_millis());
263:        println!("{:#?}", described_packages);
273:        println!("{}", contains.len());
279:        println!("parse {}ms", parse_time.as_millis());
280:        println!("ingest {}ms", ingest_time.as_millis());
281:        println!("query {}ms", query_time.as_millis());
323:        println!("{}", contains.len());
329:        println!("parse {}ms", parse_time.as_millis());
330:        println!("ingest {}ms", ingest_time.as_millis());
331:        println!("query {}ms", query_time.as_millis());

api/src/system/sbom/mod.rs
621:            println!("no package");
776:        println!("a");
784:        println!("b");
792:        println!("c");
800:        println!("d");

api/src/system/advisory/csaf/mod.rs
54:                                    println!("{}", package.to_string());
120:        println!("{:#?}", assertions);

importer/src/sbom/process.rs
53:            println!(
