wolfictl's Issues

Ability to produce accurate VEX data for non-latest versions of packages

Today, the wolfictl vex ... commands can create VEX data for the latest version of any given Wolfi package.

But we need the ability in wolfictl to produce VEX data for non-latest versions of packages, too. This could be because we've pinned to a fixed version of a package, or because we need to get the latest VEX data for a downstream artifact that already has a particular version of a Wolfi package (perhaps it was the latest version at the time, but time has passed since the downstream artifact was built).

wolfictl vex package should have the option (e.g. a flag) to specify a particular version of the package in question, and it should be able to assemble a correct VEX document (and statement history within) to describe that version of the package.

And wolfictl vex sbom should be able to follow the same approach, but taking into account the version of the package documented in the SBOM, which may not be the latest version of the Wolfi package.

Note: This almost certainly means we need to evolve the advisory data structure somehow to describe how statements relate to versions more precisely.

Lint: please add "detected-spdx-license-mismatch" linter

Description

Please add a new lint check. Use a license detector (for example https://github.com/go-enry/go-license-detector/tree/master) to detect the SPDX license of a given package.

Compare it to the declared SPDX license, and raise a warning if they mismatch.

Allow humans to fix the declared license to match the detected one, or override the lint check with #nolint in cases of confusing or undetectable licensing.
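A minimal sketch of what such a check could look like. The finding type, function names, and confidence threshold here are illustrative, not wolfictl's actual lint API:

```go
package main

import (
	"fmt"
	"strings"
)

// licenseFinding is a hypothetical shape for a lint finding.
type licenseFinding struct {
	Declared string
	Detected string
	Score    float64
}

// checkLicenseMismatch flags a package when the detected SPDX ID differs from
// the declared one and the detector's confidence is high enough to trust.
func checkLicenseMismatch(declared, detected string, confidence float64) *licenseFinding {
	const threshold = 0.9 // only flag high-confidence detections
	if confidence < threshold {
		return nil
	}
	if strings.EqualFold(declared, detected) {
		return nil
	}
	return &licenseFinding{Declared: declared, Detected: detected, Score: confidence}
}

func main() {
	// Mirrors the gcc example below: declared GPL-3.0-or-later, detected GPL-2.0-or-later.
	if f := checkLicenseMismatch("GPL-3.0-or-later", "GPL-2.0-or-later", 0.97); f != nil {
		fmt.Printf("detected-spdx-license-mismatch: declared %q but detected %q (%.0f%%)\n",
			f.Declared, f.Detected, f.Score*100)
	}
}
```

A real implementation would feed the detector's top result (and score) into the check, rather than hard-coded strings.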

Real world example on a large code base

$ time license-detector /tmp/gcc-13.2.0
/tmp/gcc-13.2.0
	99%	GCC-exception-3.1
	99%	LGPL-2.1-only
	99%	deprecated_LGPL-2.1
	99%	LGPL-2.1-or-later
	99%	deprecated_LGPL-2.1+
	98%	deprecated_GPL-3.0-with-GCC-exception
	97%	deprecated_GPL-2.0+
	97%	GPL-2.0-or-later
	97%	GPL-2.0-only
	97%	deprecated_GPL-2.0

real	0m4.400s
user	0m5.240s
sys	0m0.171s

Declared license in the package

$ git grep license gcc.yaml
gcc.yaml:    - license: GPL-3.0-or-later

Which is incorrect.

Advisory data: tombstone events for withdrawn packages

Context

We recently added some much needed validation of our advisory data into wolfictl, which is used as a CI check in our advisories repos. The validation rules relevant to this proposal are:

  1. "Fixed versions" of packages must exist in the relevant APKINDEX.
  2. "Fixed versions" must not be the first version of a package in the APKINDEX.
  3. Existing event data must not be removed or modified. Updating the status of an advisory should be achieved by appending a new event to the end of the sequence.

Rules 1 and 2 are checked across all data in the advisories repo. Rule 3 is a function of what was changed in the current PR (relative to the designated fork point).

Meanwhile... we also withdraw packages (i.e. specific APK files) from the distro from time to time.

This results in an unpleasant side effect where: a new fixed event can be added that's valid because the package version exists, then the package version is withdrawn, and then validation runs again and fails.

Proposal

(Credit to @jonjohnsonjr for this idea 🧠 )

To allow our advisory data entry workflow to satisfy our validation checks, continue with our transparent "append only" philosophy, and account for withdrawn packages, we could create a new event type to act as a tombstone entry, which says that a previously referenced fixed version no longer exists.

The impact on downstream data transformation, and on the secdb in particular, would be that we no longer report that fixed version for the advisory — the fixed information is effectively reverted to its state prior to the original fixed event.

We would update our validation rules such that:

  1. A fixed event is allowed to refer to a non-existent APK version, as long as there exists a later event in the advisory's event sequence that's the "tombstone" event for that package version.
  2. A tombstone event must specify a package version that's referenced earlier in the advisory's event sequence.

It would also be great to have the dev tooling and automation help us, such as by automatically adding tombstone events as needed at time of package withdrawal.
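As an illustrative sketch only (the event type name `fixed-version-withdrawn` and the exact field layout are hypothetical, not part of today's advisory schema), an advisory's event sequence with a tombstone might look like:

```yaml
advisories:
  - id: CVE-2023-1234
    events:
      - timestamp: 2023-06-01T00:00:00Z
        type: fixed
        data:
          fixed-version: 1.2.3-r4
      - timestamp: 2023-06-05T00:00:00Z
        type: fixed-version-withdrawn   # the "tombstone" event
        data:
          fixed-version: 1.2.3-r4       # must reference an earlier fixed version
```

After the tombstone, downstream transformations (e.g. the secdb) would behave as if the original fixed event had never been recorded.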

advisory discover: cache NVD API query results

Querying NVD's API is expensive. Even in the best case, where the user has an API key, we can only make requests at ~1.7 reqs/sec. And today Wolfi has ~1400 package definitions according to wolfictl ls.

NVD's response data for a given request (CPE) is unlikely to change frequently. We should consider caching API responses locally for some duration of time (e.g. 24 hours). This would greatly speed up the total runtime of the wolfictl advisory discover command.

And meanwhile, even with cached data, we would still be able to:

  • Detect new matches using cached data, e.g. when a new version stream is added
  • Detect new matches using uncached data for newly added distro packages

wolfictl segfaults if a nonexistent command option is added

Built wolfictl from source at 0e133a7

 wolfictl lint yam
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x127560a]

goroutine 1 [running]:
github.com/wolfi-dev/wolfictl/pkg/melange.ReadAllPackagesFromRepo.func1({0x7ffe8439b1c3, 0x3}, {0x0?, 0x0?}, {0xc00087e730?, 0x4291a5?})
	/home/strongjz/Documents/code/go/src/github.com/wolfi/wolfictl/pkg/melange/melange.go:71 +0x4a
path/filepath.Walk({0x7ffe8439b1c3, 0x3}, 0xc00087e820)
	/usr/local/bin/go/src/path/filepath/path.go:518 +0x50
github.com/wolfi-dev/wolfictl/pkg/melange.ReadAllPackagesFromRepo({0x7ffe8439b1c3, 0x3})
	/home/strongjz/Documents/code/go/src/github.com/wolfi/wolfictl/pkg/melange/melange.go:70 +0xc8
github.com/wolfi-dev/wolfictl/pkg/lint.(*Linter).Lint(0xc0009a27d0)
	/home/strongjz/Documents/code/go/src/github.com/wolfi/wolfictl/pkg/lint/linter.go:49 +0x67
github.com/wolfi-dev/wolfictl/pkg/cli.lintOptions.LintCmd({{0xc0002b22d0, 0x1, 0x1}, 0x0, 0x0, {0x409b470, 0x0, 0x0}})
	/home/strongjz/Documents/code/go/src/github.com/wolfi/wolfictl/pkg/cli/lint.go:48 +0xac
github.com/wolfi-dev/wolfictl/pkg/cli.Lint.func1(0xc00099cc00?, {0xc0002b22d0?, 0x1?, 0x1?})
	/home/strongjz/Documents/code/go/src/github.com/wolfi/wolfictl/pkg/cli/lint.go:29 +0x95
github.com/spf13/cobra.(*Command).execute(0xc00099cc00, {0xc0002b2290, 0x1, 0x1})
	/home/strongjz/Documents/code/go/pkg/mod/github.com/spf13/[email protected]/command.go:916 +0x862
github.com/spf13/cobra.(*Command).ExecuteC(0xc00099c300)
	/home/strongjz/Documents/code/go/pkg/mod/github.com/spf13/[email protected]/command.go:1044 +0x3bd
github.com/spf13/cobra.(*Command).Execute(...)
	/home/strongjz/Documents/code/go/pkg/mod/github.com/spf13/[email protected]/command.go:968
main.main()
	/home/strongjz/Documents/code/go/src/github.com/wolfi/wolfictl/main.go:10 +0x1e

Create GitHub Action for wolfictl

Description

I think it would be nice to have a GitHub Action for wolfictl. I strongly believe there will be a bunch of new subcommands in wolfictl, and people may want to use these in their pipelines in the near future. I can't list the use cases yet, but requirements will show the path.

An example use case: wolfi-dev/os#278

Dropping the idea here for further discussion. Would it be too early to do this?

advisory discover: handle version streams correctly

Today the wolfictl advisory discover command is looking up vulnerabilities for each package definition.

But since we have the concept of "version streams", we can have a group of multiple package definitions that refer to the same package, just at different versions. In this case, we should not be issuing a request to NVD for each of these definitions (e.g. one search for go-1.19, one for go-1.20, etc.), both because the requests would be redundant and because the version stream names are less likely to result in CPE matches (i.e. causing false negatives).

We should issue one request per "real software package" (i.e. the deduplication of a group of version streams), and then use version data for each version stream as we filter NVD's response data for relevant vulnerability matches.
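As a rough sketch, version-stream names could be collapsed down to a base package name before querying, so only one NVD request goes out per group. The suffix convention assumed here (a trailing `-<version>`) is an illustration of Wolfi's naming, not a definitive parser:

```go
package main

import (
	"fmt"
	"regexp"
)

// versionStreamSuffix matches trailing version-stream suffixes like "-1.20" or "-11".
var versionStreamSuffix = regexp.MustCompile(`-\d+(\.\d+)*$`)

// baseName strips the version-stream suffix: "go-1.20" -> "go".
func baseName(pkg string) string {
	return versionStreamSuffix.ReplaceAllString(pkg, "")
}

// dedupe groups package definitions by base name, so one NVD query can be
// issued per group and the per-stream versions used only for filtering results.
func dedupe(pkgs []string) map[string][]string {
	groups := map[string][]string{}
	for _, p := range pkgs {
		b := baseName(p)
		groups[b] = append(groups[b], p)
	}
	return groups
}

func main() {
	groups := dedupe([]string{"go-1.19", "go-1.20", "openjdk-11", "openjdk-17", "bash"})
	fmt.Println(len(groups), "queries instead of 5") // 3 queries instead of 5
}
```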

cmd/advisory: sign the output file during export to verify later from consumers

Description

In the advisories repo, we currently do not sign the security.json artifact 1 generated in the build-and-publish-secdb.yaml action. This file exists to be consumed by scanner DB pipelines.

The idea is to generate a signed output so that consumers (e.g., Trivy, Grype) could verify it later on (by adding support for that).

Dropping the idea here so we don't forget!

/cc @luhring @developer-guy

Footnotes

  1. https://github.com/wolfi-dev/advisories/blob/d9c3b43ed002e3027779cca9caa4084a1f7ec69e/.github/workflows/build-and-publish-secdb.yaml#L43

Extend wolfictl lint to take a config file that allows customisations of certain rules, specific to a repo

The rules defined in wolfictl lint are great. I wonder if we could provide a way to use them on more repos, with slightly different customisations of the rules.

I.e. a different repo may forbid a different set of repositories + keyrings.

var (
	forbiddenRepositories = []string{
		"https://packages.wolfi.dev/os",
	}
	forbiddenKeyrings = []string{
		"https://packages.wolfi.dev/os/wolfi-signing.rsa.pub",
	}
)

For example, rather than Wolfi OS, we could forbid the Wolfi bootstrapping repo and keyring?
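For illustration, a per-repo config file could look like the following. The key names and the bootstrap URLs are placeholders, not an existing wolfictl format:

```yaml
# Hypothetical wolfictl lint config, checked into each repo.
forbidden-repositories:
  - https://packages.wolfi.dev/bootstrap/stage3
forbidden-keyrings:
  - https://packages.wolfi.dev/bootstrap/stage3/wolfi-signing.rsa.pub
```

wolfictl lint could fall back to its current hard-coded defaults when no such file is present, keeping existing behavior unchanged.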

Use CPE dictionary to improve recall on NVD API detector

Context

Today, we search NVD for CVEs by constructing a CPE per package, and using that CPE for CVE lookups. The approach to this CPE generation was taken from Alpine's secfixes-tracker project, and extended slightly in some areas. Today the CPE generation code is here: https://github.com/wolfi-dev/wolfictl/blob/main/pkg/vuln/nvdapi/detector.go#L331-L440

This code helps the wolfictl adv discover command find new CVEs for our packages. While the precision of today's implementation is high, the recall is unverified and probably on the low side.

Idea

We could probably greatly improve recall by examining NVD's CPE Dictionary and trying to find dictionary entries that correspond to Wolfi packages. We could treat any hits in the dictionary as authoritative CPEs and avoid generating our own CPEs in that case.

We could extend today's CPE approach by manually reviewing the CPE dictionary, and that alone would be a win.

Bonus: It would be even cooler if we can automate this dictionary lookup, either by embedding some form of the dictionary in wolfictl, or by having wolfictl fetch and parse the dictionary at runtime.
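The dictionary-first lookup could be sketched roughly like this, where the dictionary entries are a stand-in for some parsed form of NVD's CPE Dictionary and the fallback generator represents today's CPE-construction code:

```go
package main

import (
	"fmt"
	"strings"
)

// dictEntry is a simplified stand-in for an NVD CPE Dictionary record.
type dictEntry struct {
	CPE     string
	Product string
}

// lookupCPE prefers an authoritative dictionary CPE whose product matches the
// package name, and only falls back to a locally generated CPE on a miss.
func lookupCPE(pkg string, dict []dictEntry, generate func(string) string) string {
	for _, e := range dict {
		if strings.EqualFold(e.Product, pkg) {
			return e.CPE // authoritative: trust the dictionary
		}
	}
	return generate(pkg)
}

func main() {
	dict := []dictEntry{{CPE: "cpe:2.3:a:gnu:bash:*:*:*:*:*:*:*:*", Product: "bash"}}
	gen := func(p string) string { return "cpe:2.3:a:" + p + ":" + p + ":*:*:*:*:*:*:*:*" }
	fmt.Println(lookupCPE("bash", dict, gen))
	fmt.Println(lookupCPE("wolfictl", dict, gen))
}
```

Real matching would need to be smarter than an exact product-name comparison (vendor disambiguation, aliases), but the precedence rule is the core idea.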

What is the license of the wolfi and chainguard secdb?

I could not find any license information for the secdb data for wolfi and chainguard.
Can you clarify what would be the license?
These are the data published at:

I need a license to integrate this in https://github.com/nexb/vulnerablecode

For reference, the Alpine secdb has a license at https://secdb.alpinelinux.org/license.txt
Something similar would be awesome!
Thanks

PS: I am not sure if this issue should be filed only here, or at https://github.com/chainguard-dev/vulnerability-scanner-support/ or should be split in two? Please advise!

Introduce a new `vet` subcommand

Description

The idea is to create a new vet (or similarly named) subcommand to run a vetting pipeline for a given apko or melange manifest just before sending a PR. The motivation is to boost local development productivity so we don't waste time waiting on CI. Moreover, this would be a CLI version of the Wolfi workflows.

Example Usage:

wolfictl vet my-melange-manifest.yaml

We (w/ @developer-guy) thought that we can introduce a new vet subcommand that can do:

  • Identify the given manifest (whether it's melange or apko)

  • Run format check: wolfictl lint yam

  • Run lint check: wolfictl lint

  • Run update check: wolfictl check update

  • Run melange pipeline (optional):

    • melange keygen, if it's the first run or keys do not exist
    • Check that all packages exist in the Wolfi repo
    • Run melange build with args/flags using Docker/Lima/etc.
    • Export the generated .apk to a temp dir
    • Run CVE scans with Grype/Trivy
  • Run apko pipeline (optional):

    • Check that all packages exist in the Wolfi repo
    • Run terraform fmt
    • Run apko build
    • Run CVE scans with Grype/Trivy

Advisory: Simplify the advisory creation flow

Background

The flow of advisory data is increasingly becoming automated (example). Most of the manual changes are the results of the creation of a new package. However, instructions for creating new advisories are unclear, and the creation of advisories is very toilsome.

Problem to solve

Any new delivery specialist should be able to file correct advisories with minimal time, such that the change is merged without further comment. Packagers shouldn't have to learn the ins and outs of all the advisory statuses, nor memorize a mental flowchart to decide the appropriate status.

Proposal

Create a new wolfictl command that would:

  • Accept the name of a new package
  • Scan the main package and all sub-packages
  • For each vulnerability found, ask questions to determine what to enter in the advisory. At the end of the flow, the advisory file should contain accurate information.

make wolfictl go installable

go install github.com/wolfi-dev/wolfictl@latest
go: github.com/wolfi-dev/wolfictl@latest (in github.com/wolfi-dev/[email protected]):
        The go.mod file for the module providing named packages contains one or
        more replace directives. It must not contain directives that would cause
        it to be interpreted differently than if it were the main module.

Enable / Debug dependabot

Description

It was not immediately clear why some of the changes made in melange ~a week ago were not showing up in the CI pipeline, which was then causing some unexpected errors. Thanks to @joshrwolf for debugging and fixing it here:
#396

We should be getting a more up-to-date melange. This is especially important if new pipelines are added, like here:
chainguard-dev/melange#679

My expectation was that this would surface ~the next day, after all the propagation through dependabot, digestabot, etc., and that clearly didn't happen here. Seems like if we could remove the manual step above, we'd be well on our way to not having to remember to manually bump things.

@cpanato would you mind taking a look at this and see if this could be sorted?

wolfictl repo: subcommand for working with APK repositories

Propose wolfictl repo which helps with working with APK repositories

Example - generate a static APK repo:

wolfictl repo generate <package...> [--out-dir=<dir>]

this would generate a repo at <dir> (default: ./apk-repo/):

./apk-repo/APKINDEX.tar.gz
./apk-repo/packages/x86_64/brotli-1.0.9-r0.apk
./apk-repo/packages/x86_64/autoconf-2.71-r0.apk
./apk-repo/packages/x86_64/build-base-1-r3.apk
./apk-repo/packages/x86_64/busybox-1.35.0-r3.apk
./apk-repo/packages/x86_64/ca-certificates-bundle-20220614-r2.apk
...

Then we can enable people to host their own APK repos, by simply uploading this directory to their static host of choice. Similar to some of the things done in https://github.com/helm/chart-releaser (cc @cpanato).

DAG resolveCycle is incomplete and can fail

The current method to resolve a cycle, found here, uses the following logic:

  1. I found a cycle when adding A -> B
  2. Find the shortest path from B -> A (i.e. shortest path that is causing the cycle)
  3. Remove the last link (edge) in the shortest path
  4. Add A -> B, which now should work
  5. Add back that removed edge, which now should see that it cannot do what it did before, and resolve to a different dependency

The above works kind of well, but only when two things are true:

  • There is only one dependency B -> A; if there are multiple, you are stuck
  • There is a different way to resolve that last removed link, normally something in upstream; if not, you are stuck

resolveCycle() is thus making assumptions about how to resolve the dependencies that are not necessarily true.

The current algorithm is improved over a previous one, but still not generic enough. We need a more generic one.

For a practical example, see the diagram below (graphviz source lower down). We want to add patch:2.7.6-r5@local -> autoconf. Unfortunately, autoconf exists only locally, not upstream in bootstrap, so the only thing it can depend upon is autoconf:2.71-r2@local. Except that autoconf:2.71-r2@local already depends on patch:2.7.6-r5@local, not directly, but via multiple paths.

When it tries to resolve, it finds the shortest path autoconf:2.71-r2@local -> busybox:1.36.1-r0@local -> patch:2.7.6-r5@local. It removes the busybox -> patch dependency, tries again... and fails, because another path exists: binutils -> patch.

This does not have to be limited to 1 or 2; it could be dozens.

We need a better generic algorithm.
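One building block for a more generic algorithm, sketched under the assumption of a simple string-keyed adjacency map (not wolfictl's actual DAG types): instead of finding only the shortest B -> A path and cutting its last edge, enumerate every edge that lies on *some* path from B to A. All of these edges keep the cycle alive once A -> B is added, so a general resolver must consider all of them:

```go
package main

import "fmt"

type graph map[string][]string

// reachable returns the set of nodes reachable from start via DFS.
func reachable(g graph, start string) map[string]bool {
	seen := map[string]bool{start: true}
	stack := []string{start}
	for len(stack) > 0 {
		n := stack[len(stack)-1]
		stack = stack[:len(stack)-1]
		for _, m := range g[n] {
			if !seen[m] {
				seen[m] = true
				stack = append(stack, m)
			}
		}
	}
	return seen
}

// reverse builds the transpose of g.
func reverse(g graph) graph {
	r := graph{}
	for u, vs := range g {
		for _, v := range vs {
			r[v] = append(r[v], u)
		}
	}
	return r
}

// cycleEdges lists every edge (u,v) on some path from b to a:
// u must be reachable from b, and a must be reachable from v.
func cycleEdges(g graph, a, b string) [][2]string {
	fromB := reachable(g, b)
	toA := reachable(reverse(g), a) // nodes that can reach a
	var edges [][2]string
	for u, vs := range g {
		if !fromB[u] {
			continue
		}
		for _, v := range vs {
			if toA[v] {
				edges = append(edges, [2]string{u, v})
			}
		}
	}
	return edges
}

func main() {
	// Trimmed version of the autoconf example: two distinct paths reach patch,
	// so cutting only the shortest one (via busybox) is not enough.
	g := graph{
		"autoconf": {"busybox", "binutils"},
		"busybox":  {"patch"},
		"binutils": {"patch"},
	}
	fmt.Println(len(cycleEdges(g, "patch", "autoconf")), "edges keep the cycle alive")
}
```

A full fix would still need a policy for re-resolving each such edge (e.g. to an upstream package), but computing the complete edge set avoids the "remove one, fail on the next" behavior described above.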

graph

digraph dependency_tree {
    node [shape=box, fontcolor=black];
    "autoconf:2.71-r2@local" [color=red, fontcolor=red];
    "binutils:2.40-r3@local" [color=red, fontcolor=red];
    "patch:2.7.6-r5@local" [color=red, fontcolor=red];
    "build-base:1-r5@local";
    "busybox:1.36.1-r0@local" [color=red, fontcolor=red];
    "ca-certificates-bundle:20230506-r0@local";
    "m4:1.4.19-r4@local" [color=red, fontcolor=red];
    "perl:5.36.1-r0@local";
    "scanelf:1.3.7-r0@local";
    "make:4.3-r3@local";
    "wget:1.21.4-r0@local";
    "binutils:2.39-r1@https://packages.wolfi.dev/bootstrap/stage3/x86_64";

    "autoconf:2.71-r2@local" -> "binutils:2.40-r3@local" [color=red];
    "binutils:2.40-r3@local" -> "patch:2.7.6-r5@local" [color=red];
    "m4:1.4.19-r4@local" -> "binutils:2.40-r3@local" [color=red];
    "busybox:1.36.1-r0@local" -> "patch:2.7.6-r5@local" [color=red];
    "autoconf:2.71-r2@local" -> "build-base:1-r5@local";
    "autoconf:2.71-r2@local" -> "busybox:1.36.1-r0@local" [color=red];
    "autoconf:2.71-r2@local" -> "ca-certificates-bundle:20230506-r0@local";
    "autoconf:2.71-r2@local" -> "m4:1.4.19-r4@local" [color=red];
    "autoconf:2.71-r2@local" -> "perl:5.36.1-r0@local";
    "autoconf:2.71-r2@local" -> "scanelf:1.3.7-r0@local";
    "autoconf:2.71-r2@local" -> "make:4.3-r3@local";
    "autoconf:2.71-r2@local" -> "wget:1.21.4-r0@local";
    "build-base:1-r5@local" -> "busybox:1.36.1-r0@local";
    "build-base:1-r5@local" -> "binutils:2.39-r1@https://packages.wolfi.dev/bootstrap/stage3/x86_64";
    "make:4.3-r3@local" -> "busybox:1.36.1-r0@local";
    "make:4.3-r3@local" -> "binutils:2.39-r1@https://packages.wolfi.dev/bootstrap/stage3/x86_64";
    "busybox:1.36.1-r0@local" -> "binutils:2.39-r1@https://packages.wolfi.dev/bootstrap/stage3/x86_64";
}

Use `REGEX` instead of `strings.Contains` in `tag-filter`

With the current change #458, tag-filter no longer matches only at the START of the string value. And that is useful for something like openjdk*-ga releases.

But the problem is with some of the current wolfi packages that point to a specific version.
For example:
bazel-3, bazel-5 etc wolfi-dev/os#8094
cython-0 wolfi-dev/os#8093

Though we have ignore-regex-patterns now, as mentioned in the previous PR, writing a negation regex is relatively complex and error-prone.
For example, for bazel-3 we would need to add

ignore-regex-patterns:
    - '^(?!3\.).*'

Solution:

Instead of strings.Contains we can use a regex:

	// the github graphql query filter matches any occurrence of the tag filter
	if ghm.TagFilter != "" {
		regex, err := regexp.Compile(ghm.TagFilter)
		if err != nil {
			return "", errors.Wrapf(err, "failed to compile regex %s", ghm.TagFilter)
		}
		if !regex.MatchString(v) {
			return "", nil
		}
	}
  • This will not change the current functionality: without special chars like '^' or '$', a regex match behaves essentially the same as the strings.Contains function (modulo regex metacharacters such as '.').
  • It avoids the complexity of writing negation regexes. We can just change the tag filter from "3." to "^3." in bazel-3, cython-0, and anywhere else necessary.
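A small demonstration of the claimed equivalence, wrapping the proposed logic in a helper (the function name is illustrative):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// tagMatches applies the tag filter as a regex; an empty filter matches everything.
func tagMatches(filter, tag string) (bool, error) {
	if filter == "" {
		return true, nil
	}
	re, err := regexp.Compile(filter)
	if err != nil {
		return false, err
	}
	return re.MatchString(tag), nil
}

func main() {
	// Plain filter: same result as strings.Contains for this input.
	plain, _ := tagMatches("3.", "v13.2.0")
	fmt.Println(plain, strings.Contains("v13.2.0", "3."))
	// Anchored filter: only tags starting with "3." match.
	anchored, _ := tagMatches("^3.", "3.1.0")
	fmt.Println(anchored)
	rejected, _ := tagMatches("^3.", "13.1.0")
	fmt.Println(rejected)
}
```

Note that '.' in a regex matches any character, so a filter like "3." is very slightly looser than the literal substring; in practice that rarely matters for version-ish tags, and users can escape with `\.` when it does.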

/cc @rawlingsj

lint: add new lint to check if `uri` contains any hard-coded digests

Description

The melange manifest generated by melange convert python <PACKAGE> sometimes contains the digest in the uri:

  - uses: fetch
    with:
      expected-sha256: 942c5a758f98d790eaed1a29cb6eefc7ffb0d1cf7af05c3d2791656dbd6ad1e1
      uri: https://files.pythonhosted.org/packages/9d/be/10918a2eac4ae9f02f6cfe6414b7a155ccd8f7f9d4380d62fd5b955065c3/requests-${{package.version}}.tar.gz

Having that SHA embedded in the URI means the Wolfi bot won't be able to auto-update the package.
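The lint itself could be a simple heuristic over the fetch uri. This is an illustrative sketch, not wolfictl's lint implementation; the regex treats any 32+ character hex path segment as a likely content digest:

```go
package main

import (
	"fmt"
	"regexp"
)

// hardCodedDigest matches a path segment made of 32+ hex characters,
// e.g. the blob hash pythonhosted.org embeds in its download URLs.
var hardCodedDigest = regexp.MustCompile(`/[0-9a-f]{32,}/`)

func hasHardCodedDigest(uri string) bool {
	return hardCodedDigest.MatchString(uri)
}

func main() {
	pinned := "https://files.pythonhosted.org/packages/9d/be/10918a2eac4ae9f02f6cfe6414b7a155ccd8f7f9d4380d62fd5b955065c3/requests-${{package.version}}.tar.gz"
	templated := "https://github.com/lathiat/avahi/releases/download/v${{package.version}}/avahi-${{package.version}}.tar.gz"
	fmt.Println(hasHardCodedDigest(pinned), hasHardCodedDigest(templated)) // true false
}
```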

AFAICS, the following packages are affected:

  • py3-aiohttp
  • py3-aiosignal
  • py3-async-timeout
  • py3-asynctest
  • py3-attrs
  • py3-frozenlist
  • py3-idna-ssl
  • py3-idna
  • py3-llhttp
  • py3-multidict
  • py3-openai
  • py3-pyperclip
  • py3-requests
  • py3-ruamel-yaml
  • py3-tqdm
  • py3-typing
  • py3-yarl

cc @rawlingsj

wolfictl dot does not seem to work, or needs some more docs

Description

The dot command does not seem to work as advertised here: https://github.com/wolfi-dev/wolfictl/blob/main/docs/cmd/wolfictl_dot.md

vaikas@vaikas-mbp os %  wolfictl dot
Error: unable to resolve needs for package ko-fips: unable to load pipeline: open pipelines/go-fips/build.yaml: file does not exist
FATA[0000] error during command execution: unable to resolve needs for package ko-fips: unable to load pipeline: open pipelines/go-fips/build.yaml: file does not exist
vaikas@vaikas-mbp os % ls -l pipelines/go-fips/build.yaml
-rw-r--r--  1 vaikas  staff  2446 Aug 11 11:56 pipelines/go-fips/build.yaml
vaikas@vaikas-mbp os % pwd
/Users/vaikas/projects/go/src/github.com/wolfi-dev/os

It's entirely possible, if not likely, that I'm holding this wrong.

testing: when using multiple yaml updaters on a file, the content is appended per update

Description

When using two updaters on a file in my unit tests, I got both changes appended (the whole doc is appended at the end) instead of replaced in the specified section of the file or YAML root.

I could only reproduce this issue when using the testing framework. When I use my code on a real file (not using the testing framework), it works 'fine'.

Here I called the packageSectionUpdater and the pipelineSectionUpdater over the same index file in my unit test:

	pipelineSectionUpdater := NewPipelineSectionUpdater(func(cfg config.Configuration) ([]config.Pipeline, error) {
		pipes := cfg.Pipeline
		pipes[1].With["deps"] = "golang/[email protected] k8s.io/[email protected]"
		return pipes, nil
	})

	packageSectionUpdater := NewPackageSectionUpdater(func(cfg config.Configuration) (config.Package, error) {
		p := cfg.Package
		p.Epoch++
		return p, nil
	})

	s := index.Select().WhereName("blah")
	err = s.Update(packageSectionUpdater)
	require.NoError(t, err)
	err = s.Update(pipelineSectionUpdater)
	require.NoError(t, err)

	if diff := fsys.DiffAll(); diff != "" {
		t.Errorf("unexpected file modification results (-want, +got):\n%s", diff)
	}

This produced the following output:

package:
  name: blah
  version: "7"
  epoch: 1
pipeline:
  - uses: fetch
    with:
      expected-sha256: 060309d7a333d38d951bc27598c677af1796934dbd98e1024e7ad8de798fedda
      uri: https://github.com/lathiat/avahi/releases/download/v${{package.version}}/avahi-${{package.version}}.tar.gz
  - uses: go/bump
    with:
      deps: github.com/x/[email protected]
  - uses: patch
    with:
      patches: CVE-2021-3468.patch
subpackages:
  - name: alsa-lib-dev
    pipeline:
      - uses: split/dev
    dependencies:
      runtime:
        - alsa-lib
# Generated by
package:
  name: blah
  version: "7"
  epoch: 1
pipeline:
  - uses: fetch
    with:
      expected-sha256: 060309d7a333d38d951bc27598c677af1796934dbd98e1024e7ad8de798fedda
      uri: https://github.com/lathiat/avahi/releases/download/v${{package.version}}/avahi-${{package.version}}.tar.gz
  - uses: go/bump
    with:
      deps: golang/[email protected] k8s.io/[email protected]
  - uses: patch
    with:
      patches: CVE-2021-3468.patch
subpackages:
  - name: alsa-lib-dev
    pipeline:
      - uses: split/dev
    dependencies:
      runtime:
        - alsa-lib

This is independent of the type of updaters: you can run s.Update(packageSectionUpdater) twice and you'll get the result appended instead of the package section simply being replaced. When I debugged the code I found the YAML section replacement is done correctly, so the error must be somewhere else.

scan: warn when remote scan gets different APK versions for different architectures

Example:

$ wolfictl scan -r cassandra-reaper
📡 Finding remote packages
🔎 Scanning "/var/folders/kl/q9mydw095ln5s7wj971qcrx40000gn/T/x86_64-cassandra-reaper-3.6.0-r0-4249004165.apk"
...
...

🔎 Scanning "/var/folders/kl/q9mydw095ln5s7wj971qcrx40000gn/T/aarch64-cassandra-reaper-3.4.0-r2-32740106.apk"
...

This will help us realize when a difference in vulnerability results between architectures may have a deeper root cause, such as that we've stopped building a certain architecture.
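A sketch of the comparison step, once the per-architecture APK versions have been resolved. The function name and string-map shape are illustrative:

```go
package main

import "fmt"

// archVersionWarning returns a warning string when the same package resolves
// to different versions across architectures, and "" when they agree.
func archVersionWarning(pkg string, versions map[string]string) string {
	first := ""
	for _, v := range versions {
		if first == "" {
			first = v
		} else if v != first {
			return fmt.Sprintf("WARNING: %s resolved to different versions per architecture: %v", pkg, versions)
		}
	}
	return ""
}

func main() {
	// Mirrors the cassandra-reaper example above.
	w := archVersionWarning("cassandra-reaper", map[string]string{
		"x86_64":  "3.6.0-r0",
		"aarch64": "3.4.0-r2",
	})
	fmt.Println(w != "")
}
```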

Remaining perf work for `wolfictl text`

Description

I'm going to stop digging into this, but I wanted to write down what I've found before dropping all context:

Once all the PRs I've sent are merged and those dependencies get bumped, wolfictl text takes ~400ms instead of ~250s (on my machine, at least).

What's left looks like this:

[profiling flame graph image]

The biggest chunk in the middle is calling BuildFlavor, from here:
https://github.com/chainguard-dev/melange/blob/a3b7a002e874b75c318e5d2a2c3c7af7142f456c/pkg/build/pipeline.go#L97-L98

We end up Stating (I think) an empty directory a lot, so fixing that would shave off ~50ms.

There's also a ton of redundant reading of files and parsing of yamls and detecting of commits. I added some logging to see what's going on:

   7 loadUse.ReadFile("pipelines/cmake/install.yaml")
   8 loadUse.ReadFile("pipelines/cmake/build.yaml")
   8 loadUse.ReadFile("pipelines/cmake/configure.yaml")
  15 loadUse.ReadFile("pipelines/go/install.yaml")
  17 loadUse.ReadFile("pipelines/meson/configure.yaml")
  19 loadUse.ReadFile("pipelines/meson/compile.yaml")
  20 loadUse.ReadFile("pipelines/meson/install.yaml")
  31 loadUse.ReadFile("pipelines/ruby/clean.yaml")
  32 loadUse.ReadFile("pipelines/ruby/build.yaml")
  32 loadUse.ReadFile("pipelines/ruby/install.yaml")
  45 loadUse.ReadFile("pipelines/go/build.yaml")
 113 loadUse.ReadFile("pipelines/patch.yaml")
 166 loadUse.ReadFile("pipelines/autoconf/configure.yaml")
 185 loadUse.ReadFile("pipelines/git-checkout.yaml")
 253 loadUse.ReadFile("pipelines/autoconf/make-install.yaml")
 262 loadUse.ReadFile("pipelines/autoconf/make.yaml")
 495 loadUse.ReadFile("pipelines/strip.yaml")
 526 loadUse.ReadFile("pipelines/fetch.yaml")
 687 detectCommit(".")
4576 Stat("lib/libc.so.6")
4576 Stat("lib64/libc.so.6")

I'm a little hesitant to start refactoring things too much, but it seems like it would be straightforward to cache this stuff.

There are also a bunch of little things:

We spend a ton of time in https://github.com/chainguard-dev/go-apk/blob/2829525a71369b8c570b98332bb73639b1e59802/pkg/apk/version.go#L371 doing regex stuff. If it's possible to do this in a non-regexy way, we could save some time.

All of the string manipulation in here could also be a lot faster: https://gitlab.alpinelinux.org/alpine/go/-/blob/master/repository/repository.go

E.g. IndexUri(), NewRepositoryFromComponents, and Url should all be using path.Join instead of fmt.Sprintf.

The Packages() method should initialize pkgs with Count() capacity.

I suspect we could get this down to sub-100ms if we fix everything.

wolfictl scan should initialize the grype DB once

Description

This line is particularly expensive:

datastore, dbStatus, dbCloser, err := grype.LoadVulnerabilityDB(grypeDBConfig, updateDB)

It's actually twice as expensive as it could be: anchore/grype#1502

But the main problem is that we call it for every package we're scanning instead of doing it once per wolfictl invocation.

As a quick win, I'm going to drop the ValidateByHashOnGet from grypeDBConfig, but that's a little janky.

An example of this is that scanning all the glibc and busybox packages took ~15 minutes in CI: https://github.com/wolfi-dev/os/actions/runs/7199389272/job/19611327817?pr=9871
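The once-per-invocation pattern could look like the following sketch, where loadDB is a stand-in for the grype.LoadVulnerabilityDB call (the real call also returns a status and a closer that would need the same treatment):

```go
package main

import (
	"fmt"
	"sync"
)

type db struct{}

var (
	once      sync.Once
	shared    *db
	loadErr   error
	loadCalls int
)

// loadDB stands in for the expensive grype DB load.
func loadDB() (*db, error) {
	loadCalls++ // count how many times the expensive load actually runs
	return &db{}, nil
}

// getDB performs the load exactly once, no matter how many packages we scan.
func getDB() (*db, error) {
	once.Do(func() { shared, loadErr = loadDB() })
	return shared, loadErr
}

func main() {
	for i := 0; i < 3; i++ { // e.g. scanning three packages
		if _, err := getDB(); err != nil {
			panic(err)
		}
	}
	fmt.Println("loads:", loadCalls) // loads: 1
}
```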

advisory create: unable to load APKINDEX for x86_64: opening "": open : no such file or directory

Description

I'm sure I'm holding this tool incorrectly, but I tried to guess how to use wolfictl advisory create in lieu of more elaborate documentation. Here's what I ran:

wolfictl advisory create -p openssl -s fixed -V CVE-2023-2650 --fixed-version=3.1.1-r0 -a . -d ../os

Here was the output:

FATA[0000] error during command execution: unable to load APKINDEX for x86_64: opening "": open : no such file or directory

My current working directory was a clean fork of the advisories repo. The error message is confusing, but looking at the strace output, it was trying to do something with:

[pid 61601] fcntl(215, F_GETFL)         = 0x8000 (flags O_RDONLY|O_LARGEFILE)
[pid 61601] fcntl(215, F_SETFL, O_RDONLY|O_NONBLOCK|O_LARGEFILE) = 0
[pid 61601] epoll_ctl(3, EPOLL_CTL_ADD, 215, {events=EPOLLIN|EPOLLOUT|EPOLLRDHUP|EPOLLET, data={u32=3472783160, u64=139997931796280}}) = -1 EPERM (Operation not permitted)
[pid 61601] fcntl(215, F_GETFL)         = 0x8800 (flags O_RDONLY|O_NONBLOCK|O_LARGEFILE)
[pid 61601] fcntl(215, F_SETFL, O_RDONLY|O_LARGEFILE) = 0
[pid 61601] read(215, "package:\n  name: zot\n  version: "..., 512) = 512
[pid 61601] read(215, ".version}}\n      destination: zo"..., 512) = 509
[pid 61601] read(215, "", 512)          = 0
[pid 61601] newfstatat(AT_FDCWD, ".", {st_mode=S_IFDIR|0755, st_size=4096, ...}, 0) = 0
[pid 61601] newfstatat(AT_FDCWD, "/home/t/src/advisories", {st_mode=S_IFDIR|0755, st_size=4096, ...}, 0) = 0
[pid 61601] newfstatat(AT_FDCWD, "/home/t/src/advisories", {st_mode=S_IFDIR|0755, st_size=4096, ...}, 0) = 0
[pid 61601] newfstatat(AT_FDCWD, "/home/t/src/advisories/.git", {st_mode=S_IFDIR|0755, st_size=4096, ...}, 0) = 0
[pid 61601] newfstatat(AT_FDCWD, "/home/t/src/advisories/.git", {st_mode=S_IFDIR|0755, st_size=4096, ...}, 0) = 0
[pid 61601] newfstatat(AT_FDCWD, "/home/t/src/advisories/.git/HEAD", {st_mode=S_IFREG|0644, st_size=21, ...}, 0) = 0
[pid 61601] openat(AT_FDCWD, "/home/t/src/advisories/.git/HEAD", O_RDONLY|O_CLOEXEC) = 216
[pid 61601] fcntl(216, F_GETFL)         = 0x8000 (flags O_RDONLY|O_LARGEFILE)
[pid 61601] fcntl(216, F_SETFL, O_RDONLY|O_NONBLOCK|O_LARGEFILE) = 0
[pid 61601] epoll_ctl(3, EPOLL_CTL_ADD, 216, {events=EPOLLIN|EPOLLOUT|EPOLLRDHUP|EPOLLET, data={u32=3472783160, u64=139997931796280}}) = -1 EPERM (Operation not permitted)
[pid 61601] fcntl(216, F_GETFL)         = 0x8800 (flags O_RDONLY|O_NONBLOCK|O_LARGEFILE)
[pid 61601] fcntl(216, F_SETFL, O_RDONLY|O_LARGEFILE) = 0
[pid 61601] read(216, "ref: refs/heads/main\n", 512) = 21
[pid 61601] read(216, "", 491)          = 0
[pid 61601] close(216)                  = 0
[pid 61601] newfstatat(AT_FDCWD, "/home/t/src/advisories/.git/HEAD", {st_mode=S_IFREG|0644, st_size=21, ...}, 0) = 0
[pid 61601] openat(AT_FDCWD, "/home/t/src/advisories/.git/HEAD", O_RDONLY|O_CLOEXEC) = 216
[pid 61601] fcntl(216, F_GETFL)         = 0x8000 (flags O_RDONLY|O_LARGEFILE)
[pid 61601] fcntl(216, F_SETFL, O_RDONLY|O_NONBLOCK|O_LARGEFILE) = 0
[pid 61601] epoll_ctl(3, EPOLL_CTL_ADD, 216, {events=EPOLLIN|EPOLLOUT|EPOLLRDHUP|EPOLLET, data={u32=3472783160, u64=139997931796280}}) = -1 EPERM (Operation not permitted)
[pid 61601] fcntl(216, F_GETFL)         = 0x8800 (flags O_RDONLY|O_NONBLOCK|O_LARGEFILE)
[pid 61601] fcntl(216, F_SETFL, O_RDONLY|O_LARGEFILE) = 0
[pid 61601] read(216, "ref: refs/heads/main\n", 512) = 21
[pid 61601] read(216, "", 491)          = 0
[pid 61601] close(216)                  = 0
[pid 61601] newfstatat(AT_FDCWD, "/home/t/src/advisories/.git/refs/heads/main", {st_mode=S_IFREG|0644, st_size=41, ...}, 0) = 0
[pid 61601] openat(AT_FDCWD, "/home/t/src/advisories/.git/refs/heads/main", O_RDONLY|O_CLOEXEC) = 216
[pid 61601] fcntl(216, F_GETFL)         = 0x8000 (flags O_RDONLY|O_LARGEFILE)
[pid 61601] fcntl(216, F_SETFL, O_RDONLY|O_NONBLOCK|O_LARGEFILE) = 0
[pid 61601] epoll_ctl(3, EPOLL_CTL_ADD, 216, {events=EPOLLIN|EPOLLOUT|EPOLLRDHUP|EPOLLET, data={u32=3472783160, u64=139997931796280}}) = -1 EPERM (Operation not permitted)
[pid 61601] fcntl(216, F_GETFL)         = 0x8800 (flags O_RDONLY|O_NONBLOCK|O_LARGEFILE)
[pid 61601] fcntl(216, F_SETFL, O_RDONLY|O_LARGEFILE) = 0
[pid 61601] read(216, "a45fef131bf4ca7c833397ffd4e7d915"..., 512) = 41
[pid 61601] read(216, "", 471)          = 0
[pid 61601] close(216)                  = 0
[pid 61601] openat(AT_FDCWD, "../os/zstd.yaml", O_RDONLY|O_CLOEXEC) = 216
[pid 61601] fcntl(216, F_GETFL)         = 0x8000 (flags O_RDONLY|O_LARGEFILE)
[pid 61601] fcntl(216, F_SETFL, O_RDONLY|O_NONBLOCK|O_LARGEFILE) = 0
[pid 61601] epoll_ctl(3, EPOLL_CTL_ADD, 216, {events=EPOLLIN|EPOLLOUT|EPOLLRDHUP|EPOLLET, data={u32=3472783160, u64=139997931796280}}) = -1 EPERM (Operation not permitted)
[pid 61601] fcntl(216, F_GETFL)         = 0x8800 (flags O_RDONLY|O_NONBLOCK|O_LARGEFILE)
[pid 61601] fcntl(216, F_SETFL, O_RDONLY|O_LARGEFILE) = 0
[pid 61601] read(216, "package:\n  name: zstd\n  version:"..., 512) = 512
[pid 61601] read(216, "e1\n\n  - runs: |\n      make -j$(n"..., 512) = 512
[pid 61601] read(216, "    mkdir -p \"${{targets.subpkgd"..., 512) = 203
[pid 61601] read(216, "", 512)          = 0
[pid 61601] openat(AT_FDCWD, "../os/zstd.yaml", O_RDONLY|O_CLOEXEC) = 217
[pid 61601] fcntl(217, F_GETFL)         = 0x8000 (flags O_RDONLY|O_LARGEFILE)
[pid 61601] fcntl(217, F_SETFL, O_RDONLY|O_NONBLOCK|O_LARGEFILE) = 0
[pid 61601] epoll_ctl(3, EPOLL_CTL_ADD, 217, {events=EPOLLIN|EPOLLOUT|EPOLLRDHUP|EPOLLET, data={u32=3472783160, u64=139997931796280}}) = -1 EPERM (Operation not permitted)
[pid 61601] fcntl(217, F_GETFL)         = 0x8800 (flags O_RDONLY|O_NONBLOCK|O_LARGEFILE)
[pid 61601] fcntl(217, F_SETFL, O_RDONLY|O_LARGEFILE) = 0
[pid 61601] read(217, "package:\n  name: zstd\n  version:"..., 512) = 512
[pid 61601] read(217, "e1\n\n  - runs: |\n      make -j$(n"..., 512) = 512
[pid 61601] read(217, "    mkdir -p \"${{targets.subpkgd"..., 512) = 203
[pid 61601] read(217, "", 512)          = 0
[pid 61601] newfstatat(AT_FDCWD, ".", {st_mode=S_IFDIR|0755, st_size=4096, ...}, 0) = 0
[pid 61601] newfstatat(AT_FDCWD, "/home/t/src/advisories", {st_mode=S_IFDIR|0755, st_size=4096, ...}, 0) = 0
[pid 61601] newfstatat(AT_FDCWD, "/home/t/src/advisories", {st_mode=S_IFDIR|0755, st_size=4096, ...}, 0) = 0
[pid 61601] newfstatat(AT_FDCWD, "/home/t/src/advisories/.git", {st_mode=S_IFDIR|0755, st_size=4096, ...}, 0) = 0
[pid 61601] newfstatat(AT_FDCWD, "/home/t/src/advisories/.git", {st_mode=S_IFDIR|0755, st_size=4096, ...}, 0) = 0
[pid 61601] newfstatat(AT_FDCWD, "/home/t/src/advisories/.git/HEAD", {st_mode=S_IFREG|0644, st_size=21, ...}, 0) = 0
[pid 61601] openat(AT_FDCWD, "/home/t/src/advisories/.git/HEAD", O_RDONLY|O_CLOEXEC) = 218
[pid 61601] fcntl(218, F_GETFL)         = 0x8000 (flags O_RDONLY|O_LARGEFILE)
[pid 61601] fcntl(218, F_SETFL, O_RDONLY|O_NONBLOCK|O_LARGEFILE) = 0
[pid 61601] epoll_ctl(3, EPOLL_CTL_ADD, 218, {events=EPOLLIN|EPOLLOUT|EPOLLRDHUP|EPOLLET, data={u32=3472783160, u64=139997931796280}}) = -1 EPERM (Operation not permitted)
[pid 61601] fcntl(218, F_GETFL)         = 0x8800 (flags O_RDONLY|O_NONBLOCK|O_LARGEFILE)
[pid 61601] fcntl(218, F_SETFL, O_RDONLY|O_LARGEFILE) = 0
[pid 61601] read(218, "ref: refs/heads/main\n", 512) = 21
[pid 61601] read(218, "", 491)          = 0
[pid 61601] close(218)                  = 0
[pid 61601] newfstatat(AT_FDCWD, "/home/t/src/advisories/.git/HEAD", {st_mode=S_IFREG|0644, st_size=21, ...}, 0) = 0
[pid 61601] openat(AT_FDCWD, "/home/t/src/advisories/.git/HEAD", O_RDONLY|O_CLOEXEC) = 218
[pid 61601] fcntl(218, F_GETFL)         = 0x8000 (flags O_RDONLY|O_LARGEFILE)
[pid 61601] fcntl(218, F_SETFL, O_RDONLY|O_NONBLOCK|O_LARGEFILE) = 0
[pid 61601] epoll_ctl(3, EPOLL_CTL_ADD, 218, {events=EPOLLIN|EPOLLOUT|EPOLLRDHUP|EPOLLET, data={u32=3472783160, u64=139997931796280}}) = -1 EPERM (Operation not permitted)
[pid 61601] fcntl(218, F_GETFL)         = 0x8800 (flags O_RDONLY|O_NONBLOCK|O_LARGEFILE)
[pid 61601] fcntl(218, F_SETFL, O_RDONLY|O_LARGEFILE) = 0
[pid 61601] read(218, "ref: refs/heads/main\n", 512) = 21
[pid 61601] read(218, "", 491)          = 0
[pid 61601] close(218)                  = 0
[pid 61601] newfstatat(AT_FDCWD, "/home/t/src/advisories/.git/refs/heads/main", {st_mode=S_IFREG|0644, st_size=41, ...}, 0) = 0
[pid 61601] openat(AT_FDCWD, "/home/t/src/advisories/.git/refs/heads/main", O_RDONLY|O_CLOEXEC) = 218
[pid 61601] fcntl(218, F_GETFL)         = 0x8000 (flags O_RDONLY|O_LARGEFILE)
[pid 61601] fcntl(218, F_SETFL, O_RDONLY|O_NONBLOCK|O_LARGEFILE) = 0
[pid 61601] epoll_ctl(3, EPOLL_CTL_ADD, 218, {events=EPOLLIN|EPOLLOUT|EPOLLRDHUP|EPOLLET, data={u32=3472783160, u64=139997931796280}}) = -1 EPERM (Operation not permitted)
[pid 61601] fcntl(218, F_GETFL)         = 0x8800 (flags O_RDONLY|O_NONBLOCK|O_LARGEFILE)
[pid 61601] fcntl(218, F_SETFL, O_RDONLY|O_LARGEFILE) = 0
[pid 61601] read(218, "a45fef131bf4ca7c833397ffd4e7d915"..., 512) = 41
[pid 61601] read(218, "", 471)          = 0
[pid 61601] close(218)                  = 0
[pid 61601] openat(AT_FDCWD, "", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
[pid 61601] ioctl(2, TCGETS, {c_iflag=ICRNL|IUTF8, c_oflag=NL0|CR0|TAB0|BS0|VT0|FF0|OPOST|ONLCR, c_cflag=B38400|CS8|CREAD, c_lflag=ISIG|ICANON|ECHO|ECHOE|ECHOK|IEXTEN|ECHOCTL|ECHOKE, ...}) = 0
[pid 61601] write(2, "\33[31mFATA\33[0m[0002] error during"..., 134FATA[0002] error during command execution: unable to load APKINDEX for x86_64: opening "": open : no such file or directory 
) = 134
[pid 61601] exit_group(1 <unfinished ...>

The update check is failing pipeline runs unexpectedly

@imjasonh noticed output like this:

2023/03/31 17:10:58 error during command execution: 9 errors occurred:
	* failed to create a version from package python-3.12: 3.12.0_alpha6: Malformed version: 3.12.0_alpha6
	* failed to create a version from package openssh: 9.3_p1.  Error: Malformed version: 9.3_p1
	* package llvm-lld: update found newer version 16.0.0 compared with package.version in melange config
	* package py3.11-installer: update found newer version 0.7.0 compared with package.version in melange config
	* package libsm: update found newer version 1.2.4 compared with package.version in melange config
	* package clang-15: update found newer version 15.0.7 compared with package.version in melange config
	* package libpaper: update found newer version 2.0.10 compared with package.version in melange config
	* package py3-more-itertools: update found newer version 9.1.1 compared with package.version in melange config
	* package py3.10-installer: update found newer version 0.7.0 compared with package.version in melange config

There might be 2 underlying problems here:

  1. The "failed to create a version from package" errors appear to come from hashicorp/go-version and shouldn't be bubbling up here.
  2. "Newer updates available" shouldn't be treated as a failure during lint checks (except possibly as a warning)
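Both failing versions use apk-style suffixes (`_alpha6`, `_p1`) that hashicorp/go-version rejects. One possible fix (a sketch of my own, not wolfictl's current code) is a normalization pass before parsing:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeAPKVersion rewrites apk-style suffixes (e.g. "_alpha6", "_p1")
// into hyphenated pre-release forms that semver-style parsers such as
// hashicorp/go-version can handle. Order matters: "_pre" must be replaced
// before "_p".
func normalizeAPKVersion(v string) string {
	for _, s := range []string{"_alpha", "_beta", "_rc", "_pre", "_p"} {
		v = strings.ReplaceAll(v, s, "-"+strings.TrimPrefix(s, "_"))
	}
	return v
}

func main() {
	fmt.Println(normalizeAPKVersion("3.12.0_alpha6")) // 3.12.0-alpha6
	fmt.Println(normalizeAPKVersion("9.3_p1"))        // 9.3-p1
}
```

The normalized string would then be handed to the existing version parser; plain versions pass through unchanged.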

sbom/scan commands: synthesized apk package should include file data

Description

Syft/Grype have a new configuration option ExcludeBinaryOverlapByOwnership, which removes "binary packages" from the SBOM when the binary is claimed by the distro package.

To produce results with parity to Syft and Grype, the synthesized APK package created in wolfictl's sbom.Generate function should account for the APK's included files, in a manner consistent with the data recorded in an APK installed DB, which will enable the ExcludeBinaryOverlapByOwnership config option to have the same effect in wolfictl's output.
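For reference, the apk installed database records a package's files as an `F:` line per owned directory followed by `R:` lines for each file in it. A rough sketch of synthesizing those lines from a file list (the helper name is hypothetical, and checksum `Z:` lines are omitted):

```go
package main

import (
	"fmt"
	"path"
	"sort"
	"strings"
)

// installedDBFileLines renders a file list the way an apk installed
// database does: an "F:" line per directory, followed by "R:" lines for
// the files that directory owns.
func installedDBFileLines(files []string) string {
	byDir := map[string][]string{}
	for _, f := range files {
		dir := strings.TrimPrefix(path.Dir(f), "/")
		byDir[dir] = append(byDir[dir], path.Base(f))
	}
	dirs := make([]string, 0, len(byDir))
	for d := range byDir {
		dirs = append(dirs, d)
	}
	sort.Strings(dirs)

	var b strings.Builder
	for _, d := range dirs {
		fmt.Fprintf(&b, "F:%s\n", d)
		for _, name := range byDir[d] {
			fmt.Fprintf(&b, "R:%s\n", name)
		}
	}
	return b.String()
}

func main() {
	fmt.Print(installedDBFileLines([]string{"/usr/bin/zstd", "/usr/bin/unzstd"}))
}
```

Recording this shape on the synthesized package is what lets ownership-based overlap exclusion see the binary as "claimed" by the APK.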

Docs: Make Clear That Go 1.20 is Expected

Description

When I cloned and then installed wolfictl, I ran into an error because, I think, the Go version on my machine was 1.19, which is older than 1.20. Is this expected? If so, should the README note that Go 1.20 is expected? Apologies if this is an inane question. I don't consider myself competent in Go.

I ran this:

➜  public git clone git@github.com:wolfi-dev/wolfictl.git wolfictl && cd $_

Then this:

➜  wolfictl git:(main) go install
...
go: downloading modernc.org/memory v1.5.0
go: downloading github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec
# github.com/chainguard-dev/go-apk/pkg/apk
../../../../go/pkg/mod/github.com/chainguard-dev/[email protected]/pkg/apk/expandapk.go:89:16: undefined: errors.Join
note: module requires Go 1.20

note: module requires Go 1.20 seemed to indicate that I needed Go 1.20, at least.

Discover advisories by querying NVD directly

Today wolfictl advisory discover queries the secfixes service, which periodically queries NVD and our APKINDEX and generates a set of packages in the index that have matching vulns.

Instead of relying on this service, wolfictl advisory discover can query these sources and emit findings itself directly. The benefit of this is that we'd effectively rewrite the core of the secfixes service in Go (it's in Python now), and be able to move it around, possibly back up into a service in the future, possibly with notifications.
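NVD's API returns affected products as CPE 2.3 criteria, so the core of a direct discover step is matching CPE product fields against APKINDEX package names. A hedged sketch (helper names are mine; a real implementation would also have to handle vendor and version ranges):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// cpeProduct extracts the product field from a CPE 2.3 string such as
// "cpe:2.3:a:haproxy:haproxy:2.6.1:*:*:*:*:*:*:*".
// Fields are: cpe:2.3:<part>:<vendor>:<product>:<version>:...
func cpeProduct(cpe string) string {
	parts := strings.Split(cpe, ":")
	if len(parts) < 5 {
		return ""
	}
	return parts[4]
}

// matchPackages returns the APKINDEX package names that appear as CPE
// products in a vulnerability's configuration criteria.
func matchPackages(cpes []string, indexPkgs map[string]bool) []string {
	seen := map[string]bool{}
	for _, c := range cpes {
		if p := cpeProduct(c); indexPkgs[p] {
			seen[p] = true
		}
	}
	out := make([]string, 0, len(seen))
	for p := range seen {
		out = append(out, p)
	}
	sort.Strings(out)
	return out
}

func main() {
	idx := map[string]bool{"haproxy": true, "zstd": true}
	fmt.Println(matchPackages([]string{
		"cpe:2.3:a:haproxy:haproxy:2.6.1:*:*:*:*:*:*:*",
		"cpe:2.3:a:gnu:bash:5.0:*:*:*:*:*:*:*",
	}, idx)) // [haproxy]
}
```

With the matching in Go, the same code path could later be lifted back into a service with notifications, as suggested above.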

lint: add new lint rule to enforce that the version is pinned in `package.name` (if `package.version` is specified)

Description

Imagine we want to build the following package:

package:
  name: foo
  version: 1.2.3

We should not allow this, since there could already be a package named foo that tracks the latest version. Instead, it'd be nice for the check to pass if a -major.minor style suffix is set on the name.

Valid:

package:
  name: foo-1
  version: 1.2.3
package:
  name: foo-1.2
  version: 1.2.3
package:
  name: foo-1.2.3
  version: 1.2.3

Not valid: (Version mismatch)

package:
  name: foo-2.1
  version: 1.2.3
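A minimal sketch of the proposed check, assuming the rule just compares the name's trailing `-<suffix>` against `package.version` (the function name is hypothetical, and the real rule would plug into wolfictl's linter framework):

```go
package main

import (
	"fmt"
	"strings"
)

// nameVersionPinned reports whether a package name like "foo-1.2" carries
// a version suffix that matches package.version (e.g. "1.2.3"): the
// suffix must equal the version or be a major/minor prefix of it.
func nameVersionPinned(name, version string) bool {
	i := strings.LastIndex(name, "-")
	if i < 0 {
		return false // no version suffix at all
	}
	suffix := name[i+1:]
	return version == suffix || strings.HasPrefix(version, suffix+".")
}

func main() {
	fmt.Println(nameVersionPinned("foo-1.2", "1.2.3")) // true
	fmt.Println(nameVersionPinned("foo-2.1", "1.2.3")) // false
}
```

This accepts all three "valid" examples above and rejects both the unsuffixed `foo` and the mismatched `foo-2.1`.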

Lint: Add Check for `pip install`

pip install should not be used in melange YAML files when building Python packages for Wolfi. Why? Because APKs for Python in Wolfi are meant to contain one and only one Python package. Using pip install in a Python package adds more than one Python package to an APK. Why only one Python package per APK package? This, to my knowledge, is to make a "better" SBOM (more complete) and to make vulnerability remediation easier.

Unfortunately:

➜  os git:(main) date
Wed Oct  4 05:16:53 EDT 2023
➜  os git:(main) git rev-parse --short HEAD   
6f138fe7
➜  os git:(main) pwd
~/Desktop/repos/public/os
➜  os git:(main) grep -r "pip install" | wc -l
      39

There are currently at least 39 instances of pip install across Wolfi packages. After these are fixed, I propose adding a lint check to detect usage of pip install in a melange YAML file. It would be fine to add it earlier too, as long as there is a way to disable that particular check during CI (until these instances of pip install are removed).
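A sketch of what the check could look like, mirroring the grep above (a production rule would walk the parsed pipeline steps rather than raw lines; the function name is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// findPipInstall returns the 1-based line numbers in a melange YAML
// document that invoke "pip install", so the lint can point at the
// offending pipeline step.
func findPipInstall(doc string) []int {
	var hits []int
	for i, line := range strings.Split(doc, "\n") {
		if strings.Contains(line, "pip install") {
			hits = append(hits, i+1)
		}
	}
	return hits
}

func main() {
	doc := "pipeline:\n  - runs: |\n      pip install requests\n"
	fmt.Println(findPipInstall(doc)) // [3]
}
```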

cc @luhring @kaniini

h/t @luhring -- This is really just reporting his finding!

ref
wolfi-dev/os#6244

Proposal: Introduce a new `init` subcommand to enhance the developer experience for adding new packages (with ChatGPT)

With @developer-guy, we came up with an idea to enhance overall developer productivity when adding brand-new packages. The proposal is below.


Abstract

Introduce a new init sub-command to significantly improve the developer experience for all the contributors by creating automated package templates for both Chainguard Image and Wolfi Package.

Motivation

Currently, creating a new package from scratch is time-consuming, since developers have to hunt down information (license, version, description, etc.) about the packages in the upstream repositories. An automated way to tackle this problem could save tons of hours!

Implementation

Option 1 (interactive)

After running wolfictl init, you can choose one of the following templates:

  • melange: Suitable for a Wolfi package by creating melange.yaml.
  • apko: Suitable for a Chainguard Image by creating apko.yaml.

The end user should use the arrow keys to move.

  1. Use the GitHub API to fetch all the required information from the repository to use as default values
  2. Ask the prompts: Name:, Description, License, etc. (with default values)
  3. Once all the needed inputs are gathered, ask a final [Y/n] question to initialize the package template.
  4. Light the developer's way by printing very descriptive logs (the directory it was created in, what to do next, etc.)
1: If melange selected
  1. Create a <NAME>.yaml with the following template:
  2. Optional: Fetch the APKBUILD file by searching https://git.alpinelinux.org/ (or https://search.nixos.org/packages to get propagatedBuildInputs) and call the OpenAI API (using ChatGPT) to read the file content and convert it to the melange equivalent (if possible)
  3. Optional: If it's a Go image, run Grype against it and append go get commands to the runs pipeline to mitigate CVEs
  4. Include the package in packages.txt (at a random line to prevent merge conflicts)
package:
  name: NAME
  version: LAST VERSION
  epoch: 0
  description: GITHUB DESCRIPTION
  copyright:
    - license: GITHUB LICENSE

environment:
  contents:
    packages:
      - wolfi-base
      - busybox
      - ca-certificates-bundle
      - build-base

pipeline:
  - uses: fetch
    with:
      uri: UPSTREAM REPO/${{package.version}}.tar.gz
      expected-sha256: EXPECTED SHA
      expected-commit: EXPECTED COMMIT

  - runs: |
      # TODO

  - uses: strip

update:
  enabled: SET TRUE IF ACTIVELY MAINTAINED REPO (IF LAST COMMIT DATE < 1y)
  github:
    identifier: REPO/PROJECT
    use-tag: true
    tag-filter: "TAG"
2: If apko selected
  1. Init a new package tree:
PACKAGE NAME
├──configs 
│  └──latest.apko.yaml 
├──image.yaml 
├──README.md 
└──tests 
   ├──01-runs.sh 
  2. In the directory, init image.yaml and README.md with the following templates:
status: experimental
versions:
  - apko:
      config: configs/latest.apko.yaml
      extractTagsFrom:
        package: PACKAGE
      subvariants:
        - suffix: -dev
          options:
            - dev
  3. To fill the configs/latest.apko.yaml file, we can use the corresponding wolfi/os package if it exists (if it doesn't, we can error out: "Wolfi package does not exist. Create it first to initialize a new apko file"):
contents:
  repositories:
    - https://packages.wolfi.dev/os
  keyring:
    - https://packages.wolfi.dev/os/wolfi-signing.rsa.pub
  packages:
    - ca-certificates-bundle
    - wolfi-baselayout
    - PACKAGE NAME

accounts:
  groups:
    - groupname: nonroot
      gid: 65532
  users:
    - username: nonroot
      uid: 65532
      gid: 65532
  run-as: 65532

entrypoint:
  command: /usr/bin/PACKAGE (or auto-detect)

archs:
- x86_64
- aarch64

annotations:
  "org.opencontainers.image.authors": "Chainguard Team https://www.chainguard.dev/"
  "org.opencontainers.image.url": https://edu.chainguard.dev/chainguard/chainguard-images/reference/PACKAGE/
  "org.opencontainers.image.source": https://github.com/chainguard-images/images/tree/main/images/PACKAGE
  4. Search for a Dockerfile in the upstream repo to fill out the entrypoint and metadata fields

  5. Create a 02-helm.sh if possible. Use ArtifactHub to search for a corresponding Helm package, and if we find one, append the Helm testing template (this is a bit tricky since we have to parse values.yaml to find the image.repository and image.tag fields):

$ helm repo add PACKAGE_NAME REPO_ADDR
$ helm upgrade --install PACKAGE_NAME \
    REPO/PACKAGE \
    --set image.repository=cgr.dev/chainguard/PACKAGE \
    --set image.tag=latest-arm64

README: Likewise, in the ## Using PACKAGE section we can put the entire upstream README (or auto-find its Usage section if one exists, or maybe use a custom template, or we can even use ChatGPT!)

IDEA: In the future iteration of this sub-command, we can include additional templates for each language (Go, Node, Python, etc.).

Option 2

Same with Option 1, but do not use prompts. Override with given flags instead.

  • `wolfictl init melange [OPTIONS]`
  • `wolfictl init apko [OPTIONS]`

Open Questions

  1. Should these sub-commands only work if the active working directory is chainguard-images/images or wolfi/os?
  2. Does it make sense overall?
  3. Is this CLI repo the right place for a feature like this? (Since the context of this CLI is mostly Wolfi; what about monopod?)
  4. Any ongoing work or proposals for this? (or maybe a similar idea)

/cc @jdolitsky @patflynn @dlorenc

lint: prevent moving source deps

Description

Prevent folks from using fetch without expected-{sha256,sha512}, and from using git-checkout without tag and expected-commit.

For bonus points, prevent folks from using go/build with an import path that doesn't include the ${{package.version}} (e.g., @main or @latest)

This prevents source deps from changing out from underneath us.
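A sketch of the proposed checks, assuming a parsed pipeline step shape (the `step` type and field names here are stand-ins, not wolfictl's actual config types):

```go
package main

import "fmt"

// step is a minimal stand-in for a parsed melange pipeline step.
type step struct {
	Uses string
	With map[string]string
}

// checkPinned flags fetch steps without a checksum, and git-checkout
// steps missing either a tag or an expected-commit, so source deps
// can't change out from underneath us.
func checkPinned(s step) error {
	switch s.Uses {
	case "fetch":
		if s.With["expected-sha256"] == "" && s.With["expected-sha512"] == "" {
			return fmt.Errorf("fetch step must set expected-sha256 or expected-sha512")
		}
	case "git-checkout":
		if s.With["tag"] == "" || s.With["expected-commit"] == "" {
			return fmt.Errorf("git-checkout step must set both tag and expected-commit")
		}
	}
	return nil
}

func main() {
	err := checkPinned(step{Uses: "fetch", With: map[string]string{"uri": "https://example.com/src.tar.gz"}})
	fmt.Println(err != nil) // true: unpinned fetch is rejected
}
```

The go/build import-path check would be a similar rule: reject paths ending in @main or @latest unless they reference ${{package.version}}.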

wolfictl update package sub-command failed

Description

I was trying to make a PR bumping the version of haproxy, and the following shows the command that went wrong.
I am wondering if there are some critical steps that I missed. It would be nice if you could give clear guidance for users who want to contribute ;-)

➜  os git:(main) GITHUB_TOKEN='REDACTED' wolfictl update package haproxy --use-gitsign --version 2.6.11 --target-repo https://github.com/wolfi-dev/os

Enumerating objects: 2467, done.
Counting objects: 100% (2467/2467), done.
Compressing objects: 100% (1104/1104), done.
Total 2467 (delta 1775), reused 1835 (delta 1323), pack-reused 0
2023/03/27 11:46:54 wolfictl update: no previous tag found so checking all commits for sec fixes
2023/03/27 11:46:54 wolfictl update: git log --no-merges
Error: failed to update secfixes: failed to get CVE list from commits between tags <nil> and 2.6.11: failed to get output from git log : exit status 128
2023/03/27 11:46:54 error during command execution: failed to update secfixes: failed to get CVE list from commits between tags <nil> and 2.6.11: failed to get output from git log : exit status 128

OS: macOS Big Sur (11.7.1)
the version of wolfictl: compiled on commit 625e8a4, with command git clone --depth=1 https://github.com/wolfi-dev/wolfictl && cd wolfictl && go install

go env

GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/icecode/Library/Caches/go-build"
GOENV="/Users/icecode/Library/Application Support/go/env"
GOEXE=""
GOEXPERIMENT=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOINSECURE=""
GOMODCACHE="/Users/icecode/go/pkg/mod"
GONOPROXY=""
GONOSUMDB=""
GOOS="darwin"
GOPATH="/Users/icecode/go"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/opt/local/lib/go"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/opt/local/lib/go/pkg/tool/darwin_amd64"
GOVCS=""
GOVERSION="go1.20.2"
GCCGO="gccgo"
GOAMD64="v2"
AR="ar"
CC="/usr/bin/clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD="/Users/icecode/Documents/GitHub/wolfictl/go.mod"
GOWORK=""
CGO_CFLAGS="-O2 -g"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-O2 -g"
CGO_FFLAGS="-O2 -g"
CGO_LDFLAGS="-O2 -g"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -arch x86_64 -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/ll/3p2lc479391gv2gs3t3x0brw0000gn/T/go-build1736687816=/tmp/go-build -gno-record-gcc-switches -fno-common"
