
mtrmac commented on July 2, 2024

Basically because transports and their configuration, and signature verification are conceptually quite separate things, and because sigstore is a docker/distribution-specific idea which does not fit the generic signature policy.

In more detail:

policy.json does not care where anything about the images is actually stored, neither the manifests nor the signatures, nor what is necessary to access any of it; as far as it is concerned, the nested keys (transports, docker, docker.io/library/busybox) are just names which allow choosing a relevant set of requirements from the policy file.
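For illustration, a minimal policy.json in today's format might look like this (the scope and key path shown are examples, not a recommendation):

```json
{
  "default": [{ "type": "insecureAcceptAnything" }],
  "transports": {
    "docker": {
      "docker.io/library/busybox": [
        {
          "type": "signedBy",
          "keyType": "GPGKeys",
          "keyPath": "/etc/pki/example-pubkey.gpg"
        }
      ]
    }
  }
}
```

Nothing in this file says where signatures live or how to fetch them; the docker.io/library/busybox key is matched purely as a name.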

OTOH, the containers/image.types.ImageTransport abstraction allows reading and writing images, whatever the policy is. In particular, docker: is only one of the five currently supported transports; none of the others use the sigstore mechanism; it would be a bit weird to have sigstores as a policy.json field.
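To make the separation concrete, here is a toy Go sketch of that kind of transport abstraction (the interface is paraphrased from containers/image; dirTransport and its scope rule are invented for illustration, not the real implementation): a transport validates and parses names, but carries no signature policy of its own.

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// ImageTransport is a paraphrase of the containers/image abstraction:
// it knows how to interpret references and policy scopes for one
// storage mechanism, and nothing about signature verification policy.
type ImageTransport interface {
	Name() string
	ValidatePolicyConfigurationScope(scope string) error
}

// dirTransport is a hypothetical stand-in for a filesystem transport.
type dirTransport struct{}

func (dirTransport) Name() string { return "dir" }

// For a filesystem transport, a plausible rule is that policy scopes
// must be absolute paths; other transports would use registry names.
func (dirTransport) ValidatePolicyConfigurationScope(scope string) error {
	if !strings.HasPrefix(scope, "/") {
		return errors.New("scope must be an absolute path")
	}
	return nil
}

func main() {
	var t ImageTransport = dirTransport{}
	fmt.Println(t.Name())
	fmt.Println(t.ValidatePolicyConfigurationScope("/tmp/images"))
	fmt.Println(t.ValidatePolicyConfigurationScope("relative"))
}
```

The point of the exercise: each transport defines its own notion of a scope string, while the policy file only ever sees those strings as opaque names.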

In practical terms,

  • No one should need to define a policy, let alone a restrictive policy, just to be able to read/write/copy around signatures. It should be possible to have a policy of insecureAcceptAnything while skopeo copy copies all existing signatures between sources and destinations.
    • Actually, policy.json is not involved at all when writing images to destinations; we don’t have any restrictions on saving images, why would we? So it would be pretty weird to have policy.json sections which we don’t need for anything but their sigstore-staging URLs.
  • The policy allows defining multiple requirements per namespace (“has a signature by the department who created it” + “is based on a signed ISV image”). If the sigstore location were defined in each policy requirement, would the signatures need to be read from all paths specified by each of the requirements? Would the code be required to validate the signatures against each requirement before skopeo copy were allowed to present it as “correctly stored” to the rest? It gets messy.
  • policy.json is parsed very strictly, to avoid mishaps due to typos: unknown keys and duplicates are rejected, and any failure stops the program in its tracks with absolutely no effort to recover or partially work. Generally we probably want the configuration mechanisms to be a bit more forgiving [or, well, managed by a tool and an API instead of raw files, but this is UNIX and here we are], so as little as possible should be in policy.json.
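To make the multi-requirement point concrete: in the current format, one scope can already carry an array of requirements, all of which must pass (the scope name and key paths here are made up):

```json
{
  "transports": {
    "docker": {
      "docker.io/department/app": [
        { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/department.gpg" },
        { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/isv.gpg" }
      ]
    }
  }
}
```

If each of those objects also carried its own sigstore URL, a single copy operation would face two possibly conflicting answers to “where do the signatures live?”.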

Of course this is all just code and data structures, and we can define any number of equivalent formulations of data structures, and any number of serializations, so we could have an all-in-one integrated config file which contains signature verification policies and CA keys and hostname overrides and “use older protocol version” overrides and…

perhaps something like

{
  "transports": {
    "docker": {
      "docker-registries": {
        "docker.io": {
          "insecure": false,
          "ca-certificate": "$base64-data",
          "v1-fallback": false,
          "enforce-atomic-signatures": true
        }
      },
      "namespaces": {
        "docker.io/library/busybox": {
          "sigstore": "$url",
          "sigstore-staging": "$url2",
          "policyRequirements": [{
            "type": "signedBy",
            // …
          }]
        }
      }
    }
  }
}

and similarly keep CAs and whatnot for the atomic transport in here. It's not obvious to me that this would really be easier to use, given the extra nesting needed, while we still retain that paranoid parser.

Also, writing tools like atomic trust, which need to understand the contents of the file, is noticeably more difficult with an integrated format: at the moment policy.json does not say much, but what it says is critical, so a tool interpreting policy.json really should fail loudly if it finds anything unexpected. If we had a single config file for the policy and everything else, the tools would have to silently ignore unrecognized fields to be able to move at any practical speed; and then how certain are we that it is always safe to ignore fields (see that “enforce-atomic-signatures” field; something like that is being added to projectatomic/docker right now)?

from containers/image.
