spec's Introduction

Disclaimer

With the formation of the Open Container Initiative (OCI), the industry has come together in a single location to define specifications around application containers. OCI is intended to incorporate the best elements of existing container efforts like appc, and several of the appc maintainers are participating in OCI projects as maintainers and on the Technical Oversight Board (TOB). Accordingly, as of late 2016, appc is no longer being actively developed, other than minor tweaks to support existing implementations.

It is highly encouraged that parties interested in container specifications join the OCI community.

  • The App Container Image format (ACI) maps more or less directly to the OCI Image Format Specification, with the exception of signing and dependencies.
  • The App Container Executor (ACE) specification is related conceptually to the OCI Runtime Specification, with the notable distinctions that the latter does not support pods and generally operates at a lower level of specification.
  • App Container Image Discovery does not yet have an equivalent specification in the OCI project (although it has been discussed and proposed).

For more information, see the OCI FAQ and a more recent CoreOS blog post announcing the OCI Image Format.

App Container

This repository contains schema definitions and tools for the App Container (appc) specification. These include technical details on how an appc image is downloaded over a network, cryptographically verified, and executed on a host. See SPEC.md for details of the specification itself.

For information on the packages in the repository, see their respective godocs.

What is the App Container spec?

App Container (appc) is a well-specified, community-developed specification for application containers. appc defines several independent but composable aspects involved in running application containers, including an image format, a runtime environment, and a discovery mechanism.

What is an application container?

An application container is a way of packaging and executing processes on a computer system that isolates the application from the underlying host operating system. For example, a Python web app packaged as a container would bring its own copy of a Python runtime, shared libraries, and application code, and would not share those packages with the host.

Application containers are useful because they put developers in full control of the exact versions of software dependencies for their applications. This reduces surprises that can arise from discrepancies between different environments (like development, test, and production), while freeing the underlying OS from shipping software specific to the applications it will run. This decoupling of concerns makes it easier to service both the OS and the application with updates and security patches.

For these reasons we want the world to run containers, a world where your application can be packaged once, and run in the environment you choose.

The App Container (appc) spec aims to have the following properties:

  • Composable. All tools for downloading, installing, and running containers should be well integrated, but independent and composable.
  • Secure. Isolation should be pluggable, and the cryptographic primitives for strong trust, image auditing and application identity should exist from day one.
  • Decentralized. Discovery of container images should be simple and facilitate a federated namespace and distributed retrieval. This opens the possibility of alternative protocols, such as BitTorrent, and deployments to private environments without the requirement of a registry.
  • Open. The format and runtime should be well-specified and developed by a community. We want independent implementations of tools to be able to run the same container consistently.

What is the promise of the App Container Spec?

By explicitly defining - separately from any particular implementation - how an app is packaged into an App Container Image (ACI), downloaded over a network, and executed as a container, we hope to enable a community of engineers to build tooling around the fundamental building block of a container. Some examples of build systems and tools that have been built so far include:

  • goaci - ACI builder for Go projects
  • docker2aci - ACI builder from Docker images
  • deb2aci - ACI builder from Debian packages
  • actool - Simple tool to assemble ACIs from root filesystems
  • acbuild - A versatile tool for building and manipulating ACIs
  • dgr - A command-line utility to build ACIs and configure them at runtime
  • baci - A generic ACI build project
  • openwrt-aci - A tool to build ACIs based on OpenWRT snapshots
  • oci2aci - ACI builder from OCI bundle
  • nix2aci - ACI builder that leverages the Nix package manager and acbuild

What are some implementations of the spec?

The most mature implementations of the spec are under active development:

There are several other partial implementations of the spec at different stages of development:

Who controls the spec?

App Container is an open-source, community-driven project, developed under the Apache 2.0 license. For information on governance and contribution policies, see POLICY.md.

Working with the spec

Building ACIs

Various tools listed above can be used to build ACIs from existing images or based on other sources.

As an example of building an ACI from scratch, actool can be used to build an Application Container Image from an Image Layout - that is, from an Image Manifest and an application root filesystem (rootfs).

For example, to build a simple ACI (in this case consisting of a single binary), one could do the following:

$ find /tmp/my-app/
/tmp/my-app/
/tmp/my-app/manifest
/tmp/my-app/rootfs
/tmp/my-app/rootfs/bin
/tmp/my-app/rootfs/bin/my-app
$ cat /tmp/my-app/manifest
{
    "acKind": "ImageManifest",
    "acVersion": "0.8.11",
    "name": "my-app",
    "labels": [
        {"name": "os", "value": "linux"},
        {"name": "arch", "value": "amd64"}
    ],
    "app": {
        "exec": [
            "/bin/my-app"
        ],
        "user": "0",
        "group": "0"
    }
}
$ actool build /tmp/my-app/ /tmp/my-app.aci

Since an ACI is simply an (optionally compressed) tar file, we can inspect the created file with simple tools:

$ tar tvf /tmp/my-app.aci
drwxrwxr-x 1000/1000         0 2014-12-10 10:33 rootfs
drwxrwxr-x 1000/1000         0 2014-12-10 10:36 rootfs/bin
-rwxrwxr-x 1000/1000   5988728 2014-12-10 10:34 rootfs/bin/my-app
-rw-r--r-- root/root       332 2014-12-10 20:40 manifest

and verify that the manifest was embedded appropriately:

$ tar xf /tmp/my-app.aci manifest -O | python -m json.tool
{
    "acKind": "ImageManifest",
    "acVersion": "0.8.11",
    "annotations": null,
    "app": {
        "environment": [],
        "eventHandlers": null,
        "exec": [
            "/bin/my-app"
        ],
        "group": "0",
        "isolators": null,
        "mountPoints": null,
        "ports": null,
        "user": "0"
    },
    "dependencies": null,
    "labels": [
        {
            "name": "os",
            "value": "linux"
        },
        {
            "name": "arch",
            "value": "amd64"
        }
    ],
    "name": "my-app",
    "pathWhitelist": null
}

Validating App Container implementations

actool validate can be used by implementations of the App Container Specification to check that the files they produce conform to its expectations.

Validating Image Manifests and Pod Manifests

To validate one of the two manifest types in the specification, simply run actool validate against the file.

$ actool validate ./image.json
$ echo $?
0

Multiple arguments are supported, and more output can be enabled with -debug:

$ actool -debug validate image1.json image2.json
image1.json: valid ImageManifest
image2.json: valid ImageManifest

actool will automatically determine which type of manifest it is checking (by using the acKind field common to all manifests), so there is no need to specify which type of manifest is being validated:

$ actool -debug validate /tmp/my_container
/tmp/my_container: valid PodManifest

If a manifest fails validation, the first error encountered is returned along with a non-zero exit status:

$ actool validate nover.json
nover.json: invalid ImageManifest: acVersion must be set
$ echo $?
1

Validating ACIs and layouts

Validating ACIs or layouts is very similar to validating manifests: simply run the actool validate subcommand directly against an image or directory, and it will determine the type automatically:

$ actool validate app.aci
$ echo $?
0
$ actool -debug validate app.aci
app.aci: valid app container image
$ actool -debug validate app_layout/
app_layout/: valid image layout

To override the type detection and force actool validate to validate as a particular type (image, layout or manifest), use the -type flag:

$ actool -debug validate -type appimage hello.aci
hello.aci: valid app container image

Validating App Container Executors (ACEs)

The ace package contains a simple Go application, the ACE validator, which can be used to validate app container executors by checking certain expectations about the environment in which it is run: for example, that the appropriate environment variables and mount points are set up as defined in the specification.

To use the ACE validator, first compile it into an ACI using the supplied build_aci script:

$ ace/build_aci

You need a passphrase to unlock the secret key for
user: "Joe Bloggs (Example, Inc) <[email protected]>"
4096-bit RSA key, ID E14237FD, created 2014-03-31

Wrote main layout to      bin/ace_main_layout
Wrote unsigned main ACI   bin/ace_validator_main.aci
Wrote main layout hash    bin/sha512-f7eb89d44f44d416f2872e43bc5a4c6c3e12c460e845753e0a7b28cdce0e89d2
Wrote main ACI signature  bin/ace_validator_main.aci.asc

You need a passphrase to unlock the secret key for
user: "Joe Bloggs (Example, Inc) <[email protected]>"
4096-bit RSA key, ID E14237FD, created 2014-03-31

Wrote sidekick layout to      bin/ace_sidekick_layout
Wrote unsigned sidekick ACI   bin/ace_validator_sidekick.aci
Wrote sidekick layout hash    bin/sha512-13b5598069dbf245391cc12a71e0dbe8f8cdba672072135ebc97948baacf30b2
Wrote sidekick ACI signature  bin/ace_validator_sidekick.aci.asc

As can be seen, the script generates two ACIs: ace_validator_main.aci, the main entrypoint to the validator, and ace_validator_sidekick.aci, a sidekick application. The sidekick is used to validate that an ACE implementation properly handles running multiple applications in a container (for example, that they share a mount namespace), and hence both ACIs should be run together in a layout to validate proper ACE behaviour. The script also generates detached signatures which can be verified by the ACE.

When running the ACE validator, output is minimal if tests pass, and errors are reported as they occur - for example:

preStart OK
main OK
sidekick OK
postStop OK

or, on failure:

main FAIL
==> file "/prestart" does not exist as expected
==> unexpected environment variable "WINDOWID" set
==> timed out waiting for /db/sidekick

spec's Issues

discovery: clarifications on use of multiple meta tags

What's the purpose of multiple discovery meta tags?

  1. Support different ACI name formats:

Example (see also #61):

I'm imagining that my repository could also contain some "noarch" packages.
To support this with meta discovery, I would provide two meta tags like these:

<meta name="ac-discovery" content="example.com https://storage.example.com/{name}-{version}-{os}-{arch}.{ext}">
<meta name="ac-discovery" content="example.com https://storage.example.com/{name}-{version}.{ext}">

When trying to discover an ACI without providing os and arch, there are two possibilities:

  • If the ACE, on discovery, automatically sets os and arch to default values (like rocket does), the first template will fail to download, as there isn't an ACI with that name, and the second will work.
  • If the ACE, on discovery, doesn't set default values for os and arch, the first template will fail to render, and the second will render successfully and be downloaded.

Both give the expected result of obtaining "noarch" ACIs.

A possible downside is that the user can get various warnings, as some downloads will "correctly" fail.

  2. Multiple mirrors.
<meta name="ac-discovery" content="example.com https://storage.example.com/{name}-{version}-{os}-{arch}.{ext}">
<meta name="ac-discovery" content="example.com https://mirror.storage.example.com/{name}-{version}-{os}-{arch}.{ext}">

For this to fully work, the ACE needs some mirror-selection logic during fetching.

  3. Both 1 and 2.
<meta name="ac-discovery" content="example.com https://storage.example.com/{name}-{version}-{os}-{arch}.{ext}">
<meta name="ac-discovery" content="example.com https://mirror.storage.example.com/{name}-{version}-{os}-{arch}.{ext}">
<meta name="ac-discovery" content="example.com https://storage.example.com/{name}-{version}.{ext}">
<meta name="ac-discovery" content="example.com https://mirror.storage.example.com/{name}-{version}.{ext}">

I think that all these use cases should work, with the above downside:

  • various warnings, as some downloads will "correctly" fail.

Do you see other problems or better solutions for the various points?

If so, do you think the spec should be enhanced with some examples to clarify the various use cases?

Some clarifications on Fileset.files

From @sgotti on December 8, 2014 16:33

Hi,

I'd like to help on #140 and #142 as they are the base for other features that I'd like to share (and also for #223) but first I have some doubts about the "files" entry in the fileset manifest.

The "files" entry is a list of all the files provided by this fileset.

Looking at it from another point of view: a purely incremental approach (like, if I understand correctly, the one docker takes, and like GNU tar incremental archives) won't fit, because a fileset can have multiple dependencies applied at different paths (the dependencies.root entry). So the solution is to record in the fileset the files it provides.

A typical example would be a fileset containing a base distro image. Then, on top of it, you create another fileset with some new packages installed and other packages removed. This fileset will have all the changed and new files, and a fileset manifest where "files" is the list of all the provided files.

So, just considering this simple example (a fileset having one dependency with "rootfs": "/"):

  • If "files" isn't empty than all the files (and maybe strange, also the one provided by this fileset) not listed should be removed from the final layout.
  • if "files" is empty (as the specs defines it as optional) that nothing is removed.

Some questions:

  1. What to do if all the files inside a directory from a parent fileset are removed, leaving an empty directory? Should the implementation also remove the directory, or leave it empty?
  2. If the empty directory should be removed, how can empty dirs be kept when needed? (Sometimes it's necessary to leave empty directories.)

I was thinking about a possible solution:

  • Specify inside "files" also the needed directories (if a name ends with a slash, it's a directory; see the sketch below). To be clear, this doesn't mean that all the files under that directory are whitelisted, just that the directory itself shouldn't be removed.
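
To illustrate this (a purely hypothetical fragment; the trailing-slash convention is not in the spec), a "files" entry could look like:

"files": [
    "/etc/app.conf",
    "/var/lib/app/"
]

Here /var/lib/app/ would be kept as a (possibly empty) directory even if every file beneath it from a parent fileset is removed.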

Any other ideas or am I missing something?

Copied from original issue: rkt/rkt#236

Add a host device to the container

Hello, would it be possible to add to the specification the use case of passing a host device into a container? I am building an appliance where a custom device for a high-computation card is created via a kernel module. I would like to pass this device to a container.

something like

docker run --device /dev/dev.XX container-app

app-container: add a library to convert docker image to ACI

From @philips on December 1, 2014 23:52

It would be interesting to make the app-container spec inter-op with docker registries. The pkg should have an entry point where the user can start with a docker registry URL and tag (e.g. quay.io/coreos/etcd:v0.4.6) then:

  • Talk to the registry API and fetch the layers
  • Download the layers and convert them to images (How do we label these images? name={url},layer={layerid})
  • For the root image use the version as the final tag

Copied from original issue: rkt/rkt#142

Clarifications on dependency matching and split between fetching and getting.

The spec, in the "Dependency matching" chapter, says that every dependency should be discovered using the ACI discovery process, and that the downloaded image should then be verified against the requested labels or matching ImageID.

But in #16 this process is split into two phases: ACI fetching and ACI rendering.
I think that the split between the fetching phase and the rendering phase is good. Someone can, for example, not fetch the image from a repository but just import an ACI manually into the store. Or there can be an offline/caching mode.

So the ACI rendering will be executed independently from the ACI fetching.
In the ACI rendering phase, the ACI will be requested from the store by providing an imageID, or an app name and some labels. In the latter case the store can find more than one ACI that matches the requested app name and labels.

Which one should be returned?

I have some possible cases in mind:

Case 1

If I'm requesting an app name from the store without any label, all the ACIs with that app name will match the request.

Which one should be returned?
Should it be implementation dependent or should it be defined in the spec?

The current implementation I tried in rkt/rkt#297 uses the latest ACI imported into the store.

Case 2

Another example is the "latest" pattern: if I'm requesting an app from the store without any version label, how can I satisfy the "latest" pattern?

  • Fetch phase: In this phase, as I'm not passing a specific version label, the ACI discovery will download something like https://example.com/appname-latest-linux-amd64.aci. On the HTTP server this can be a link to an ACI called appname-2.0.1-linux-amd64.aci, and in its manifest the version label will be "2.0.1".
  • Rendering phase: I'm requesting from the store an ACI named appname, without any version label. The store can have other ACIs previously downloaded or imported for this app name, with different versions (for example "1.0.0", "strangereleasename", etc.).
    As the version label is just a string without any SemVer or lexical ordering rule, there is no ordering among these labels.

As in Case 1, the implementation I tried in rkt/rkt#297 chooses the latest imported ACI, but prefers ACIs imported with the "latest" flag (it's saved in an additional appinfo index):

On fetching, if downloaded with the "latest" pattern, the ACI is saved in the store with the "latest" flag. When requesting an ACI without specifying a version label, the ones with the latest flag are preferred over the others (or perhaps only the one with the latest flag should be chosen?).

  • Are my thoughts correct?
  • If so, are there other possible ACI selection strategies besides the one I tried to implement? Should this logic be clarified in the spec?
  • If so, should the spec be clarified (also with some examples)?

spec: HTTP Redirection during image discovery?

The discovery spec says:

If the first attempt at fetching the discovery URL returns a status code other than 200 OK or does not contain any ac-discovery meta tags then the next higher path in the name should be tried.

I think the spec should specifically state what it does if it receives a 3xx response.

I'd vote for following redirects as while good URLs should work forever, we all know that's not how the real world works.
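
For what it's worth, Go's net/http client follows up to 10 redirects by default; a discovery implementation that wants an explicit cap might do something like this (a sketch, not part of any existing implementation):

package main

import (
	"errors"
	"fmt"
	"net/http"
)

// fetchDiscovery fetches a discovery URL, following at most maxRedirects
// 3xx responses before giving up.
func fetchDiscovery(url string, maxRedirects int) (*http.Response, error) {
	client := &http.Client{
		// CheckRedirect is consulted before each redirect is followed.
		CheckRedirect: func(req *http.Request, via []*http.Request) error {
			if len(via) >= maxRedirects {
				return errors.New("too many redirects")
			}
			return nil
		},
	}
	return client.Get(url)
}

func main() {
	resp, err := fetchDiscovery("https://example.com/reduce-worker?ac-discovery=1", 10)
	if err != nil {
		fmt.Println("discovery failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("final status:", resp.Status)
}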

spec: updating isolators

The spec should describe how executors are able to update isolators dynamically during the lifetime of a container.

Ownership, Governance, TM, etc

From @mikeal on December 1, 2014 23:40

I hate to be "that guy" but I would like to know what the plan is here in terms of ownership, governance, and the trademark, although I'm skeptical you could actually trademark "rocket" :P

The post announcing this project cited some divergences in philosophy from Docker (the project and the company). It would be good to know if this project intends to be a pure community effort or if this is just CoreOS disagreeing with Docker.

It would be great to see a contribution policy that laid out how people get involved in the project and contribute to decision making and how contentious issues like the ones that have coalesced in Docker that lead to this project would be resolved. If not it would seem like people are just trading Docker (the company) for CoreOS (the company).

Copied from original issue: rkt/rkt#139

spec: dependencies details > manifest merge strategy

While the spec specifies how ACIs are extracted on top of one another (as long as pathWhitelist allows it), it does not give any information about the manifest.

As a first proposal, what about these "merge strategy" options:

  • none: keep the original manifest unmodified,
  • array: assemble a value from every matching level of the hierarchy,
  • hash: recursively merge hash keys.

The behaviour description must be shipped inside the image, but (and that's the part I don't like!!) the option itself should never be overwritten.

(I think we lose some of the simplicity we want for the spec here, but the problem does exist, so I'm just hoping this proposal will help as a starting point for a better answer.)
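
As a purely hypothetical illustration (no such field exists in the spec today), the option might be carried on each dependency entry:

"dependencies": [
    {
        "app": "example.com/reduce-worker-base",
        "imageID": "sha512-...",
        "manifestMergeStrategy": "hash"
    }
]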

spec: clarify pod uuids

CRMs have a uuid but it is underspecified; we should explicate the purpose and how it might be generated.

Also, in the metadata server it's referred to as uid, so this should be changed to uuid for consistency.

Standard for "os" and "arch" definition

From @sarnowski on December 8, 2014 15:23

Hi,

is there a list of standard naming for the "os" and "arch" definitions?

os:

  • linux
  • windows(?)
  • macosx (?)
  • ... (?)

arch:

  • amd64
  • i386 (?)
  • armel (?)
  • ...

One cannot and should not cover all cases, but to avoid some chaos I think it makes sense to suggest some defaults.

Copied from original issue: rkt/rkt#234

spec: application's working directory

Is there an actual reason why the application is required to start in the container's root directory, or is it just scope reduction for the first version of the spec?

There is no standard utility that I know of which would chdir() and then exec() the provided command; if the command to run needs a specific working directory, it would need to be wrapped in a shell script, or its exec would be something like ["/bin/sh", "-c", "cd /work/dir && exec \"${@}\"", "argv0", ACTUAL_COMMAND…], which does work, but is kind of ugly, as the app executor could take care of that.

spec: clarify relationship between CRM and "apps"

(This is partly leading on from the discussion in rkt/rkt#294)

A Container Runtime Manifest declares one or more apps to be executed. At the time of writing the format of declaration looks like this:

  • apps the list of apps that will execute inside of this container
    • app the name of the app (string, restricted to AC Name formatting)
    • imageID the content hash of the image that this app will execute inside of (string, must be of the format "type-value", where "type" is "sha512" and value is the hex encoded string of the hash)
    • isolators the list of isolators that should be applied to this app (key is restricted to the AC Name formatting and the value can be a freeform string)
    • annotations arbitrary metadata appended to the app (key is restricted to the AC Name formatting and the value can be a freeform string)

There are several issues with this schema:

  • Each "app" technically refers to an app image (ACI), which may or may not actually contain an app declaration (since "app" is an optional section of the Image Manifest). The spec is unclear as to what will occur if the referenced image does not contain an app declaration. (Relatedly, "app" should likely more accurately be called "name")
  • "app" references are not unique. There is a fairly simple use case of wanting to use a particular app-image multiple times within a container, but it is unclear how this behaviour should be represented in the spec, or at least how implementations should go about handling it (currently in Rocket we actually mandate that apps within a CRM must be unique)
  • For each app, there is only a partial ability to override some of the parameters defined in the Image Manifest. For example, currently isolators (from the app section of the Image Manifest) and annotations can be overridden/extended; but there is no such ability to override the exec, eventHandlers or environment

This is closely related to, but not the same as, #83

One possible solution is to extend the schema of app references in the CRM to match that of the Image Manifest. In this way, all of the parameters in the Image Manifest could be explicitly overridden at a later stage. However, this does potentially mean a lot of duplication between the information contained in the IM and the CRM.

Shared memory for containers

Hello, would it be possible to specify some IPC mechanisms for containers? Something like the --ipc switch in docker. Basically I need more than one container to have access to the same shared memory.

spec: clarify relationship between CRM and ACE

There is a possible conflation between the CRM and the ACE runtime environment, which might need to be teased apart. In the lifecycle of an app image, between the points of "image manifest creation" (when it is bundled into an ACI) and "application execution" (when the ACE actually executes app(s) in a container), there are conceptually two points at which parameters from the image manifest can potentially be overridden/extended:
1. when the CRM is generated
2. at runtime by the ACE

Currently, in rocket, we generate a CRM at runtime and use that as the final artefact to be executed (let's call that the executed unit). That is, rocket essentially conflates 1) and 2), by encoding all of its "ACE overriding" into the generated CRM.

There is a lot of value in this model because it means the CRM is essentially immutable, and so for a given executor, two invocations of the same CRM should be effectively identical. (It also makes it convenient because the executor can make various assumptions about the environment - for example where images are located, whether they are unique, etc - when generating the CRM.)

However, critically, it makes it impossible to use CRMs as a portable config (let's call that a deployable unit), because it inherently contains executor-specific information.

Here are two potential solutions:

  • Have the CRM be the deployable unit, and allow ACEs to override parameters at runtime. This has the disadvantage that there is now a discrepancy between what the CRM defines and what is actually being executed.
  • Have the CRM be the deployable unit and introduce another CRM as the executable unit (which may or may not be identical to the deployable unit); the ACE consumes a CRM and then must generate a new CRM (overriding/extending whatever parameters it desires) to execute. This retains the advantage of the CRM-that-is-executed being immutable, but introduces the potentially confusing discrepancy between the CRM supplied to the ACE (deployable unit) and the CRM that is actually run (executable unit).

spec: clarification for event handlers

  1. If multiple event handlers of the same name are defined, should they be executed in parallel, or one after another?
  2. Are event handlers executed as root, or as the parent application's user/group? A good use case for handlers would be privileged initialization (e.g. creating and chown-ing a workspace for the process, which requires root) to let the main process run as a non-privileged user.
  3. Do event handlers run with parent app's environment, or with a clean one?

It may be useful to have event handlers more parameterized: it should be possible to specify at least user, group, and environment for the handler. If you are not opposed to that idea, I will prepare a pull request.
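
For illustration, a hypothetical parameterized handler (the user, group and environment fields do not exist in the current schema) might look like:

"eventHandlers": [
    {
        "name": "pre-start",
        "exec": ["/usr/bin/setup-workspace"],
        "user": "0",
        "group": "0",
        "environment": [
            {"name": "WORKSPACE", "value": "/var/lib/app"}
        ]
    }
]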

spec: get rid of fulfills for the volumes

From @philips on December 3, 2014 2:48

Right now volumes in the container can fulfill particular labels. This is a bit troublesome because there might be colliding names for the same volume.

https://github.com/coreos/rocket/blob/master/app-container/SPEC.md#container-runtime-manifest-schema

Instead of doing this, give each volume a name, and each app will get volumes filled in by name to its mountPoints.

    "apps": [
        {
            "app": "example.com/reduce-worker-1.0.0",
            "imageID": "sha256-277205b3ae3eb3a8e042a62ae46934b470e431ac",
            "mountPoints": [{"src": "work", "dest": "data"}]
        },
        {
            "app": "example.com/worker-backup-1.0.0",
            "imageID": "sha256-3e86b59982e49066c5d813af1c2e2579cbf573de",
             "mountPoints": [{"src": "buildoutput", "dest": "backup"}]
        },
    ],
    "volumes": [
        {"name": "work", "kind": "host", "source": "/opt/tenant1/work", "readOnly": true},
        {"name": "buildoutput", "kind": "empty"}
    ],

Copied from original issue: rkt/rkt#182

wanted: xz and bzip2 compressors in Go

From @philips on December 2, 2014 0:0

Currently ACIs can be compressed with xz or bzip2, but actool build can only do gzip compression, because Go libraries don't exist for xz or bzip2.

This feature would require implementing these specs in pure-go. Shelling out or linking to a C library won't fix this bug.

Copied from original issue: rkt/rkt#143

ContainerRuntimeManifests "apps"

The items in the ContainerRuntimeManifest apps list refer to images. The app key in the ImageManifest is optional. What should be the behavior when the image has no app defined?

actool: unable to build from debootstrap

From @brosner on December 3, 2014 21:27

I am attempting to build an ACI from a rootfs built by debootstrap. This is failing:

$ debootstrap --verbose --variant=minbase --include=iproute,iputils-ping,gpgv --arch=amd64 lucid rootfs http://mirror.rackspace.com/ubuntu
...
I: Base system installed successfully.

$ actool build --app-manifest manifest.json rootfs ubuntu.aci
build: Error walking rootfs: open rootfs/dev/agpgart: no such device

I believe I tracked this issue down to the handling of files here: https://github.com/coreos/rocket/blob/master/app-container/actool/build.go#L60-L73
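
One plausible fix (a sketch; not the actual build.go code) is for the builder to detect device nodes during the walk and record them from their FileInfo instead of opening them:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// walkRootfs visits every entry under root, recording device nodes from
// their FileInfo alone instead of trying to open and read them.
func walkRootfs(root string) error {
	return filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		if info.Mode()&os.ModeDevice != 0 {
			// A device node: write its archive header from info, skip its content.
			fmt.Println("device node:", path)
			return nil
		}
		// ... regular files would be opened and added to the tar here ...
		return nil
	})
}

func main() {
	if err := walkRootfs("rootfs"); err != nil {
		fmt.Println("walk failed:", err)
	}
}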

Copied from original issue: rkt/rkt#198

CRM: allow overriding/extending app environment

It would be convenient to be able to create a single image and then use it in different environments (production, staging, qa, ...). This is currently not possible because there is no way to parametrize the executables which are running inside a container. The executable, its arguments, and environment variables are defined in the ImageManifest and the CRM can not override those.

I propose adding a new field to CRM apps.app with the name environment, whose value is the same as ImageManifest app.environment. The executor would then merge the CRM app environment into the IM app environment (allowing the former to override the environment of the latter).
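
A hypothetical CRM app fragment under this proposal (the environment field is the new part) could look like:

"apps": [
    {
        "app": "example.com/reduce-worker-1.0.0",
        "imageID": "sha512-...",
        "environment": [
            {"name": "STAGE", "value": "production"}
        ]
    }
]

The executor would merge these entries over the ImageManifest's app.environment, with the CRM values winning on conflict.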

Use SHA-512 because it's faster

From @davidstrauss on December 3, 2014 4:32

A modern 64-bit processor can compute a SHA-512 hash in about 50-75% of the time it takes to create a SHA-256 hash. (This is because SHA-512 primarily uses 64-bit data types.) Considering that container images can become rather large, this is substantial.

If the concern is over the length of the hash output, SHA-512/256 is a nice standard; it's based on computing a SHA-512 and only using half of the resulting hash.
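
For illustration, Go's standard crypto/sha512 package already exposes this variant (assuming Go 1.5 or later):

package main

import (
	"crypto/sha512"
	"fmt"
)

func main() {
	data := []byte("example ACI contents")
	// SHA-512/256: SHA-512's 64-bit compression function, truncated to 256 bits.
	sum := sha512.Sum512_256(data)
	fmt.Printf("sha512-256: %x\n", sum[:])
}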

Copied from original issue: rkt/rkt#187

spec: dependencies details > fs paths

I know we must aim for simplicity, but thinking of real-life use cases, I thought we'd end up wanting to specify the from and to paths between dependencies?

Image Manifest sample (example)

...
        "dependencies": [
        {
            "app": "example.com/reduce-worker-base",
            "imageID": "sha512-...",
            "srcPath": "rootfs/opt/mylib/",
            "dstPath" : "rootfs/opt/myapp/vendors/reduce-worker-base",
            "labels": [
                {
                    "name": "os",
                    "value": "linux"
                },
                {
                    "name": "env",
                    "value": "canary"
                }
            ]
        }
    ],
...

What do you think?

Versioned Container

Already asked at rocket, but moved over to the appc project.

It would be great to have versioned containers, as seen in docker, where we have a versioned filesystem like the one described here:

http://0pointer.net/blog/revisiting-how-we-put-together-linux-systems.html

Especially the use of subvolumes, and the way changes can be sent and received, could be interesting.

I'm a bit in a hurry, but I promised to at least open the issue. Some ideas will follow.

Log Metadata And App Log metadata

Log metadata about the container log. Retrievable at http://$AC_METADATA_URL/acMetadata/v1/log

App log metadata about the process log run in the container. Retrievable at http://$AC_METADATA_URL/acMetadata/v1/apps/${ac_app_name}/log

spec: clarify compression/encryption ordering of ACIs

From @sarnowski on December 8, 2014 15:4

Hello,

I am currently implementing writing and reading of ACIs and I am unsure about exactly where encryption fits into the ordering. I think I know how it's meant in the specification, but maybe it makes sense to make it clearer. From my understanding the procedure is as follows:

Writing:

  1. write tarball
  2. hash tarball with sha256 for *.sig file (currently, maybe 512 in the future?)
  3. [optionally] compress with gzip/bzip2/xz
  4. [optionally] encrypt

Reading:

  1. [optionally] decrypt with a key (how do I determine the key and the algorithm for it? how do I find out that I have to decrypt first - try it out if no compression file format is detected?)
  2. [optionally] uncompress the ACI (is there any way planned to more easily detect the file format besides introspection?)
  3. hash tarball and compare with hash from the *.sig file
  4. unpack tarball
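
In shell terms (a hypothetical sketch; the spec itself only shows the first three commands, and the encryption step and key handling are not pinned down), the writing side would be something like:

tar cvf reduce-worker.tar app rootfs
gpg --output reduce-worker.sig --detach-sig reduce-worker.tar    # 2. sign/hash the tarball
gzip reduce-worker.tar -c > reduce-worker.aci                    # 3. optional compression
gpg --output reduce-worker.aci.gpg --symmetric --cipher-algo AES256 reduce-worker.aci    # 4. optional encryption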

Thanks,
Tobi

Copied from original issue: rkt/rkt#233

Spec difficult to understand surrounding mount namespace

From @eparis on December 1, 2014 20:26

https://github.com/coreos/rocket/blob/master/app-container/SPEC.md#container-runtime-manifest

States that all apps in a container share the same mount namespace. But then goes on to talk about how volumes are mounted into specific apps.

Apps sharing the mount namespace seems really hard to get right, so I'm assuming that different apps in the same container have different mount namespaces?

Copied from original issue: rkt/rkt#127

spec: minimal set of os-specific device nodes

For the apps to be able to have reasonable expectations of what to find at places like /dev and /proc, the spec should probably explicitly state what the execution environment must provide.

For linux we could just conform to http://www.freedesktop.org/wiki/Software/systemd/ContainerInterface/, though there's quite a bit going on there, some of which probably isn't necessary in the app's root. Perhaps we should instead require LSB conformance: http://refspecs.linuxfoundation.org/LSB_2.0.0/LSB-Core/LSB-Core/execenvfhs.html

What about the other operating systems?

Is it considered beyond the scope of the spec?

spec: signing clarification

The spec says:

Image archives MUST be a tar formatted file. The image may be optionally compressed with gzip, bzip2, or xz. After compression, images may also be encrypted with AES symmetric encryption.

tar cvf reduce-worker.tar app rootfs
gpg --output reduce-worker.sig --detach-sig reduce-worker.tar
gzip reduce-worker.tar -c > reduce-worker.aci

At the same time, there is no mention of the second command, which creates the detached signature of the tar. Is it optional or mandatory?

Spec first read feedback

From @thockin on December 2, 2014 2:25

Collected notes as I read through. Sorry for the length. If any of these become worth discussing, we can fork to different topics.

This changes the established naming from what Docker calls a container to what Kubernetes calls a pod. I think that changing the meaning of "container" at this point might be detrimental to the overall comprehension of the system. I would propose to keep container to mean what you call app-container and define a different word for a set of containers.

Example use case talks about "puts them into its local on-disk cache" and "extracts two copies". This sets off immediate alarm bells for me - disk IO is the single most contended resource, in our experience. To be successful, this spec really must be implementable with a minimum of disk IO. For example, I should be able to mount a pre-built cache of images and satisfy container run requests. That's not to say that disk IO can not satisfy the spec, but it must not be a requirement.

SIGTERM should be just one kind of termination signal

Files in an image "must maintain all of their original properties" - can this include capabilities?

One thing Docker does well is differentiate WHAT to run from HOW to run it. Does this address that idea? There's some overlap in lifecycle stuff, but some other things like resources are clearly dependent on how a container will be used. How about command line flags? Being able to take a pre-built container, such as ubuntu, and run things in it without creating and pushing a new container is powerful.

That rkt is not a daemon is very similar to what we were pushing with lmctfy. However, there are things that (currently) are hard to do without any daemon - an example we run up against is prioritized OOM handling. This spec should carefully consider the strata of the overall system and how some things cross between them.

Volumes are under-specified. Can I mount them at different places in each app? Can I not mount them in some apps?

Network: Does each APP get a network or each container? I'm a bit unclear on the naming and distinction between container and app. Does rkt provide "out of the box" network at all? Docker's got this part pretty well, even if it is terribly slow.

AC_APP_NAME - what does "the entrypoint that this process was defined from" mean?

Isolators: Who do I have to beg to NOT reinvent this, and instead use something derived from LMCTFY? We have captured YEARS of development in LMCTFY's concepts. For example, exposing CPU shares is a mess. If there are things about LMCTFY's structures that don't work, let's iron those out instead.

You say "if the ACE is looking for example.com/reduce-worker-1.0.0 it will request: https://example.com/reduce-worker?ac-discovery=1". What principle lead to some piece of software knowing where to split that string?

You make some remark about /etc/hosts being assumed present and parsed by libc. Does this mean you won't provide /etc/hosts, /etc/resolv.conf, etc? That seems like a bad idea.

You say config-drives "make assumptions about filesystems which are not appropriate for all environments" - can you explain? Why is it not sufficient to say that config can also be found in a volume, if the user so prefers? The host environment should be able to provide that.

"name": "example.com/reduce-worker-1.0.0" - why is version just jammed in there? Or are you trying to say "version is opaque" and if you want it, you have to embed it in the name?

Can we define user and group as name strings, not as numerics (and why are they string numbers?)

Why do you need a private network flag? This hasn't really answered how networking will work. This has been a huge PITA with Docker because EVERYONE has a different idea of what they want in networks. You can NOT support flags for all of them, so I'd argue to support none of them and instead define that as a plugin, and ship some examples of network plugins but leave it open.

Copied from original issue: rkt/rkt#151

discovery: walk up the tree

From @philips on December 1, 2014 7:43

For example, if the user has example.com/project/subproject, we first try example.com/project/subproject; if we don't find a meta tag there, we try example.com/project, and then example.com.
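
A sketch of the candidate-generation logic this implies (hypothetical, not the actual discovery code):

package main

import (
	"fmt"
	"strings"
)

// discoveryCandidates returns the discovery URLs to try for a name like
// "example.com/project/subproject", walking up one path segment at a time.
func discoveryCandidates(name string) []string {
	parts := strings.Split(name, "/")
	var urls []string
	for i := len(parts); i >= 1; i-- {
		urls = append(urls, "https://"+strings.Join(parts[:i], "/")+"?ac-discovery=1")
	}
	return urls
}

func main() {
	for _, u := range discoveryCandidates("example.com/project/subproject") {
		fmt.Println(u)
	}
}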

Copied from original issue: rkt/rkt#111

validation: sidekick aci should remove tmp file in post-run (or make a note)

When the ace-validator-sidekick image is run per the instructions on the Rocket "App Container Basics" https://github.com/coreos/rocket#app-container-basics it will leave a file /tmp/sidekick. A subsequent run will fail noting "/db/sidekick unexpectedly exists".

Either the ace-validator-sidekick ACI's post-run should remove the file so that subsequent runs will pass, or a note should be printed to STDOUT that the file must be removed before a follow-up run will pass.

spec: missing expected behaviour

This is more of a general-purpose issue to mention that the spec lacks information on expected behaviours. While reading the spec, many times I thought "how would I handle this particular use case if I had to implement it?"

For example, for pathWhitelist:

  • Does the runtime (container executor) just filter the allowed paths out of dependency ACIs, or will it raise an error if some dependency contains files outside the allowed paths?
  • "This field is only required if the app has dependencies and you wish to remove files from the rootfs before running the container" => OK, I now have my answer (files from a dependency that fall outside the listed paths are just dropped), but will it raise an error if the field is missing? (Does "required" mean "mandatory" here? Not sure.)
  • "an empty value means that all files in this image and any dependencies will be available in the rootfs." => Here again: what about a missing value?

actool: take manifests in yaml format

From @philips on December 3, 2014 18:30

JSON is difficult for people to write and a number of people have asked for
another format for the manifest. Instead of complicating the spec with multiple
serialization formats for the ACI manifests add a feature to actool build to
take manifest files with a .yaml extension and convert the YAML to JSON before
laying the manifest down.

/cc @jonboulle thoughts on this?
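
For what it's worth, a sketch of the conversion (assuming the third-party gopkg.in/yaml.v2 package; note that yaml.v2 decodes mappings as map[interface{}]interface{}, whose keys must be converted to strings before encoding/json will accept them):

package main

import (
	"encoding/json"
	"fmt"

	yaml "gopkg.in/yaml.v2"
)

// toJSONValue rewrites yaml.v2's map[interface{}]interface{} values into
// map[string]interface{} so that encoding/json can marshal them.
func toJSONValue(v interface{}) interface{} {
	switch v := v.(type) {
	case map[interface{}]interface{}:
		m := make(map[string]interface{}, len(v))
		for k, val := range v {
			m[fmt.Sprint(k)] = toJSONValue(val)
		}
		return m
	case []interface{}:
		for i, val := range v {
			v[i] = toJSONValue(val)
		}
		return v
	default:
		return v
	}
}

func main() {
	in := []byte("acKind: ImageManifest\nname: my-app\n")
	var doc interface{}
	if err := yaml.Unmarshal(in, &doc); err != nil {
		panic(err)
	}
	out, err := json.MarshalIndent(toJSONValue(doc), "", "    ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}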

Copied from original issue: rkt/rkt#195

Separate tools (golang) from spec

I know we just created this repo, but I propose moving the tools such as the schema and validator to separate repos called appc-tools-golang and/or appc-schema-golang.

That way, we can keep spec issues isolated here, and bugs in the tools in the other repos.

I can possibly do this, if there is a consensus.

spec: support HTTP headers for discovery

From @bacongobbler on December 4, 2014 17:36

I was sifting through the app-container spec on image discovery when I stumbled across this:

If simple discovery fails, then we use HTTPS+HTML meta tags to resolve an app name to a downloadable URL.

This works for the general use case, but some URLs may not have a content type of text/html e.g. in the case of an API.

For example, if I had a SaaS product and I wanted to have a discoverable URL to our API's ACI, I'd like a URL like https://example.com/api. The top-level domain (https://example.com/) would be our landing page (implying text/html), but https://example.com/api may be where our SaaS's API begins (in this case, application/json or some equivalent).

Would it be possible to extend the image discovery to use HTTP headers (HTTP_X_AC_DISCOVERY?) in the case that the content-type is not text/html? Is there a way to work around this, or would a PR for this type of extension be appreciated? :)

Copied from original issue: rkt/rkt#210

discovery: be more discerning about HTTP errors

Right now discovery just recursively walks up the tree on any error, alternately trying https and http. For example, when discovering the name example.com/foo/bar/baz, it will make requests like so:
https://example.com/foo/bar/baz
http://example.com/foo/bar/baz
https://example.com/foo/bar/
http://example.com/foo/bar/
...

However, for a certain class of errors it doesn't make much sense to do this; for example, if dialing example.com:443 times out the first time, it's probably not worth retrying it for every branch. Some errors should just propagate directly (fail early).
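
A sketch of how an implementation might classify such errors (hypothetical):

package main

import (
	"fmt"
	"net"
	"net/http"
)

// shouldWalkUp reports whether a discovery failure for one URL justifies
// trying the next higher path, or should instead abort discovery early.
func shouldWalkUp(err error) bool {
	if nerr, ok := err.(net.Error); ok && nerr.Timeout() {
		// A timeout dialing this host will almost certainly recur for
		// every higher path on the same host: fail early.
		return false
	}
	// Other failures may be specific to this URL: keep walking up.
	return true
}

func main() {
	_, err := http.Get("https://example.com/foo/bar/baz?ac-discovery=1")
	if err != nil {
		fmt.Println("walk up?", shouldWalkUp(err))
	}
}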
