
cri-o's Introduction

CRI-O logo

CRI-O - OCI-based implementation of Kubernetes Container Runtime Interface


Compatibility matrix: CRI-O ⬄ Kubernetes

CRI-O follows the Kubernetes release cycles with respect to its minor versions (1.x.y). Patch releases (1.x.z) for Kubernetes are not in sync with those from CRI-O, because Kubernetes schedules them monthly, whereas CRI-O provides them only if necessary. If a Kubernetes release goes End of Life, the corresponding CRI-O version reaches End of Life as well.

This means that CRI-O also follows the Kubernetes n-2 release version skew policy when it comes to feature graduation, deprecation, or removal. This also applies to features which are independent of Kubernetes. Nevertheless, feature backports to supported release branches, which are independent of Kubernetes or other tools like cri-tools, are still possible. This allows CRI-O to decouple from the Kubernetes release cycle and to have enough flexibility when it comes to implementing new features. Every feature to be backported will be a case-by-case decision of the community, while the overall compatibility matrix must not be compromised.

For more information visit the Kubernetes Version Skew Policy.

| CRI-O | Kubernetes | Maintenance status |
| ----- | ---------- | ------------------ |
| main branch | master branch | Features from the main Kubernetes repository are actively implemented |
| release-1.x branch (v1.x.y) | release-1.x branch (v1.x.z) | Maintenance is manual, only bugfixes will be backported |

The release notes for CRI-O are hand-crafted and can be continuously retrieved from our GitHub pages website.

What is the scope of this project?

CRI-O is meant to provide an integration path between OCI conformant runtimes and the Kubelet. Specifically, it implements the Kubelet Container Runtime Interface (CRI) using OCI conformant runtimes. The scope of CRI-O is tied to the scope of the CRI.

At a high level, we expect the scope of CRI-O to be restricted to the following functionalities:

  • Support multiple image formats including the existing Docker image format
  • Support for multiple means to download images including trust & image verification
  • Container image management (managing image layers, overlay filesystems, etc)
  • Container process lifecycle management
  • Monitoring and logging required to satisfy the CRI
  • Resource isolation as required by the CRI

What is not in the scope of this project?

  • Building, signing and pushing images to various image storages
  • A CLI utility for interacting with CRI-O. Any CLIs built as part of this project are only meant for testing this project, and there are no guarantees of backward compatibility for them.

CRI-O is an implementation of the Kubernetes Container Runtime Interface (CRI) that will allow Kubernetes to directly launch and manage Open Container Initiative (OCI) containers.

The plan is to use OCI projects and best-of-breed libraries for the different aspects of container management.

It is currently in active development in the Kubernetes community through the design proposal. Questions and issues should be raised in the Kubernetes sig-node Slack channel.

Roadmap

A roadmap that describes the direction of CRI-O can be found here. The project is tracking all ongoing efforts as part of the Feature Roadmap GitHub project.

CI images and jobs

CRI-O's CI is split up between GitHub Actions and OpenShift CI (Prow). The virtual machine images used for the Prow jobs are built periodically in dedicated jobs.

The jobs are maintained from the openshift/release repository and define workflows used for the particular jobs. The actual job definitions can be found in the same repository under ci-operator/jobs/cri-o/cri-o/cri-o-cri-o-main-presubmits.yaml for the main branch as well as the corresponding files for the release branches. The base image configuration for those jobs is available in the same repository under ci-operator/config/cri-o/cri-o.

Commands

| Command | Description |
| ------- | ----------- |
| crio(8) | OCI Kubernetes Container Runtime daemon |

Examples of commandline tools to interact with CRI-O (or other CRI compatible runtimes) are Crictl and Podman.

Configuration

| File | Description |
| ---- | ----------- |
| crio.conf(5) | CRI-O configuration file |
| policy.json(5) | Signature verification policy file(s) |
| registries.conf(5) | Registries configuration file |
| storage.conf(5) | Storage configuration file |

Security

The security process for reporting vulnerabilities is described in SECURITY.md.

OCI Hooks Support

You can configure CRI-O to inject OCI Hooks when creating containers.

CRI-O Usage Transfer

We provide useful information for transferring operations and development workflows to infrastructure that utilizes CRI-O.

Communication

For async communication and long running discussions please use issues and pull requests on the GitHub repo. This will be the best place to discuss design and implementation.

For chat communication, we have a channel on the Kubernetes slack that everyone is welcome to join and chat about development.

Awesome CRI-O

We maintain a curated list of links related to CRI-O. Did you find something interesting on the web about the project? Awesome, feel free to open up a PR and add it to the list.

Getting started

Installing CRI-O

To install CRI-O, you can follow our installation guide. Alternatively, if you'd rather build CRI-O from source, check out our setup guide. We also provide a way of building static binaries of CRI-O via nix as part of the cri-o/packaging repository. Those binaries are available for every successfully built commit on our Google Cloud Storage bucket. This means that the latest commit can be installed via our convenience script:

> curl https://raw.githubusercontent.com/cri-o/packaging/main/get | bash

The script automatically verifies the uploaded sigstore signatures as well, if the local system has cosign available in its $PATH. The same applies to the SPDX based bill of materials (SBOM), which gets automatically verified if the bom tool is in $PATH.

Besides amd64, we also support the arm64, ppc64le, and s390x architectures. The architecture can be selected via the script, too:

curl https://raw.githubusercontent.com/cri-o/packaging/main/get | bash -s -- -a arm64

It is also possible to select a specific git SHA or tag:

curl https://raw.githubusercontent.com/cri-o/packaging/main/get | bash -s -- -t v1.21.0

The above script resolves to the download URL of the static binary bundle tarball matching the format:

https://storage.googleapis.com/cri-o/artifacts/cri-o.$ARCH.$REV.tar.gz

Where $ARCH can be amd64, arm64, ppc64le, or s390x, and $REV can be any git SHA or tag. Please be aware that using the latest main SHA might cause a race if the CI has not yet finished publishing the artifacts, or has failed to do so.
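The URL scheme above can be sketched as a small helper; `artifactURL` is a hypothetical name, not part of the CRI-O tooling:

```go
package main

import "fmt"

// artifactURL builds the download URL of the static binary bundle for a
// given architecture and git SHA or tag, following the
// https://storage.googleapis.com/cri-o/artifacts/cri-o.$ARCH.$REV.tar.gz scheme.
func artifactURL(arch, rev string) string {
	return fmt.Sprintf("https://storage.googleapis.com/cri-o/artifacts/cri-o.%s.%s.tar.gz", arch, rev)
}

func main() {
	fmt.Println(artifactURL("arm64", "v1.21.0"))
}
```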

We also provide a Software Bill of Materials (SBOM) in the SPDX format for each bundle. The SBOM is available at the same URL as the bundle itself, but suffixed with .spdx:

https://storage.googleapis.com/cri-o/artifacts/cri-o.$ARCH.$REV.tar.gz.spdx

Running Kubernetes with CRI-O

Before you begin, you'll need to start CRI-O.

You can run a local version of Kubernetes with CRI-O using local-up-cluster.sh:

  1. Clone the Kubernetes repository
  2. From the Kubernetes project directory, run:
CGROUP_DRIVER=systemd \
CONTAINER_RUNTIME=remote \
CONTAINER_RUNTIME_ENDPOINT='unix:///var/run/crio/crio.sock' \
./hack/local-up-cluster.sh

For more guidance on running CRI-O, visit our tutorial page.

The HTTP status API

By default, CRI-O exposes a gRPC API to fulfill the Container Runtime Interface (CRI) of Kubernetes. Besides this, there is an additional HTTP API to retrieve further runtime status information about CRI-O. Please be aware that this API is not considered stable, and production use cases should not rely on it.

On a running CRI-O instance, we can access the API via an HTTP transfer tool like curl:

$ sudo curl -v --unix-socket /var/run/crio/crio.sock http://localhost/info | jq
{
  "storage_driver": "btrfs",
  "storage_root": "/var/lib/containers/storage",
  "cgroup_driver": "systemd",
  "default_id_mappings": { ... }
}

The following API entry points are currently supported:

| Path | Content-Type | Description |
| ---- | ------------ | ----------- |
| /info | application/json | General information about the runtime, like storage_driver and storage_root. |
| /containers/:id | application/json | Dedicated container information, like name, pid and image. |
| /config | application/toml | The complete TOML configuration (defaults to /etc/crio/crio.conf) used by CRI-O. |
| /pause/:id | application/json | Pause a running container. |
| /unpause/:id | application/json | Unpause a paused container. |

The subcommand crio status can be used to access the API with a dedicated command line tool. It supports all API endpoints via the dedicated subcommands config, info and containers, for example:

$ sudo crio status info
cgroup driver: systemd
storage driver: btrfs
storage root: /var/lib/containers/storage
default GID mappings (format <container>:<host>:<size>):
  0:0:4294967295
default UID mappings (format <container>:<host>:<size>):
  0:0:4294967295

Metrics

Please refer to the CRI-O Metrics guide.

Tracing

Please refer to the CRI-O Tracing guide.

Container Runtime Interface special cases

Some aspects of the Container Runtime are worth some additional explanation. These details are summarized in a dedicated guide.

Debugging tips

Having an issue? There are some tips and tricks for debugging located in our debugging guide.

Adopters

An incomplete list of adopters of CRI-O in production environments can be found here. If you're a user, please help us complete it by submitting a pull request!

Weekly Meeting

A weekly meeting is held to discuss CRI-O development. It is open to everyone. The details to join the meeting are on the wiki.

Governance

For more information on how CRI-O is governed, take a look at the governance file.



cri-o's Issues

Add support for AppArmor

AppArmor profiles are passed as annotations in the CRI. We need to process those annotation keys and set the AppArmor profile in the runc config.json.

Fix seccomp for localhost profiles

Seccomp profiles are passed as annotations in the CRI. We need to process those annotation keys and set the Seccomp profile in the runc config.json.

runtime/default is done (#211). Still waiting for kubernetes/kubernetes#36997:

localhost/ is relative to the node's local seccomp profile root, which is defined in the kubelet, but the runtime doesn't know it. We should pass the full profile path in the CRI.
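A resolution along the lines the issue asks for could be sketched like this; resolveSeccompProfile and the seccomp root path are hypothetical, since today only the kubelet knows the real root:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// resolveSeccompProfile maps a CRI seccomp annotation value to a profile
// path. "localhost/<name>" is relative to the node's seccomp profile root,
// which only the kubelet knows -- exactly the problem described above.
func resolveSeccompProfile(profile, seccompRoot string) (string, error) {
	switch {
	case profile == "" || profile == "unconfined":
		return "", nil // no profile applied
	case profile == "runtime/default":
		return "", nil // use the runtime's built-in default
	case strings.HasPrefix(profile, "localhost/"):
		name := strings.TrimPrefix(profile, "localhost/")
		return filepath.Join(seccompRoot, name), nil
	default:
		return "", fmt.Errorf("unknown seccomp profile %q", profile)
	}
}

func main() {
	p, err := resolveSeccompProfile("localhost/audit.json", "/var/lib/kubelet/seccomp")
	if err != nil {
		panic(err)
	}
	fmt.Println(p)
}
```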

Why pushing is not in scope

Hi there,

Aren't pulling images and pushing images a pair of operations? Then why is pulling in scope but pushing not? The same question applies to verifying and signing.

Tracking RunV Support in cri-o/ocid

Tracking RunV Support in cri-o/ocid

Recently we tried a little bit to play with cri-o to run runV. For those who are not familiar with runV: runV is an OCI-compatible runtime which leverages a hypervisor as an isolation sandbox. It is portable (multi-arch, multi-hypervisor) and lightweight like a Linux container, while as secure as a normal VM.

Compared to a remote runtime, cri-o + runV is straightforward and efficient. On the other hand, both cri-o and runV are young projects, so there is still a lot of work to do. Since we have some experience with OCI runtimes, we can help with both the RunC and RunV integration. Let's begin the first step from here.

Pod/container management

Generally, we reached agreement that the runV integration will try its best to match the runC workflow, including conmon. Thus, a lightweight proxy will be needed to meet the requirements from the conmon side. Besides that, some features are expected to happen on the runV side:

  • runV refactoring: the existing containerd dependency will be refactored out (but we will keep the old code for test usage) while using file-lock based management (no daemon required, following runC-style management)

  • stream: the host-side lightweight proxy will handle I/O between hyperstart and the client (the workaround we mentioned on github for --console-socket of runC)

  • Pod level cgroups. A hypervisor-based runtime always requires cpu/mem to be set before starting the sandbox; we will get these values from the Pod level cgroups (kubernetes/kubernetes#31546, and we have also implemented the same thing in kubernetes-retired/frakti#46)

  • Infra container. RunV will try its best to match cri-o's workflow, so a 'useless' infra container will be launched inside the RunV sandbox, but please note that most of the infra container's responsibilities are actually handled by the RunV sandbox & hyperstart (the init process inside it), see: #168 (comment). We are also open to discussion about eliminating this infra container, if possible.

runV CLI API

Mainly reference: opencontainers/runtime-spec#513

Others

  • Separate runV annotations from labels
  • Attach/exec/port-mapping in favor of k8s CRI streaming:

k8s CRI streaming issue & PR:
kubernetes/kubernetes#29579
kubernetes/kubernetes#35330

streaming design doc https://docs.google.com/document/d/1OE_QoInPlVCK9rMAx9aybRmgFiVjHpJCHI9LrfdNM_s/edit?usp=sharing

  • logging in favor of k8s CRI logging:

k8s CRI logging issue & proposal:
kubernetes/kubernetes#30709
kubernetes/kubernetes#33111

Network

  • CNI support for hypervisor-based runtimes is under discussion: containernetworking/cni#251. Since it still needs time (at least two releases later), we are working on supporting bridge-based networking such as flannel. (claimed by: @heartlock)

/cc the authors of runV: @laijs @gao-feng

and /cc hyper + k8s community members who will help (and already helped) this work: @feiskyer @resouer @Crazykev @YaoZengzeng @xlgao-zju

`libseccomp` is required for compilation

If you don't have the libseccomp package installed locally, you will get a compile error.
The README may need an update to add a hint about installing the libseccomp package, since seccomp support was added.

don't manage service files with `make {,un}install`

Personally I don't like makefiles that modify a bunch of system state if not necessary. In addition, it actually makes it not possible for us to use the stock Makefile on openSUSE -- in openSUSE we package service files in a very fun and wonderful way.

And especially bad is that make uninstall actually stops and disables services. While it's not a good idea to have a running service while removing files, this could cause other issues. In general IMO we should completely split out service handling so it's not in the makefile at all (or if it is, it's somewhere optional).

Using CFLAGS breaks compilation

If CFLAGS or LIBS is passed as a parameter to make the compilation will likely fail.

$ make CFLAGS="-g"
cc -g   -c -o conmon.o conmon.c
conmon.c:15:18: fatal error: glib.h: No such file or directory
 #include <glib.h>
                  ^
compilation terminated.
<builtin>: recipe for target 'conmon.o' failed
make: *** [conmon.o] Error 1

This happens because the new CFLAGS replaces the internal one where includes are being set.

Building issues

To be fair, I have an unconventional GOPATH setup. (I have my home directory set as my GOPATH, then I put all projects under ~/src/project with a symlink at ~/src/github.com/* to ~/src. The idea being that I don't need the three-level-deep project hierarchy that is mandated by Go.)

However, running make causes this interesting error:

go build -o ocid ./cmd/server/main.go
# command-line-arguments
cmd/server/main.go:62: cannot use service (type *server.Server) as type "ocid/vendor/github.com/kubernetes/kubernetes/pkg/kubelet/api/v1alpha1/runtime".RuntimeServiceServer in argument to "ocid/vendor/github.com/kubernetes/kubernetes/pkg/kubelet/api/v1alpha1/runtime".RegisterRuntimeServiceServer:
        *server.Server does not implement "ocid/vendor/github.com/kubernetes/kubernetes/pkg/kubelet/api/v1alpha1/runtime".RuntimeServiceServer (wrong type for ContainerStatus method)
                have ContainerStatus("github.com/kubernetes-incubator/ocid/vendor/golang.org/x/net/context".Context, *"github.com/kubernetes-incubator/ocid/vendor/github.com/kubernetes/kubernetes/pkg/kubelet/api/v1alpha1/runtime".ContainerStatusRequest) (*"github.com/kubernetes-incubator/ocid/vendor/github.com/kubernetes/kubernetes/pkg/kubelet/api/v1alpha1/runtime".ContainerStatusResponse, error)
                want ContainerStatus("ocid/vendor/golang.org/x/net/context".Context, *"ocid/vendor/github.com/kubernetes/kubernetes/pkg/kubelet/api/v1alpha1/runtime".ContainerStatusRequest) (*"ocid/vendor/github.com/kubernetes/kubernetes/pkg/kubelet/api/v1alpha1/runtime".ContainerStatusResponse, error)
cmd/server/main.go:63: cannot use service (type *server.Server) as type "ocid/vendor/github.com/kubernetes/kubernetes/pkg/kubelet/api/v1alpha1/runtime".ImageServiceServer in argument to "ocid/vendor/github.com/kubernetes/kubernetes/pkg/kubelet/api/v1alpha1/runtime".RegisterImageServiceServer:
        *server.Server does not implement "ocid/vendor/github.com/kubernetes/kubernetes/pkg/kubelet/api/v1alpha1/runtime".ImageServiceServer (wrong type for ImageStatus method)
                have ImageStatus("github.com/kubernetes-incubator/ocid/vendor/golang.org/x/net/context".Context, *"github.com/kubernetes-incubator/ocid/vendor/github.com/kubernetes/kubernetes/pkg/kubelet/api/v1alpha1/runtime".ImageStatusRequest) (*"github.com/kubernetes-incubator/ocid/vendor/github.com/kubernetes/kubernetes/pkg/kubelet/api/v1alpha1/runtime".ImageStatusResponse, error)
                want ImageStatus("ocid/vendor/golang.org/x/net/context".Context, *"ocid/vendor/github.com/kubernetes/kubernetes/pkg/kubelet/api/v1alpha1/runtime".ImageStatusRequest) (*"ocid/vendor/github.com/kubernetes/kubernetes/pkg/kubelet/api/v1alpha1/runtime".ImageStatusResponse, error)
make: *** [Makefile:6: ocid] Error 2
make  24.54s user 1.05s system 598% cpu 4.277 total

From what I can see, it looks like my Go install appears to be getting the wrong type for golang.org/x/net/context.Context and instead appends a github.com/kubernetes-incubator to the type name. Here's my Go version -- would Go 1.7 fix this problem?

% go version
go version go1.6.3 linux/amd64

OCID needs a yaml config file

I don't want OCID to make the same mistake that docker did, although even docker finally has a config file.

All options on the CLI should also be specifiable in a config file. The CLI options should override it. One problem I foresee is that the option handling for OCID does not support default boolean settings.

I would expect OCID to read in the config file first, and then set up the CLI options with their values defaulted to the config file values; but if the boolean options cannot take a default, then we won't be able to tell the difference between whether the option was set in the config or overridden by a CLI option.

Communications outside of github

I have registered #ocid on freenode. I would also like to set up a mailing list for communications about OCID, but I'm not sure which site would be appropriate for this.

Storage/image handling should be configurable

Specifically what I mean by this is that we need to make sure that we make it easy for constrained distributions to disable and enforce certain features. Namely:

  • When the image code is merged, it's likely that we're going to be downloading images from the internet by default. However, it should be possible to disable this entirely using the configuration file. Certain deployments cannot start downloading images from the internet, and can only use the images packaged with the filesystem. This brings me to my second point ...
  • It must be possible for the image code to take OCI bundles from the filesystem, and treat the filesystem as the source. This will allow for distributions to explicitly manage updating of images (as well as all of the advanced package management offered by distributions). I'm not sure how we should make the image:tag mapping work, but I think having some cache of images that distributions can put files into might work.

There's probably a few other things, but since that code hasn't been merged I haven't thought of them yet. I've just opened this so we can keep track of it.

Currently runc needs to be built and installed and not installed using yum/dnf

In the getting started guide it is important to point out that currently runc and cri-o are being updated regularly along with some dependencies. Installing runc packages will not guarantee that cri-o will run successfully at this time. Therefore it is best to install runc from a local build. Point to the runc repo and its build/install instructions.

If you have runc installed already from a package, it is likely that cri-o will not run correctly despite installing runc from a build. The already installed runc may be in the PATH and get picked up. It is best to remove the runc package from your system and only have the current build installed.

Perhaps make install of runc could check whether a runc package is already installed and prompt the installer to remove the previous package.

confusion over the ocid name?

After reading the updated repo description, I believe I understand the intent of this work, but I am concerned that naming it only ocid will be both confusing and a possible infringement of the OCI trademarks. I see the following options:

  • find some examples of cases where this sort of naming based on a standard is common even when the standard offers formal certification (which is in the works now with the OCI).
  • go out to the OCI Trademark Board for clarification on their AUP for the name, i.e. can we name this repo/tooling ocid but not claim "OCI Certified" or "OCI Compliant" without going through their (still nascent) certification process?
  • rename

Thoughts, dear maintainers? (@mrunalp @runcom @yujuhong @cyphar @mikebrow )

Add support for non-numeric user

We need to look up the user in /etc/passwd in the rootfs, if it exists, and convert it into a uid/gid to be set in the runc config.json.
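A lookup along these lines could be sketched as follows; lookupUser is a hypothetical helper operating on the content of the rootfs's etc/passwd file:

```go
package main

import (
	"bufio"
	"fmt"
	"strconv"
	"strings"
)

// lookupUser scans passwd-format content (as read from the container
// rootfs's etc/passwd) and returns the uid/gid for a user name.
func lookupUser(passwd, name string) (uid, gid int, err error) {
	s := bufio.NewScanner(strings.NewReader(passwd))
	for s.Scan() {
		// Each line: name:password:uid:gid:gecos:home:shell
		fields := strings.Split(s.Text(), ":")
		if len(fields) < 4 || fields[0] != name {
			continue
		}
		uid, err = strconv.Atoi(fields[2])
		if err != nil {
			return 0, 0, err
		}
		gid, err = strconv.Atoi(fields[3])
		return uid, gid, err
	}
	return 0, 0, fmt.Errorf("user %q not found", name)
}

func main() {
	passwd := "root:x:0:0:root:/root:/bin/bash\nredis:x:999:998::/data:/bin/sh\n"
	uid, gid, err := lookupUser(passwd, "redis")
	if err != nil {
		panic(err)
	}
	fmt.Println(uid, gid)
}
```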

getopt() used in a glib application

conmon uses getopt() even though it's a glib-based application, and therefore it could use the much more modern GOptionContext mechanism. Given that the dependency already exists, it'd be good to port the code. We'd gain --long-parameter support along with an automatic --help parameter and many other goodies.

About the CRI

Hi there,

I see the docs say cri-o implements the Kubelet Container Runtime Interface; does that mean the proposed one or something else? I didn't find a spec under the master branch of kubernetes/kubernetes.

Container ID does not exist error...

Repro:

Terminal 1:
sudo ocid --debug --runtime $(which runc)

Terminal 2:

sudo ocic pod create --config test/testdata/sandbox_config.json
sudo ocic ctr create --pod ca28917626e92b123e6ca09180b5f13feea012da925841584b0b255b55b4056b --config test/testdata/container_redis.json

Terminal 1 output:

INFO[0256] pod container state &{{ default-podsandbox1-0-infra running 8605 /var/lib/ocid/sandboxes/ca28917626e92b123e6ca09180b5f13feea012da925841584b0b255b55b4056b map[ocid/labels:{"group":"test"} ocid/log_path:. ocid/name:default-podsandbox1-0 owner:hmeng ocid/annotations:{"owner":"hmeng"} ocid/container_id:8d33f6c911e54f6e8091b67b8d62577c725b0b7b467f421a5b648c796da0717d ocid/container_name:default-podsandbox1-0-infra]} 2016-10-12 18:57:42.943754326 +0000 UTC 2016-10-12 13:57:43.013389411 -0500 CDT 0001-01-01 00:00:00 +0000 UTC 0} 
...
           _.-``__ ''-._                                             
      _.-``    `.  `_.  ''-._           Redis 3.2.4 (00000000/0) 64 bit
  .-`` .-```.  ```\/    _.,_ ''-._                                   
 (    '      ,       .-`  | `,    )     Running in standalone mode
 |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
 |    `-._   `._    /     _.-'    |     PID: 1
  `-._    `-._  `-./  _.-'    _.-'                                   
 |`-._`-._    `-.__.-'    _.-'_.-'|                                  
 |    `-._`-._        _.-'_.-'    |           http://redis.io 

Container list shows two containers after creating one:

sudo ocic ctr list
ID: 629ef4181d4ceb0dcb54198fad1dba2f4dbbc5827ab0f91c332a5be1512d3861
Pod: ca28917626e92b123e6ca09180b5f13feea012da925841584b0b255b55b4056b
Status: CREATED
Created: 2016-10-12 14:01:45 -0500 CDT
ID: 8d33f6c911e54f6e8091b67b8d62577c725b0b7b467f421a5b648c796da0717d
Pod: ca28917626e92b123e6ca09180b5f13feea012da925841584b0b255b55b4056b
Status: RUNNING
Created: 2016-10-12 13:57:42 -0500 CDT

Can get status for the first container...:

sudo ocic ctr status --id 629ef4181d4ceb0dcb54198fad1dba2f4dbbc5827ab0f91c332a5be1512d3861
ID: 629ef4181d4ceb0dcb54198fad1dba2f4dbbc5827ab0f91c332a5be1512d3861
Status: RUNNING
Created: 2016-10-12 14:01:45 -0500 CDT
Started: 2016-10-12 14:03:51 -0500 CDT

Fatal error: can't get status for the second container:

sudo ocic ctr status --id 8d33f6c911e54f6e8091b67b8d62577c725b0b7b467f421a5b648c796da0717d
FATA[0000] Getting the status of the container failed: rpc error: code = 2 desc = container with ID starting with 8d33f6c911e54f6e8091b67b8d62577c725b0b7b467f421a5b648c796da0717d not found: ID does not exist 

runc list

sudo runc list
ID                                          PID         STATUS      BUNDLE                                                                                      CREATED
default-podsandbox1-0-infra                 8605        running     /var/lib/ocid/sandboxes/ca28917626e92b123e6ca09180b5f13feea012da925841584b0b255b55b4056b    2016-10-12T18:57:42.943754326Z
default-podsandbox1-0-podsandbox1-redis-0   8724        running     /var/lib/ocid/containers/629ef4181d4ceb0dcb54198fad1dba2f4dbbc5827ab0f91c332a5be1512d3861   2016-10-12T19:01:45.51429181Z

runc state:

sudo runc state default-podsandbox1-0-infra
{
  "ociVersion": "1.0.0-rc1-dev",
  "id": "default-podsandbox1-0-infra",
  "pid": 8605,
  "bundlePath": "/var/lib/ocid/sandboxes/ca28917626e92b123e6ca09180b5f13feea012da925841584b0b255b55b4056b",
  "rootfsPath": "/var/lib/ocid/graph/vfs/pause/rootfs",
  "status": "running",
  "created": "2016-10-12T18:57:42.943754326Z",
  "annotations": {
    "ocid/annotations": "{\"owner\":\"hmeng\"}",
    "ocid/container_id": "8d33f6c911e54f6e8091b67b8d62577c725b0b7b467f421a5b648c796da0717d",
    "ocid/container_name": "default-podsandbox1-0-infra",
    "ocid/labels": "{\"group\":\"test\"}",
    "ocid/log_path": ".",
    "ocid/name": "default-podsandbox1-0",
    "owner": "hmeng"
  }
}

sudo runc state default-podsandbox1-0-podsandbox1-redis-0
{
  "ociVersion": "1.0.0-rc1-dev",
  "id": "default-podsandbox1-0-podsandbox1-redis-0",
  "pid": 8724,
  "bundlePath": "/var/lib/ocid/containers/629ef4181d4ceb0dcb54198fad1dba2f4dbbc5827ab0f91c332a5be1512d3861",
  "rootfsPath": "/var/lib/ocid/containers/629ef4181d4ceb0dcb54198fad1dba2f4dbbc5827ab0f91c332a5be1512d3861/rootfs",
  "status": "running",
  "created": "2016-10-12T19:01:45.51429181Z",
  "annotations": {
    "ocid/labels": "{\"tier\":\"backend\"}",
    "ocid/log_path": "container.log",
    "ocid/name": "default-podsandbox1-0-podsandbox1-redis-0",
    "ocid/sandbox_id": "ca28917626e92b123e6ca09180b5f13feea012da925841584b0b255b55b4056b",
    "ocid/tty": "false",
    "pod": "podsandbox1"
  }
}

ocic pod remove does not remove the container's in-memory state

[root@gideon cri-o]# ocic pod list
ID: 9112c2bac547df317c99967de8b5f63b8aafb2158ad9b640e6163eae42e9a439
Status: READY
Created: 2016-09-29 17:32:31 -0600 MDT
[root@gideon cri-o]# ocic pod stop --id=9112
9112
[root@gideon cri-o]# ocic pod list
ID: 9112c2bac547df317c99967de8b5f63b8aafb2158ad9b640e6163eae42e9a439
Status: NOTREADY
Created: 2016-09-29 17:32:31 -0600 MDT
[root@gideon cri-o]# ocic pod remove --id=9112
9112
[root@gideon cri-o]# ocic pod list
[root@gideon cri-o]# ocic ctr list
FATA[0000] listing containers failed: rpc error: code = 2 desc = error getting container state for podsandbox1-redis: exit status 1: []

Recovery from out of sync sandboxes is not possible without removing files manually

Recovery from errors

Information about created pods is stored in /var/lib/ocid/sandboxes.
Currently, this information can get out of sync, causing errors such as:

$ ocic pod list
FATA[0000] listing pod sandboxes failed: rpc error: code = 2 desc = error
getting container state for default-foo-0-infra: exit status 1: []

While we are working on resolving these issues you may need to remove sandboxes
manually:

$ cd /var/lib/ocid/sandboxes
$ ls
e00dea37fb97506c8de456fcd3b7d238e4176107ab84043e32319ee00ecb3e74
$ rm -drf e00dea37fb97506c8de456fcd3b7d238e4176107ab84043e32319ee00ecb3e74

Then restart your cri-o server.

Example of reproduction scenario:

  1. Create pod: ocic pod create --config test/testdata/sandbox_config.json
  2. reboot
  3. ocic pod list fails with above error.

Support restart

We should be able to support restarting ocid by scanning the pod/container states, reading their config.json files, and then updating the status of each container using the UpdateStatus API.

Is the pause container necessary?

Currently we set up the pause container as k8s already does with Docker. However, it should be noted that (to my understanding) the only purpose of the pause container is to allow k8s to mess around with the network setup of containers; since we're using CNI natively, it should be possible to drop this requirement.

Or is the pause container a requirement because we need to have some process inside the container as the stem cell for the pod, so we can attach additional containers to the pod? If so, we have to make sure that we do not automatically download the pause container from the internet (at SUSE we've had to dive deep into k8s to make sure that we can package all of the components explicitly). From my reading of the code, we don't do that at the moment though.

Enable cross compilation for ocic tool

The companion tool developed for testing ocid is generally useful for any CRI-based runtime, so it would be useful to provide cross-compiled versions for other platforms.

It seems the gimme tool used by Travis CI to manage the Go toolchain can handle cross compilation quite easily. However, it may be easiest to split the ocic tool into its own repository; otherwise, I suppose we'll need to detect that we're cross compiling in our Makefile and avoid trying to build the daemon.
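A sketch of what that Makefile detection could look like (target and package paths are assumptions), building only ocic when GOOS differs from the host:

```makefile
# Sketch: skip the daemon when cross compiling (paths are assumptions).
GOOS      ?= $(shell go env GOHOSTOS)
HOST_GOOS := $(shell go env GOHOSTOS)

.PHONY: binaries
binaries: ocic
ifeq ($(GOOS),$(HOST_GOOS))
binaries: ocid
endif

ocic:
	GOOS=$(GOOS) go build -o ocic ./cmd/client

ocid:
	go build -o ocid ./cmd/server
```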

The problem of `make install` when the docs are not built

There is still a problem that originates from #141.
Although #141 was almost entirely reverted, the problem still exists:

make clean   # remove the docs/*.[58]
make install # failed, stderr:
             # install -m 644  /usr/share/man/man8
             # install: missing destination file operand after ‘/usr/share/man/man8’
make install # success. (the above make install provided the docs/*.[58])

The reason is that $(wildcard docs/*.5) is evaluated when the Makefile is parsed, not when the install recipe is executed.

We should use $(wildcard docs/*.5.md) instead and derive docs/%.5 from it.
Maybe we should also introduce a make install-docs target.
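One possible shape for that fix, as a sketch (variable and target names are assumptions; it relies on the existing pattern rules that generate docs/%.5 and docs/%.8 from the Markdown sources):

```makefile
# Derive the man page list from the Markdown sources, which always exist,
# instead of from the generated pages, which may not have been built yet:
MANPAGES_MD := $(wildcard docs/*.5.md docs/*.8.md)
MANPAGES    := $(MANPAGES_MD:%.md=%)

.PHONY: install-docs
install-docs: $(MANPAGES)
	install -d -m 755 $(DESTDIR)/usr/share/man/man5 $(DESTDIR)/usr/share/man/man8
	install -m 644 $(filter %.5,$(MANPAGES)) $(DESTDIR)/usr/share/man/man5
	install -m 644 $(filter %.8,$(MANPAGES)) $(DESTDIR)/usr/share/man/man8

install: install-docs
```

Because the wildcard now matches the always-present .md files, the first `make install` after a `make clean` triggers the build of the pages instead of passing an empty file list to install.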

container_linux.go:247: starting container process caused "exec: \"/pause\": stat /pause: no such file or directory"

Just run `make binaries` and then start ocid with explicit paths to the conmon/pause binaries. /cc @cyphar

$ sudo ./ocid --pause /home/amurdaca/src/github.com/kubernetes-incubator/cri-o/pause/pause --runtime /usr/local/sbin/runc --debug --conmon $PWD/conmon/conmon & OCID_PID=$!
[1] 22136
$ INFO[0000] Starting reaper                              
E1006 18:10:39.956877   22145 ocicni.go:136] error updating cni config: No networks found in /etc/cni/net.d
DEBU[0000] sandboxes: map[]                             
DEBU[0000] containers: &{map[] {{0 0} 0 0 0 0}}         

$ ‹master› sudo ./ocic pod create --name pod1 --config test/testdata/sandbox_config.json
container_linux.go:247: starting container process caused "exec: \"/pause\": stat /pause: no such file or directory"
conmon: Failed to create container
FATA[0000] Creating the pod sandbox failed: rpc error: code = 2 desc = reading pid from init pipe: EOF 
INFO[0009] Signal received: child exited                
INFO[0009] Reaped process with pid 22195

Add Docker-based build option

Because compiling Go code is ... fun, it would be nice to provide a Dockerfile-based alternative to the standard make all build (similar to how runC allows you to build the project inside a Dockerfile). It would also be useful because it would allow packagers to ensure that they handle all of the build requirements for distributions.
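A minimal sketch of such a build container (the base image and package list are assumptions; the actual build dependencies may differ):

```dockerfile
# Hypothetical build container, mirroring the runC approach.
FROM golang:1.7
RUN apt-get update && apt-get install -y \
    libglib2.0-dev libseccomp-dev libgpgme11-dev
WORKDIR /go/src/github.com/kubernetes-incubator/cri-o
COPY . .
RUN make binaries
```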

repo description is wrong

A silly one, but the repo description says:

Open Container Image Format integration to Kubernetes

while I think it should read:

Open Container Initiative integration to Kubernetes

Add pod `--name` to ocic pod list?

What's the purpose of --name on the ocic create command?

I get that the id is automatically generated. But I figured that if we're going to have a requested name, it should at least show up in list & status.

We are not currently passing metadata back from the list command, nor storing it.

type PodSandbox struct {
    // The id of the PodSandbox
    Id *string `protobuf:"bytes,1,opt,name=id" json:"id,omitempty"`
    // Metadata of the sandbox
    Metadata *PodSandboxMetadata `protobuf:"bytes,2,opt,name=metadata" json:"metadata,omitempty"`
    // The state of the PodSandbox
    State *PodSandBoxState `protobuf:"varint,3,opt,name=state,enum=runtime.PodSandBoxState" json:"state,omitempty"`
    // Creation timestamps of the sandbox
    CreatedAt *int64 `protobuf:"varint,4,opt,name=created_at,json=createdAt" json:"created_at,omitempty"`
    // The labels of the PodSandbox
    Labels           map[string]string `protobuf:"bytes,5,rep,name=labels" json:"labels,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
    XXX_unrecognized []byte            `json:"-"`
}

// PodSandboxStatus contains the status of the PodSandbox.
type PodSandboxStatus struct {
    // ID of the sandbox.
    Id *string `protobuf:"bytes,1,opt,name=id" json:"id,omitempty"`
    // Metadata of the sandbox.
    Metadata *PodSandboxMetadata `protobuf:"bytes,2,opt,name=metadata" json:"metadata,omitempty"`
    // State of the sandbox.
    State *PodSandBoxState `protobuf:"varint,3,opt,name=state,enum=runtime.PodSandBoxState" json:"state,omitempty"`
    // Creation timestamp of the sandbox
    CreatedAt *int64 `protobuf:"varint,4,opt,name=created_at,json=createdAt" json:"created_at,omitempty"`
    // Network contains network status if network is handled by the runtime.
    Network *PodSandboxNetworkStatus `protobuf:"bytes,5,opt,name=network" json:"network,omitempty"`
    // Linux specific status to a pod sandbox.
    Linux *LinuxPodSandboxStatus `protobuf:"bytes,6,opt,name=linux" json:"linux,omitempty"`
    // Labels are key value pairs that may be used to scope and select individual resources.
    Labels map[string]string `protobuf:"bytes,7,rep,name=labels" json:"labels,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
    // Annotations is an unstructured key value map that may be set by external
    // tools to store and retrieve arbitrary metadata.
    Annotations      map[string]string `protobuf:"bytes,8,rep,name=annotations" json:"annotations,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
    XXX_unrecognized []byte            `json:"-"`
}

server/sandbox.go:

type sandbox struct {
    id          string
    name        string
    logDir      string
    labels      map[string]string
    annotations map[string]string
    containers  oci.Store
}

Will rkt be supported in the future?

Hi,

Since cri-o is an OCI-based implementation of the k8s CRI, and rkt doesn't seem to be OCI compliant at present, do we plan to support rkt in the future? If not, what do the CoreOS folks think of this project?

Personally I like rkt very much lol.

And how about hyper/runv?

Btw, we would not integrate the whole docker daemon into this project, right?

I've noticed many kubernetes folks talking here - are they interested in this project as well?

issue running tests on rhel

Tests are failing on RHEL fairly often because of "Device or resource busy" errors coming from devmapper and docker rm.

cannot start a created container using the ID

Maybe I'm misunderstanding how the interface is meant to work, because I'm expecting it to work roughly like Docker/runC:

 % sudo ./ocic pod create --name sandbox0 --config test/testdata/sandbox_config.json
fe6a1bce4f27392b38f7c9066112a82ec39e4220e7ac6635873113168a725e2c
% sudo ./ocic container create --pod fe6a1bce4f27392b38f7c9066112a82ec39e4220e7ac6635873113168a725e2c --config test/testdata/container_config.json
aecc2d844d19f127c0a026e2325a6357e0a59a7c4b60130e63d4e4eab9ab3b0c
% sudo ./ocic container list
ID: default-sandbox0-0-container1-0
Pod: fe6a1bce4f27392b38f7c9066112a82ec39e4220e7ac6635873113168a725e2c
Status: CREATED
Created: 2016-10-07 15:01:02 +1100 AEDT
ID: default-sandbox0-0-infra
Pod: fe6a1bce4f27392b38f7c9066112a82ec39e4220e7ac6635873113168a725e2c
Status: RUNNING
Created: 2016-10-07 15:00:17 +1100 AEDT
% sudo ./ocic container start --id default-sandbox0-0-container1-0
FATA[0000] Starting the container failed: rpc error: code = 2 desc = container with ID starting with default-sandbox0-0-container1-0 not found: ID does not exist

Am I using the wrong ID or something? Or is this an actual bug?

containers states should be saved and restored

Right now if I create a pod sandbox named "test", stop ocid, restart ocid, and try to start a sandbox again with the same name, ocid will of course tell me "test" already exists. This is fine, but "./ocic pod remove --id test" tells me that "test" isn't in the server sandboxes. That's because sandboxes are checked against filesystem paths on start, while on removal they're checked against the "server.state.sandboxes" field, which is in memory and which is not restored on start.

Add integration tests

We need some basic integration tests that we can run per PR. bats could be used if there are no suggestions for anything better.
cc: @runcom

error updating status for container, when process was restarted

Earlier I stopped the ocid process without stopping all containers or pods, and then restarted it:

$ ocid --runtime $(which runc)
INFO[0000] Starting reaper                              
E1017 05:59:13.930932   15542 ocicni.go:136] error updating cni config: No networks found in /etc/cni/net.d
WARN[0000] error updating status for container c4b2a99b80de8757a3a131921084af0ea2c4c83835a6e2093f39050864f75d21: error getting container state for default-podsandbox1-0-infra: exit status 1: [] 
INFO[0000] Signal received: child exited                
INFO[0003] Signal received: child exited       
[SNIP]

and now when I try to get pod or container info, it fatals out:

$ sudo ocic container list
FATA[0000] listing containers failed: rpc error: code = 2 desc = error getting container state for default-podsandbox1-0-infra: exit status 1: [] 

$ sudo ocic pod list                                                                                                                                                                             
FATA[0000] listing pod sandboxes failed: rpc error: code = 2 desc = error getting container state for default-podsandbox1-0-infra: exit status 1: [] 

Removing that container also fails:

$ sudo ocic container remove --id c4b2a99b80de8757a3a131921084af0ea2c4c83835a6e2093f39050864f75d21
FATA[0000] Removing the container failed: rpc error: code = 2 desc = failed to update container state: error getting container state for default-podsandbox1-0-infra: exit status 1: [] 

versions:

$ ocic -v
ocic version 0.0.1

$ runc -v
runc version 1.0.0-rc2
commit: 3abefdff18bc201199c5dfd0e91e941cb4c61376
spec: 1.0.0-rc2-dev

$ uname -a
Linux fedora 4.5.5-300.fc24.x86_64 #1 SMP Thu May 19 13:05:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

Cannot start sandbox pod

Attempting to create a pod with the following command, which is (almost) identical to the one in the README:

sudo ./ocic pod create --config test/testdata/sandbox_config.json

produces the following error on the client side:

Creating the pod sandbox failed: rpc error: code = 2 desc = reading pid from init pipe: EOF

I believe the error originates when the Kubernetes API is called at cmd/client/sandbox.go:190, but I wasn't able to trace it past there.

Integrating storage

Taken from mrunalp/ocid#22, part of mrunalp/ocid#6.

This depends on containers/image#63, but we could actually start playing with it now. It also depends on containers/image#41 (with respect to having containers/image receive an auth configuration to pull from registries), though not strictly; I'll work on that asap.

The way we get the image from the CRI API hardcodes docker as the reference and the docker registry as the transport; this is being worked on in kubernetes/kubernetes#7203.

Pseudo code re-arranged:

import (
    "github.com/containers/image/copy"
    "github.com/containers/storage"
)

// imageName, as it comes from the k8s API, contains NAME+TAGORDIGEST
func PullImage(imageName string) error {
    if storage.IsAlreadyPulled(imageName) {
        return nil
    }

    src := ...  // create a new ImageSource, probably from docker
    dest := ... // create a new OCIStorageDestination

    err := copy.Image(ctx, policyCtx, src, dest, options)
    if err != nil {
        return err
    }

    return nil
}

// then the image stored above is used to create a new container
func CreateContainerRootfs(id string) error {
    imageMetadata := storage.GetMetadata(imageIDorName)
    storage.CreateContainer(imageID=..., optionalName=..., optionalID=..., metadata=...)
    // [ ... ]
}

@mrunalp @mtrmac @nalind PTAL

issue running cri-o on rawhide

On a stock Rawhide install I hit some issues that I had to resolve before I could run cri-o. One of them was:

# container_linux.go:240: creating new parent process caused "container_linux.go:1245: running lstat on namespace path \"/proc/0/ns/ipc\" caused \"lstat /proc/0/ns/ipc: no such file or directory\""
# conmon: Failed to create container

I don't remember how this got fixed - @mrunalp @cyphar PTAL

conmon: Handle logs

Today we are moving bytes back and forth between the std streams and the master pty. The stdout/stderr streams should be replaced by log files, and the std stream code should move to the client on attach.

move distro-specific files to contrib/

There's an increase of distribution-specific and other such files in the root of the project. We need to clean stuff up and put it in contrib/systemd, contrib/rpm and so on.
