
rkt's Introduction

⚠️ End of project ⚠️

This project has ended, and all development/maintenance activities have halted.

As it is free software, people are free and welcome to fork and develop the codebase on their own. However, to avoid any confusion, the original repository is archived and we recommend any further fork/development to proceed with an explicit rename and rebranding first.

We encourage all interested parties to mirror any relevant bits as we can't actively guarantee their existence in the future.


rkt - the pod-native container engine

rkt (pronounced like a "rocket") is a CLI for running application containers on Linux. rkt is designed to be secure, composable, and standards-based.

Project status

The rkt v1.x series provides a stable command-line user interface and stable on-disk data structures for external development. Any major changes to these areas will be clearly communicated, and a formal deprecation process will be conducted for any retired features.

Check out the roadmap for more details on the future of rkt.

Trying out rkt

To get started quickly using rkt for the first time, start with the "trying out rkt" document. Also check rkt support on your Linux distribution. For an end-to-end example of building an application from scratch and running it with rkt, check out the getting started guide.

Getting help with rkt

There are a number of different avenues for seeking help and communicating with the rkt community:

  • For bugs and feature requests (including documentation!), file an issue
  • For general discussion about both using and developing rkt, join the rkt-dev mailing list
  • For real-time discussion, join us on IRC: #rkt-dev on freenode.org
  • For more details on rkt development plans, check out the GitHub milestones

Most discussion about rkt development happens on GitHub via issues and pull requests. The rkt developers also host a semi-regular community sync meeting open to the public. This sync usually features demos, updates on the roadmap, and time for anyone from the community to ask questions of the developers or share user stories with others. For more details, including how to join and recordings of previous syncs, see the sync doc on Google Docs.

Contributing to rkt

rkt is an open source project and contributions are gladly welcomed! See the Hacking Guide for more information on how to build and work on rkt. See CONTRIBUTING for details on submitting patches and the contribution workflow.

Licensing

Unless otherwise noted, all code in the rkt repository is licensed under the Apache 2.0 license. Some portions of the codebase are derived from other projects under different licenses; the appropriate information can be found in the header of those source files, as applicable.

Security disclosure

If you suspect you have found a security vulnerability in rkt, please do not file a GitHub issue, but instead email [email protected] with the full details, including steps to reproduce the issue. CoreOS is currently the primary sponsor of rkt development, and all reports are thoroughly investigated by CoreOS engineers. For more information, see the CoreOS security disclosure page.

Known issues

Check the troubleshooting document.

Related Links

Integrations and Production Users

rkt's People

Contributors

0xax, alban, blixtra, euank, eyakubovich, fabiokung, glevand, iaguis, jellonek, jonboulle, jzelinskie, kelseyhightower, krnowak, lucab, matthaias, monstermunchkin, philips, polvi, poonai, ppalucki, robszumski, s-urbaniak, sgotti, squall0gd, squeed, steveej, tesujimath, tmrts, vcaputo, yifan-gu


rkt's Issues

rkt: lifecycle management

We need to define how rkt knows that all of the processes running in a given stage1 have been destroyed and that the root filesystem can be cleaned up.

rename afs to aci

App Container Filesystem = afs
App Container (fileset) Image = aci

Do a global %s/afs/aci/g

Question: Metadata URL with EC2 instances

How does the metadata service work with EC2? EC2 offers a metadata service on the same URI. I'd probably still want to access the EC2 metadata service from a container.

wanted: xz and bzip2 compressors in Go

Currently ACIs can be compressed with xz or bzip2, but actool build can only do gzip compression because Go libraries don't exist for xz or bzip2.

This feature would require implementing these compression formats in pure Go. Shelling out or linking to a C library won't fix this bug.
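As of the issue's filing, the standard library only covers part of this: compress/gzip can both read and write, while compress/bzip2 is decompress-only and there is no xz package at all. A minimal sketch of the gzip path that actool can already take (the function name is illustrative, not actool's API):

```go
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
)

// gzipBytes compresses data with the stdlib gzip writer -- the only
// format the standard library can write. compress/bzip2 only offers a
// Reader, and xz has no stdlib package at all.
func gzipBytes(data []byte) ([]byte, error) {
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	if _, err := zw.Write(data); err != nil {
		return nil, err
	}
	// Close flushes the gzip footer; without it the stream is truncated.
	if err := zw.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

func main() {
	out, err := gzipBytes([]byte("hello aci"))
	if err != nil {
		panic(err)
	}
	// The first two bytes are the gzip magic number.
	fmt.Printf("%x %x\n", out[0], out[1])
}
```

Adding xz or bzip2 writers would mean implementing the formats themselves behind a similar io.WriteCloser interface.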

rkt: run by app name

Use the store to find all of the apps that are needed for the run and set up the stage2 filesystems automatically.

cannot build on OS X

rkt $ ./build 
Building actool...
Building ACE validator...
Building init (stage1)...
Packaging init (stage1)...
usage: mktemp [-d] [-q] [-t prefix] [-u] template ...
       mktemp [-d] [-q] [-u] -t prefix 

On Linux systems mktemp does not require a template, but OS X does. Should we even bother to fix this?

rkt: app datastore

We need a datastore that can hold on to the app name secondary indexes for an image and deploy a given image to a new directory on request.

rkt: create trivial networking solution

To get something going, rkt should set up a container on a private network plugged into a Linux bridge. rkt should allocate a bridge if one is not already present (rkt0), assign it a well-known IP network (e.g. 10.111.0.0/16), and give out random IP addresses to individual containers.

This requires putting systemd-networkd into the stage1 rootfs and generating a .network file.
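A sketch of what such a generated .network file might look like inside the stage1 rootfs, matching the container side of the veth pair that systemd-nspawn names host0 (the file name and addresses here are illustrative assumptions, not rkt's actual output):

```ini
# 50-rkt.network (name illustrative) -- dropped into the stage1 rootfs
[Match]
Name=host0

[Network]
Address=10.111.0.2/16
Gateway=10.111.0.1
```

rkt would pick a free address from the bridge's range for each container and render a file like this before invoking systemd-nspawn.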

discovery: walk up the tree

For example, if the user has example.com/project/subproject, we first try example.com/project/subproject; if we don't find a meta tag there, we try example.com/project, then example.com.
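The walk-up can be sketched as a function that yields candidate names from most to least specific (a sketch of the proposal, not spec'd behaviour; the function name is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// discoveryCandidates returns the names to probe for an ac-discovery
// meta tag, starting with the full name and dropping one path segment
// at a time until only the bare domain remains.
func discoveryCandidates(name string) []string {
	parts := strings.Split(name, "/")
	var out []string
	for i := len(parts); i >= 1; i-- {
		out = append(out, strings.Join(parts[:i], "/"))
	}
	return out
}

func main() {
	for _, c := range discoveryCandidates("example.com/project/subproject") {
		fmt.Println(c)
	}
}
```

This prints example.com/project/subproject, then example.com/project, then example.com, matching the walk-up order described above.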

rkt: integrate with the metadata service

  • Make the metadata service socket-activated
  • Teach rkt how to register a UUID with the service (perhaps not attempting this if the /var/run/rktmetaadmin.sock unix socket doesn't exist, so we can fall back)
  • ????

stage1: build and host somewhere

Build and host a stage1 on a public HTTP server so we can share it and not build it every time. We can use the storage at developer.storage.core-os.net for now.

rkt: initial run working

Take an already mounted and setup stage1 and stage2 filesystem and run it. The basic steps involve:

  • Generating unit files from the container manifest
  • Placing unit files into the stage1 filesystem
  • Exec'ing systemd-nspawn

This should all work as a service file and be cleaned up.
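A rough idea of the kind of unit such a generator might emit for one app in the manifest (the app name, paths, and options here are illustrative assumptions, not rkt's actual output):

```ini
# example-app.service (illustrative) -- generated from the container
# manifest and placed into the stage1 filesystem
[Unit]
Description=example-app from container manifest

[Service]
ExecStart=/opt/stage2/example-app/rootfs/bin/run
Restart=no
```

One unit per app in the manifest would be rendered this way before systemd-nspawn is exec'd.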

store: use group permissions

It would be nice to run rkt fetch as a non-privileged user and then launch the container with zero outbound connections. We need to implement something like git's core.sharedRepository setgid behaviour.

Metadata service should provide the mapping of network ports

One key requirement for a container to publish or register itself with another service outside or independent of the container architecture is providing its public IP address(es) and port number(s).

I don't see any provision for those mappings to be made available via the metadata service.

For instance, Amazon provides /network/interfaces/macs/mac/public-ipv4s as a metadata resource (aws docs) for their public facing elastic-ip addresses.

metadata: networking prototype

Prototype and document how to set up a container with systemd-nspawn so that the metadata service can talk to a container and uniquely identify it by its incoming IP address.

Add a Dockerfile for building rocket

The following command works:

docker run -v $src:/opt/rocket -i -t google/golang /bin/bash -c "apt-get install -y cpio squashfs-tools realpath && cd /opt/rocket && go get github.com/jteeuwen/go-bindata/... && ./build"

rkt: figure out general networking solution

We need to figure out the production solution for rkt networking:

  • What networking options we will support. E.g. linux bridge, veth, ovs, macvlan, ipvlan, host?
  • How do we assign IP addresses (ourselves, host systemd-dhcpd, network DHCP server)?
  • How do we integrate with flannel?
  • How pluggable is the solution (Docker has no shortage of requests for pluggable networking solutions)?

Spec first read feedback

Collected notes as I read through. Sorry for the length. If any of these become worth discussing, we can fork to different topics.

This changes the established naming from what Docker calls a container to what Kubernetes calls a pod. I think that changing the meaning of "container" at this point might be detrimental to the overall comprehension of the system. I would propose to keep container to mean what you call app-container and define a different word for a set of containers.

Example use case talks about "puts them into its local on-disk cache" and "extracts two copies". This sets off immediate alarm bells for me - disk IO is the single most contended resource, in our experience. To be successful, this spec really must be implementable with a minimum of disk IO. For example, I should be able to mount a pre-built cache of images and satisfy container run requests. That's not to say that disk IO can not satisfy the spec, but it must not be a requirement.

SIGTERM should be just one kind of termination signal

Files in an image "must maintain all of their original properties" - can this include capabilities?

One thing Docker does well is differentiate WHAT to run from HOW to run it. Does this address that idea? There's some overlap in lifecycle stuff, but some other things like resources are clearly dependent on how a container will be used. How about command line flags? Being able to take a pre-built container, such as ubuntu, and run things in it without creating and pushing a new container is powerful.

That rkt is not a daemon is very similar to what we were pushing with lmctfy. However, there are things that (currently) are hard to do without any daemon - an example we run up against is prioritized OOM handling. This spec should carefully consider the strata of the overall system and how some things cross between them.

Volumes are under-specified. Can I mount them at different places in each app? Can I not mount them in some apps?

Network: Does each APP get a network or each container? I'm a bit unclear on the naming and distinction between container and app. Does rkt provide "out of the box" network at all? Docker's got this part pretty well, even if it is terribly slow.

AC_APP_NAME - what does "the entrypoint that this process was defined from" mean?

Isolators: Who do I have to beg to NOT reinvent this, and instead use something derived from LMCTFY? We have captured YEARS of development in LMCTFY's concepts. For example, exposing CPU shares is a mess. If there are things about LMCTFY's structures that don't work, let's iron those out instead.

You say "if the ACE is looking for example.com/reduce-worker-1.0.0 it will request: https://example.com/reduce-worker?ac-discovery=1". What principle led to some piece of software knowing where to split that string?

You make some remark about /etc/hosts being assumed present and parsed by libc. Does this mean you won't provide /etc/hosts, /etc/resolv.conf, etc? That seems like a bad idea.

You say config-drives "make assumptions about filesystems which are not appropriate for all environments" - can you explain? Why is it not sufficient to say that config can also be found in a volume, if the user so prefers? The host environment should be able to provide that.

"name": "example.com/reduce-worker-1.0.0" - why is version just jammed in there? Or are you trying to say "version is opaque" and if you want it, you have to embed it in the name?

Can we define user and group as name strings, not as numerics (and why are they string numbers?)

Why do you need a private network flag? This hasn't really answered how networking will work. This has been a huge PITA with Docker because EVERYONE has a different idea of what they want in networks. You can NOT support flags for all of them, so I'd argue to support none of them and instead define that as a plugin, and ship some examples of network plugins but leave it open.

app-container: add a library to convert docker image to ACI

It would be interesting to make the app-container spec interoperate with docker registries. After implementing fileset dependencies (#140) we can write a tool that converts layers into filesets. The package should have an entry point where the user can start with a docker registry URL and tag (e.g. quay.io/coreos/etcd:v0.4.6) and then:

  • Talk to the registry API and fetch the layers
  • Download the layers and convert them to filesets (How do we label these filesets? name={url},layer={layerid})
  • For the root layer use the version as the final tag
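The entry point would first need to split a reference like quay.io/coreos/etcd:v0.4.6 into its repository and tag; a minimal sketch (the function name is illustrative, and digests are ignored):

```go
package main

import (
	"fmt"
	"strings"
)

// splitImageRef splits a registry reference such as
// "quay.io/coreos/etcd:v0.4.6" into repository and tag. A missing tag
// defaults to "latest". The "/"-check keeps a registry port like
// "localhost:5000/etcd" from being mistaken for a tag separator.
func splitImageRef(ref string) (repo, tag string) {
	if i := strings.LastIndex(ref, ":"); i != -1 && !strings.Contains(ref[i+1:], "/") {
		return ref[:i], ref[i+1:]
	}
	return ref, "latest"
}

func main() {
	repo, tag := splitImageRef("quay.io/coreos/etcd:v0.4.6")
	fmt.Println(repo, tag)
}
```

From there the tool would hit the registry API for that repository, pull the layer list for the tag, and convert each layer to a fileset as outlined above.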

Ownership, Governance, TM, etc

I hate to be "that guy" but I would like to know what the plan is here in terms of ownership, governance, and the trademark, although I'm skeptical you could actually trademark "rocket" :P

The post announcing this project cited some divergences in philosophy from Docker (the project and the company). It would be good to know if this project intends to be a pure community effort or if this is just CoreOS disagreeing with Docker.

It would be great to see a contribution policy that laid out how people get involved in the project and contribute to decision making and how contentious issues like the ones that have coalesced in Docker that lead to this project would be resolved. If not it would seem like people are just trading Docker (the company) for CoreOS (the company).

stage1: build a static compiled version

We have two annoying options to get around stage1's dependence on the host:

  1. Chroot into the stage1 which makes doing bind mounts more annoying

  2. Statically compile the stage1 so it doesn't depend on the host. Also patch out the stupid logic to check /run/systemd/system and talk to dbus to get the machine-id.

/cc @eyakubovich Where is that machine-id thing you were talking about? Can you add some more info on this?

actool?

Hello,

I got to

Validate the application manifest

$ actool validate manifest.json
manifest.json: valid AppManifest

in the getting started guide... and I got:

No command 'actool' found, did you mean:
 Command 'atool' from package 'atool' (universe)
 Command 'autool' from package 'nas-bin' (universe)
actool: command not found

Feedback and a use case

I've read the spec. And I like Rocket! Thank you! I already see how it would solve a couple of problems I had with Docker.

E.g. I wasted quite some time trying to set up a private Docker Registry when it was still new. It was nearly impossible to do so a year ago. In contrast, your idea of a registry is so simple and transparent.

Another issue I had with Docker is its all-ports-closed-by-default and always-private-network philosophy. It made it much more difficult to experiment, and made some things nearly impossible, e.g. SIP/RTP. In contrast, you specified the firewall to be optional.

(I don't know if these problems are solved in current Docker. I've stopped following it.)

By the way, if you are interested, here's what I used Docker for:

Notice how clumsy it was to configure RTP port range.

(I'll experiment with Rocket in a few days.)
