google / gvisor

Application Kernel for Containers

Home Page: https://gvisor.dev

License: Apache License 2.0

Python 0.10% Shell 0.32% Go 75.81% Assembly 0.83% C++ 18.43% C 0.43% Ruby 0.01% Dockerfile 0.05% Makefile 0.21% JavaScript 0.01% HTML 0.12% Starlark 3.63% Handlebars 0.01% SCSS 0.04% Cuda 0.02% Rust 0.01%
Topics: sandbox, containers, oci, docker, kubernetes, linux, kernel

gVisor's Introduction

gVisor


What is gVisor?

gVisor is an application kernel, written in Go, that implements a substantial portion of the Linux system surface. It includes an Open Container Initiative (OCI) runtime called runsc that provides an isolation boundary between the application and the host kernel. The runsc runtime integrates with Docker and Kubernetes, making it simple to run sandboxed containers.
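For example, once runsc is registered as a Docker runtime (see the daemon.json configurations quoted later on this page), running a container in the sandbox is a one-flag change:

docker run --runtime=runsc hello-world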

Why does gVisor exist?

Containers are not a sandbox. While containers have revolutionized how we develop, package, and deploy applications, using them to run untrusted or potentially malicious code without additional isolation is not a good idea. While using a single, shared kernel allows for efficiency and performance gains, it also means that container escape is possible with a single vulnerability.

gVisor is an application kernel for containers. It limits the host kernel surface accessible to the application while still giving the application access to all the features it expects. Unlike most kernels, gVisor does not assume or require a fixed set of physical resources; instead, it leverages existing host kernel functionality and runs as a normal process. In other words, gVisor implements Linux by way of Linux.

gVisor should not be confused with technologies and tools to harden containers against external threats, provide additional integrity checks, or limit the scope of access for a service. One should always be careful about what data is made available to a container.

Documentation

User documentation and technical architecture, including quick start guides, can be found at gvisor.dev.

Installing from source

gVisor builds on x86_64 and ARM64. Other architectures may become available in the future.

For the purposes of these instructions, bazel and the other build dependencies are wrapped in a build container. It is also possible to use bazel directly, or to run make help to list the standard targets.
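For instance, the direct invocation for the runsc target (the same //runsc:runsc label that appears in the build logs further down this page) would be:

bazel build //runsc:runsc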

Requirements

Make sure the build dependencies are installed. Because bazel and the other build tools are wrapped in a build container, Docker and GNU Make are sufficient for the steps below.

Building

Build and install the runsc binary:

mkdir -p bin
make copy TARGETS=runsc DESTINATION=bin/
sudo cp ./bin/runsc /usr/local/bin
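As a quick smoke test, one minimal way to wire the installed binary into Docker is sketched below. Note it overwrites any existing /etc/docker/daemon.json, so merge by hand if you already have one; compare the fuller configurations quoted in the issues later on this page.

sudo tee /etc/docker/daemon.json <<'EOF'
{
    "runtimes": {
        "runsc": {
            "path": "/usr/local/bin/runsc"
        }
    }
}
EOF
sudo systemctl restart docker
docker run --rm --runtime=runsc hello-world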

Testing

To run standard test suites, you can use:

make unit-tests
make tests

To run specific tests, you can specify the target:

make test TARGETS="//runsc:version_test"

Using go get

This project uses bazel to build and manage dependencies. For convenience, a synthetic go branch that is compatible with standard go tooling is maintained.

For example, to build and install runsc directly from this branch:

echo "module runsc" > go.mod
GO111MODULE=on go get gvisor.dev/gvisor/runsc@go
CGO_ENABLED=0 GO111MODULE=on sudo -E go build -o /usr/local/bin/runsc gvisor.dev/gvisor/runsc

Subsequently, you can build and install the shim binary for containerd:

GO111MODULE=on sudo -E go build -o /usr/local/bin/containerd-shim-runsc-v1 gvisor.dev/gvisor/shim

Note that this branch is supported in a best effort capacity, and direct development on this branch is not supported. Development should occur on the master branch, which is then reflected into the go branch.

Community & Governance

See GOVERNANCE.md for project governance information.

The gvisor-users mailing list and gvisor-dev mailing list are good starting points for questions and discussion.

Security Policy

See SECURITY.md.

Contributing

See CONTRIBUTING.md.

gVisor's People

Contributors

amscanne, arthurpi, avagin, ayushr2, bgaff, etienneperot, eyalsoha, fvoznika, ghananigans, googleanivia, gvisor-bot, hbhasker, iangudger, ianlewis, iyermi, kevingc, kongoshuu, konstantin-s-bogom, manninglucas, milantracy, mrahatm, nixprime, nlacasse, nybidari, prattmic, sudo-sturbia, tamird, zeling, zhaozhongn, zkoopmans


gVisor's Issues

Feature request: runsc --version

We need some kind of version number, probably in the form of a git commit ID.
This would be great when reporting bugs back to the project.
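One common way to get such a version (a sketch, not necessarily what the project ended up doing; the main.version variable and the ./runsc package path here are illustrative) is to stamp the binary with the commit at build time:

# Assumes a package-level `var version string` in the main package.
go build -ldflags "-X main.version=$(git describe --always --dirty)" -o runsc ./runsc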

mysql cannot work on gVisor

When running docker with --runtime=runsc:

# groupadd mysql
groupadd: /etc/group.101: lock file already used
groupadd: cannot lock /etc/group; try again later.

So I cannot test MySQL on gVisor.

Failed to build in docker container centos:7

[root@c3f456871892 gvisor]# bazel build runsc
Analyzing: target //runsc:runsc (7 packages loaded)
INFO: Analysed target //runsc:runsc (171 packages loaded).
INFO: Found 1 target...
ERROR: /root/.cache/bazel/_bazel_root/8efa14feb181cf4b8ae1b0ddb1bac0e9/external/com_google_protobuf/BUILD:64:1: C++ compilation of rule '@com_google_protobuf//:protobuf_lite' failed (Exit 1)
gcc: error trying to exec 'cc1plus': execvp: No such file or directory
Target //runsc:runsc failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 79.678s, Critical Path: 0.09s
INFO: 7 processes: 6 local, 1 processwrapper-sandbox.
FAILED: Build did NOT complete successfully

collect2: fatal error: cannot find 'ld'

On Linux the build fails:

$ bazel build --verbose_failures runsc
INFO: Analysed target //runsc:runsc (0 packages loaded).
INFO: Found 1 target...
ERROR: /home/rw/homerw_old/work/gvisor/vdso/BUILD:8:1: Executing genrule //vdso:vdso failed (Exit 1): bash failed: error executing command
(cd /home/rw/.cache/bazel/_bazel_rw/5842f54b5499609ce9d6a4a0b7803cf7/execroot/main &&
exec env -
PATH=/opt/make/bin/:/opt/make/bin/:/home/rw/bin:/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/games:/usr/lib/mit/bin:/usr/lib/mit/sbin
/bin/bash -c 'source external/bazel_tools/tools/genrule/genrule-setup.sh; /usr/bin/gcc -I. -O2 -std=c++11 -fPIC -fuse-ld=gold -m64 -shared -nostdlib -Wl,-soname=linux-vdso.so.1 -Wl,--hash-style=sysv -Wl,--no-undefined -Wl,-Bsymbolic -Wl,-z,max-page-size=4096 -Wl,-z,common-page-size=4096 -Wl,-Tvdso/vdso.lds -o bazel-out/k8-fastbuild/genfiles/vdso/vdso.so vdso/vdso.cc vdso/vdso_time.cc && bazel-out/host/bin/vdso/check_vdso --check-data --vdso bazel-out/k8-fastbuild/genfiles/vdso/vdso.so ')

Use --sandbox_debug to see verbose messages from the sandbox
collect2: fatal error: cannot find 'ld'
compilation terminated.
Target //runsc:runsc failed to build
INFO: Elapsed time: 0.622s, Critical Path: 0.41s
FAILED: Build did NOT complete successfully


Rethinking how runsc is built

Currently, the entire build system depends on Bazel. Although Bazel has many advantages, it makes the project nearly impossible to build in certain environments. I have been working on creating a package for openSUSE since yesterday and tried different versions of Bazel to pre-fetch all dependencies into a cache, but all attempts failed. There seems to be no way to cache all dependencies in a separate directory and re-use them in a different build.

The core problem is that the build environments of (nearly) all distributions do not allow fetching dependencies over the network, which is an integral part of how Bazel works. Maybe we can find a way to satisfy both goals: using Bazel and making gVisor more distribution-friendly?

zookeeper start failed

I started the zookeeper container with the official Docker Hub image, but it keeps restarting again and again.
root@node23:/test# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5348d914acd1 zookeeper "/docker-entrypoint.…" 2 minutes ago Restarting (127) 19 seconds ago some-zookeeper

docker logs shows:
root@node23:/test# docker logs 5348d914acd1
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Error loading shared library libjli.so: No such file or directory (needed by /usr/lib/jvm/java-1.8-openjdk/jre/bin/java)
Error relocating /usr/lib/jvm/java-1.8-openjdk/jre/bin/java: JLI_Launch: symbol not found

docker version: 17.12.1-ce
OS version: Ubuntu 16.04
runtime: runsc

Can someone give me a hand? thanks.

Google calendar

Before filing an issue, please consult our FAQ: https://github.com/google/gvisor#faq--known-issues
Also check that the issue hasn't been reported before.

If you have a question, please email [email protected] rather than filing a bug.

If you believe you've found a security issue, please email [email protected] rather than filing a bug.

If this is your first time compiling or running gVisor, please make sure that your system meets the minimum requirements: https://github.com/google/gvisor#requirements

For all other issues, please attach debug logs. To get debug logs, follow the instructions here: https://github.com/google/gvisor#debugging

Other useful information to include is:

  • docker version or docker info if more relevant
  • uname -a
  • git describe
  • Full docker command you ran
  • Detailed repro steps

The nightly build binaries contain commit IDs that do not exist on Github

As far as I understand, there are 2 different git trees.
The first one is on Github, github.com/google/gvisor. The other one is on Google Source, where Gerrit is located. Both trees are public and have the same sequence of commits, but different commit IDs.

Nightly build binaries seem to be built from Google Source's tree, not from Github. I personally don't have a problem with this, as I know where to find it. But I'm not sure whether it might confuse other users trying to find a commit that does not exist on Github.

What do you think about this?

docker run --runtime=runsc hello-world failed

I can run hello-world with runc, but it fails with runsc.

$sudo docker run --runtime=runsc hello-world
error reading spec: error unmarshaling spec from file "/var/run/docker/libcontainerd/bab9e5f444815a1a612eef51c2637d146ec84bb5ed882310d8909b4b0e900f3a/config.json": json: cannot unmarshal array into Go struct field Process.capabilities of type specs.LinuxCapabilities
{"ociVersion":"1.0.0-rc2-dev","platform":{"os":"linux","arch":"amd64"},"process":{"consoleSize":{"height":0,"width":0},"user":{"uid":0,"gid":0},"args":["/hello"],"env":["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin","HOSTNAME=bab9e5f44481"],"cwd":"/","capabilities":["CAP_CHOWN","CAP_DAC_OVERRIDE","CAP_FSETID","CAP_FOWNER","CAP_MKNOD","CAP_NET_RAW","CAP_SETGID","CAP_SETUID","CAP_SETFCAP","CAP_SETPCAP","CAP_NET_BIND_SERVICE","CAP_SYS_CHROOT","CAP_KILL","CAP_AUDIT_WRITE","CAP_SYS_RESOURCE","CAP_SYS_MODULE","CAP_SYS_PTRACE","CAP_SYS_PACCT","CAP_NET_ADMIN","CAP_SYS_ADMIN"]},"root":{"path":"/home/docker/overlay/60442221f3ecdcf8f4fd2db4ebcda9d13c8b705c84d4410f3740c7c9fa1411a8/merged"},"hostname":"bab9e5f44481","mounts":[{"destination":"/proc","type":"proc","source":"proc","options":["nosuid","noexec","nodev"]},{"destination":"/dev","type":"tmpfs","source":"tmpfs","options":["nosuid","strictatime","mode=755"]},{"destination":"/dev/pts","type":"devpts","source":"devpts","options":["nosuid","noexec","newinstance","ptmxmode=0666","mode=0620","gid=5"]},{"destination":"/sys","type":"sysfs","source":"sysfs","options":["nosuid","noexec","nodev","ro"]},{"destination":"/dev/mqueue","type":"mqueue","source":"mqueue","options":["nosuid","noexec","nodev"]},{"destination":"/sys/fs/cgroup","type":"cgroup","source":"cgroup","options":["ro","nosuid","noexec","nodev"]},{"destination":"/etc/resolv.conf","type":"bind","source":"/home/docker/containers/bab9e5f444815a1a612eef51c2637d146ec84bb5ed882310d8909b4b0e900f3a/resolv.conf","options":["rbind","rprivate"]},{"destination":"/etc/hostname","type":"bind","source":"/home/docker/containers/bab9e5f444815a1a612eef51c2637d146ec84bb5ed882310d8909b4b0e900f3a/hostname","options":["rbind","rprivate"]},{"destination":"/etc/hosts","type":"bind","source":"/home/docker/containers/bab9e5f444815a1a612eef51c2637d146ec84bb5ed882310d8909b4b0e900f3a/hosts","options":["rbind","rprivate"]},{"destination":"/dev/shm","type":"bind","source":"/home/docker/containers/bab9e5f444815a1a612eef51c2637d146ec84bb5ed882310d8909b4b0e900f3a/shm","options":["rbind","rprivate"]}],"hooks":{"prestart":[{"path":"/usr/bin/dockerd-1.12.6","args":["libnetwork-setkey","bab9e5f444815a1a612eef51c2637d146ec84bb5ed882310d8909b4b0e900f3a","2813e5d5164ba3526568ffd397e2074766a65f2290e686165c22087394efecd1"]}]},"annotations":{"__BlkBufferWriteBps":"0","__BlkBufferWriteSwitch":"0","__BlkFileLevelSwitch":"0","__BlkFileThrottlePath":"","__BlkMetaWriteTps":"0","__ali_network_alinet":"libnetwork-setkey bab9e5f444815a1a612eef51c2637d146ec84bb5ed882310d8909b4b0e900f3a 
2813e5d5164ba3526568ffd397e2074766a65f2290e686165c22087394efecd1","__ali_network_bridge":"docker0","__ali_network_endpoint_id":"713d54c1415be0b263534fe558d5e80314baa72f6a6b7f0cc6f83697d8e8445d","__ali_network_gateway":"192.168.5.1","__ali_network_mac":"02:42:c0:a8:05:02","__ali_network_prefix":"24","__ali_network_type":"bridge","__cput_bvt_warp_ns":"-2","__intel_rdt.l3_cbm":"","__memory_extra_in_bytes":"0","__memory_force_empty_ctl":"-1","__memory_wmark_ratio":"0"},"linux":{"resources":{"devices":[{"allow":false,"access":"rwm"},{"allow":true,"type":"c","major":1,"minor":5,"access":"rwm"},{"allow":true,"type":"c","major":1,"minor":3,"access":"rwm"},{"allow":true,"type":"c","major":1,"minor":9,"access":"rwm"},{"allow":true,"type":"c","major":1,"minor":8,"access":"rwm"},{"allow":true,"type":"c","major":5,"minor":0,"access":"rwm"},{"allow":true,"type":"c","major":5,"minor":1,"access":"rwm"},{"allow":false,"type":"c","major":10,"minor":229,"access":"rwm"}],"disableOOMKiller":false,"oomScoreAdj":0,"memory":{"swappiness":18446744073709551615},"cpu":{"ScheLatSwitch":null},"pids":{"limit":0},"blockIO":{"blkioWeight":0,"ThrottleBufferWriteBpsDevice":null,"ThrottleDeviceIdleTime":null,"ThrottleDeviceLatencyTarget":null}},"cgroupsPath":"/docker/bab9e5f444815a1a612eef51c2637d146ec84bb5ed882310d8909b4b0e900f3a","namespaces":[{"type":"mount"},{"type":"network"},{"type":"uts"},{"type":"pid"},{"type":"ipc"},{"type":"cgroup"}],"devices":[{"path":"/dev/fuse","type":"c","major":10,"minor":229,"fileMode":438,"uid":0,"gid":0}],"maskedPaths":["/proc/kcore","/proc/latency_stats","/proc/timer_list","/proc/timer_stats","/proc/sched_debug"],"readonlyPaths":["/proc/asound","/proc/bus","/proc/fs","/proc/irq","/proc/sys","/proc/sysrq-trigger"]}}

docker: Error response from daemon: containerd: container not started.

runsc will crash after a SIGINT signal

Hello guys,
I found an interesting behavior in gVisor.
When I ran a container with docker run --runtime=runsc -it ubuntu /bin/bash,
it crashed after the following steps:

[root@container gvisor]# docker run --runtime=runsc -it ubuntu /bin/bash
root@313125f6e36d:/# ^C
root@313125f6e36d:/# ls
[root@container gvisor]#

This problem does not occur every time, but it happens frequently.

OS: CentOS 7
docker version: 18.03.1-ce
kernel version: Linux 4.16.7-1.el7.elrepo.x86_64 x86_64 x86_64 x86_64 GNU/Linux

support for nvidia-docker GPU container sandboxing

In order to expose GPUs in K8s, you'll have to install nvidia-docker as an additional container runtime. A lot of people would surely love to run sandboxed containers with GPU support, though.

Do you guys see an easy way to layer one over the other, maybe?

how gVisor intercepts syscalls for containers

gVisor intercepts all system calls made by the application

According to the docs, gVisor intercepts container syscalls in host user space. I know Sysdig needs to install a kernel module for this purpose. Could you share more technical details about gVisor's syscall interception?

Issues Building

I've tried it on macOS and Linux, and the build fails in both cases:

Linux:

root@609afc8eda04:/home/gvisor# bazel build runsc
Starting local Bazel server and connecting to it...
.................................................................
INFO: Analysed target //runsc:runsc (173 packages loaded).
INFO: Found 1 target...
ERROR: /home/gvisor/vdso/BUILD:8:1: Executing genrule //vdso:vdso failed (Exit 127)
/usr/bin/env: 'python': No such file or directory
Target //runsc:runsc failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 115.477s, Critical Path: 9.28s
INFO: 191 processes: 10 local, 181 processwrapper-sandbox.
FAILED: Build did NOT complete successfully

macOS:

o85196-0809:gvisor qrpike$ bazel build runsc
Extracting Bazel installation...
Starting local Bazel server and connecting to it...
.............
INFO: Analysed target //runsc:runsc (174 packages loaded).
INFO: Found 1 target...
INFO: From Compiling external/com_google_protobuf/src/google/protobuf/compiler/js/embed.cc [for host]:
external/com_google_protobuf/src/google/protobuf/compiler/js/embed.cc:37:12: warning: unused variable 'output_file' [-Wunused-const-variable]
const char output_file[] = "well_known_types_embed.cc";
           ^
1 warning generated.
ERROR: /Users/qrpike/code/gvisor/vdso/BUILD:8:1: Executing genrule //vdso:vdso failed (Exit 1)
clang: error: invalid linker name in argument '-fuse-ld=gold'
Target //runsc:runsc failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 75.854s, Critical Path: 8.52s
INFO: 111 processes: 101 darwin-sandbox, 10 local.
FAILED: Build did NOT complete successfully

I have developer tools, xcode, etc installed.

bazel build requires python2

When compiling with bazel build runsc, the build assumes that the default python version is 2. When using version 3, one can expect error messages such as bytes-like object is required, not 'str' or CRITICAL:root:VDSO contains relocations: b'\nThere are no relocations in this file.\n'.

Changing the default python version to python2 fixes this issue.
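Until the build pins its interpreter, one blunt workaround consistent with the report above is to put a python -> python2 shim ahead of python3 on PATH for the build only (assuming python2 is installed):

mkdir -p /tmp/py2-shim
ln -sf "$(command -v python2)" /tmp/py2-shim/python
PATH=/tmp/py2-shim:$PATH bazel build runsc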

Memory layout and vulnerability to side-channel attacks

Q1: Is the memory layout of the guest the same as if it ran directly on the host kernel? Specifically, if the underlying host kernel maps all physical memory into the virtual address space of each process (the mapping that made Meltdown possible), is that mapping preserved in the gVisor container?

Q2: Assuming the underlying host kernel and CPU are vulnerable to side-channel attacks (the same as or similar to Meltdown/Spectre), would a guest in one gVisor container be able to attack a guest in another gVisor container on the same host, and why?

Do real revalidation of "remote" file systems

Re this item in the FAQ:

I can’t see a file copied with docker cp.

For performance reasons, gVisor caches directory contents, and therefore it may not realize a new file was copied to a given directory. To invalidate the cache and force a refresh, create a file under the directory in question and list the contents again.
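In shell terms, the documented workaround is just:

# Inside the container: /some/dir stands in for the directory that received the docker cp.
touch /some/dir/.refresh
rm /some/dir/.refresh
ls /some/dir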

DirentOperations (akin to dentry_operations in Linux; embedded in MountOperations, which are about to be renamed MountSourceOperations, and are akin to super_operations) currently have Revalidate and Keep methods.

In Linux, revalidation will stat the inode to see if anything has changed. If it has, it re-walks to that node.

In gVisor, revalidation always returns false for host files, so the sandbox will notice the new file either when it falls out of the Dirent cache or when you readdir the containing directory. For Gofer files, revalidation is controlled by whether Dirents are cached or not, which should also not be the case.

We should be following Linux more closely here, which would also resolve the above problem. For Gofers, we should provide a configuration option to say "don't revalidate" that one can use as a performance optimization if they know only gVisor has access to that mount and nothing else can add/change/remove files.
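As a rough illustration of the Linux behavior described above (stat the backing file and re-walk on change), here is a sketch in Go. The types and names are hypothetical; gVisor's actual Dirent/Revalidate API differs:

package main

import (
	"fmt"
	"syscall"
)

// cachedDirent is a hypothetical cached directory entry.
type cachedDirent struct {
	path  string
	ino   uint64
	mtime syscall.Timespec
}

// revalidate stats the backing file and reports whether the cached entry is
// still valid. On a mismatch, the caller would re-walk the path, as Linux's
// d_revalidate does.
func (d *cachedDirent) revalidate() (bool, error) {
	var st syscall.Stat_t
	if err := syscall.Stat(d.path, &st); err != nil {
		return false, err
	}
	return st.Ino == d.ino && st.Mtim == d.mtime, nil
}

func main() {
	d := &cachedDirent{path: "/etc/hosts"}
	ok, err := d.revalidate()
	fmt.Println(ok, err)
}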

Missing netstack license

Many files under pkg/ have a header:

// Copyright 2016 The Netstack Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

However there is no LICENSE file (nor any similarly named files) in those subdirectories and the top-level LICENSE has an Apache 2.0 license.

I think you need to include a copy of https://github.com/google/netstack/blob/master/LICENSE?

runsc runtime not working with centos 7.5

Hi

I was trying to change the Docker runtime to runsc and spawn a hello-world container, but got this error:

docker: Error response from daemon: OCI runtime create failed: /usr/local/bin/runsc did not terminate sucessfully: unknown.

Kindly suggest.

Below are the infrastructure and software versions:

[root@localhost ~]# cat /etc/redhat-release
CentOS Linux release 7.5.1804 (Core)
[root@localhost ~]# uname -a
Linux localhost.localdomain 3.10.0-862.2.3.el7.x86_64 #1 SMP Wed May 9 18:05:47 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

Docker:

[root@localhost ~]# docker info
Containers: 39
Running: 22
Paused: 0
Stopped: 17
Images: 35
Server Version: 18.03.1-ce
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc runsc
Default Runtime: runc
Init Binary: docker-init
containerd version: 773c489c9c1b21a6d78b5c538cd395416ec50f88
runc version: 4fc53a81fb7c994640722ac585fa9ca548971871
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 3.10.0-862.2.3.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.786GiB
Name: localhost.localdomain
ID: VUGP:HS3G:AFXL:MO42:B277:5EE5:GFRZ:CTUR:BXZJ:LWPX:C574:BQQN
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false

[root@localhost runsc]# cat /etc/docker/daemon.json
{
    "runtimes": {
        "runsc": {
            "path": "/usr/local/bin/runsc",
            "runtimeArgs": [
                "--debug-log-dir=/tmp/runsc",
                "--debug",
                "--strace"
            ]
        }
    }
}

error.txt

docker returns additional bad error messages when binary is not found

$ docker run  -ti --runtime=runsc ubuntu:14.04 /bin/not-found

produces:

error running sandbox: failed to create init process: no such file or directory
docker: Error response from daemon: OCI runtime start failed: unable to retrieve OCI runtime error (invalid character '\x00' looking for beginning of value): /usr/local/bin/runsc did not terminate sucessfully: error starting sandbox: error starting sandbox: EOF
: unknown.
docker version
Client:
 Version:	18.04.0-ce
 API version:	1.37
 Go version:	go1.9.4
 Git commit:	3d479c0
 Built:	Tue Apr 10 18:20:32 2018
 OS/Arch:	linux/amd64
 Experimental:	false
 Orchestrator:	swarm

Server:
 Engine:
  Version:	18.04.0-ce
  API version:	1.37 (minimum version 1.12)
  Go version:	go1.9.4
  Git commit:	3d479c0
  Built:	Tue Apr 10 18:18:40 2018
  OS/Arch:	linux/amd64
  Experimental:	true

Docker daemon logs:

ime="2018-05-04T11:51:52.407775210+02:00" level=error msg="Handler for POST /v1.37/containers/12ceaf56c05d89dcd89dc3453eae43a11eb5c84934124b1d65a8b07e82cfe8de/start returned error: OCI runtime start failed: unable to retrieve OCI runtime error (invalid character '\\x00' looking for beginning of value): /usr/local/bin/runsc did not terminate sucessfully: error starting sandbox: error starting sandbox: EOF\n: unknown"

runc error code:

docker run  -ti  ubuntu:14.04 /bin/not-found
docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"/bin/not-found\": stat /bin/not-found: no such file or directory": unknown.

Check for getcwd when size is zero but addr is not NULL

I added more checks for getcwd. From the man page:

       EINVAL The size argument is zero and buf is not a null pointer.

       EINVAL getwd(): buf is NULL.

       ENAMETOOLONG
              getwd(): The size of the null-terminated absolute pathname
              string exceeds PATH_MAX bytes.

The first EINVAL check seems OK to me, but I'm not sure whether the second EINVAL and the third ENAMETOOLONG checks are necessary.
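For reference, the first check translates to something like this sketch (a hypothetical helper, not gVisor's actual handler; note that the getwd() cases quoted above are libc-wrapper behaviors rather than raw-syscall ones):

package main

import (
	"fmt"
	"syscall"
)

// getcwdCheck mirrors getcwd(2)'s EINVAL rule: a zero size combined with a
// non-NULL buffer is invalid.
func getcwdCheck(addr uintptr, size uint64) error {
	if size == 0 && addr != 0 {
		return syscall.EINVAL
	}
	return nil
}

func main() {
	fmt.Println(getcwdCheck(0x1000, 0)) // invalid argument
	fmt.Println(getcwdCheck(0, 0))      // <nil>
}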

Memory mapping error on KVM platform

Command:

/usr/local/bin/runsc --debug-log-dir=/tmp/runsc --debug --strace --platform=kvm --root /var/run/docker/runtime-runsc/moby --log /run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/7ab3f04204712b08eabcf7cd36a2a8b84ee3dca31a9fb736a54ed04636097477/log.json --log-format json state 7ab3f04204712b08eabcf7cd36a2a8b84ee3dca31a9fb736a54ed04636097477

Output (truncated, ignoring other goroutines):

runtime: address space conflict: map(0xc41ffe0000) = 0xc4419e1000 (err 0)
fatal error: runtime: address space conflict

runtime stack:
runtime.throw(0xb35314, 0x1f)
	GOROOT/src/runtime/panic.go:616 +0x81
runtime.sysMap(0xc41ffe0000, 0x8000, 0x100000, 0xfc92d8)
	GOROOT/src/runtime/mem_linux.go:220 +0x1ef
runtime.(*mheap).mapBits(0xfb0800, 0xc420400000)
	GOROOT/src/runtime/mbitmap.go:160 +0xa9
runtime.(*mheap).setArenaUsed(0xfb0800, 0xc420400000, 0xfb0801)
	GOROOT/src/runtime/mheap.go:545 +0x35
runtime.(*mheap).sysAlloc(0xfb0800, 0x100000, 0x7f33ee743c58)
	GOROOT/src/runtime/malloc.go:473 +0x12e
runtime.(*mheap).grow(0xfb0800, 0x1, 0x0)
	GOROOT/src/runtime/mheap.go:907 +0x60
runtime.(*mheap).allocSpanLocked(0xfb0800, 0x1, 0xfc9288, 0x7f33ee743c58)
	GOROOT/src/runtime/mheap.go:820 +0x301
runtime.(*mheap).alloc_m(0xfb0800, 0x1, 0xfa003c, 0x7f33ee743c58)
	GOROOT/src/runtime/mheap.go:686 +0x118
runtime.(*mheap).alloc.func1()
	GOROOT/src/runtime/mheap.go:753 +0x4d
runtime.(*mheap).alloc(0xfb0800, 0x1, 0x7f33ee01003c, 0x7f33ee743c58)
	GOROOT/src/runtime/mheap.go:752 +0x8a
runtime.(*mcentral).grow(0xfb2a50, 0x0)
	GOROOT/src/runtime/mcentral.go:232 +0x94
runtime.(*mcentral).cacheSpan(0xfb2a50, 0x7f33ee743c58)
	GOROOT/src/runtime/mcentral.go:106 +0x2e4
runtime.(*mcache).refill(0x7f33ee7976c8, 0xc42019a33c)
	GOROOT/src/runtime/mcache.go:123 +0x9c
runtime.(*mcache).nextFree.func1()
	GOROOT/src/runtime/malloc.go:556 +0x32
runtime.systemstack(0x0)
	bazel-out/k8-fastbuild/bin/external/io_bazel_rules_go/linux_amd64_pure_stripped/stdlib~/src/runtime/asm_amd64.s:409 +0x79
runtime.mstart()
	GOROOT/src/runtime/proc.go:1175

Start log seems to report issue similar to that reported in #11:

D0503 18:05:45.842655    4186 x:0] Signal sandbox "7ab3f04204712b08eabcf7cd36a2a8b84ee3dca31a9fb736a54ed04636097477"
D0503 18:05:45.842731    4186 x:0] Start sandbox "7ab3f04204712b08eabcf7cd36a2a8b84ee3dca31a9fb736a54ed04636097477", pid: 4165
D0503 18:05:45.842756    4186 x:0] Executing hook {Path:/proc/1827/exe Args:[libnetwork-setkey 7ab3f04204712b08eabcf7cd36a2a8b84ee3dca31a9fb736a54ed04636097477 dea7bb6eba892c0866b8401cdc7d8cc10889b740f7b53fd650aa22f707b51453] Env:[] Timeout:<nil>}, state: {Version:1.0.1-dev ID:7ab3f04204712b08eabcf7cd36a2a8b84ee3dca31a9fb736a54ed04636097477 Status:created Pid:4165 Bundle:/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/7ab3f04204712b08eabcf7cd36a2a8b84ee3dca31a9fb736a54ed04636097477 Annotations:map[]}
D0503 18:05:45.885852    4186 x:0] Destroy sandbox "7ab3f04204712b08eabcf7cd36a2a8b84ee3dca31a9fb736a54ed04636097477"
D0503 18:05:45.885953    4186 x:0] Killing sandbox "7ab3f04204712b08eabcf7cd36a2a8b84ee3dca31a9fb736a54ed04636097477"
D0503 18:05:45.885990    4186 x:0] Killing gofer for sandbox "7ab3f04204712b08eabcf7cd36a2a8b84ee3dca31a9fb736a54ed04636097477"
W0503 18:05:45.886227    4186 x:0] FATAL ERROR: error starting sandbox: failure executing hook "/proc/1827/exe", err: exit status 1
stdout: 
stderr: time="2018-05-03T18:05:45+02:00" level=fatal msg="no such file or directory"

Content of /etc/docker/daemon.json:

{
    "runtimes": {
        "runsc": {
            "path": "/usr/local/bin/runsc",
            "runtimeArgs": [
                "--debug-log-dir=/tmp/runsc",
                "--debug",
                "--strace",
                "--platform=kvm"
            ]
        }
    }
}

/proc/cpuinfo (truncated):

processor	: 3
vendor_id	: GenuineIntel
cpu family	: 6
model		: 42
model name	: Intel(R) Core(TM) i5-2557M CPU @ 1.70GHz
stepping	: 7
microcode	: 0x23
cpu MHz		: 797.407
cache size	: 3072 KB
physical id	: 0
siblings	: 4
core id		: 1
cpu cores	: 2
apicid		: 3
initial apicid	: 3
fpu		: yes
fpu_exception	: yes
cpuid level	: 13
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm epb pti tpr_shadow vnmi flexpriority ept vpid xsaveopt dtherm ida arat pln pts
bugs		: cpu_meltdown spectre_v1 spectre_v2
bogomips	: 3390.19
clflush size	: 64
cache_alignment	: 64
address sizes	: 36 bits physical, 48 bits virtual
power management:

Pre-built binaries?

Nice project! Would it be possible to have pre-built binaries generated, to decrease the time needed to get up and running?

build failed on Mac

Starting local Bazel server and connecting to it...
............
INFO: Analysed target //runsc:runsc (174 packages loaded).
INFO: Found 1 target...
INFO: From Compiling external/com_google_protobuf/src/google/protobuf/compiler/js/embed.cc [for host]:
external/com_google_protobuf/src/google/protobuf/compiler/js/embed.cc:37:12: warning: unused variable 'output_file' [-Wunused-const-variable]
const char output_file[] = "well_known_types_embed.cc";
           ^
1 warning generated.
ERROR: /Users/[REDACTED]/code/gvisor/vdso/BUILD:8:1: Executing genrule //vdso:vdso failed (Exit 1)
clang: error: invalid linker name in argument '-fuse-ld=gold'
Target //runsc:runsc failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 84.488s, Critical Path: 9.48s
INFO: 125 processes: 115 darwin-sandbox, 10 local.
FAILED: Build did NOT complete successfully
$bazel build runsc --verbose_failures --sandbox_debug
INFO: Analysed target //runsc:runsc (0 packages loaded).
INFO: Found 1 target...
ERROR: /Users/[REDACTED]/code/gvisor/vdso/BUILD:8:1: Executing genrule //vdso:vdso failed (Exit 1): sandbox-exec failed: error executing command
  (cd /private/var/tmp/_bazel_[REDACTED]/2aa46d382f3da4c861c496bba67d8af8/execroot/__main__ && \
  exec env - \
    PATH=/Library/Frameworks/Python.framework/Versions/3.6/bin:/bin:/Users/[REDACTED]/code/flutter/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/go/bin:/opt/X11/bin:/usr/local/opt/go/libexec/bin:/Users/[REDACTED]/Library/Python/2.7/bin:/Users/[REDACTED]/code/gocode/bin:/Users/[REDACTED]/Library/Android/sdk/tools:/Users/[REDACTED]/Library/Android/sdk/platform-tools:/Users/[REDACTED]/.mix:/Users/[REDACTED]/.mix/escripts:/Users/[REDACTED]/.pub-cache/bin/ \
    TMPDIR=/var/folders/65/cn602n1d21z_cn3cwnjmy5lwrnkmr0/T/ \
  /usr/bin/sandbox-exec -f /private/var/tmp/_bazel_[REDACTED]/2aa46d382f3da4c861c496bba67d8af8/sandbox/565523233562725973/sandbox.sb /private/var/tmp/_bazel_[REDACTED]/2aa46d382f3da4c861c496bba67d8af8/execroot/__main__/_bin/process-wrapper '--timeout=0' '--kill_delay=15' /bin/bash -c 'source external/bazel_tools/tools/genrule/genrule-setup.sh; external/local_config_cc/cc_wrapper.sh  -I. -O2 -std=c++11 -fPIC -fuse-ld=gold -m64 -shared -nostdlib -Wl,-soname=linux-vdso.so.1 -Wl,--hash-style=sysv -Wl,--no-undefined -Wl,-Bsymbolic -Wl,-z,max-page-size=4096 -Wl,-z,common-page-size=4096 -Wl,-Tvdso/vdso.lds -o bazel-out/darwin-fastbuild/genfiles/vdso/vdso.so vdso/vdso.cc vdso/vdso_time.cc && bazel-out/host/bin/vdso/check_vdso --check-data --vdso bazel-out/darwin-fastbuild/genfiles/vdso/vdso.so ')
clang: error: invalid linker name in argument '-fuse-ld=gold'
Target //runsc:runsc failed to build
INFO: Elapsed time: 0.548s, Critical Path: 0.23s
INFO: 0 processes.
FAILED: Build did NOT complete successfully

RAW sockets are not supported

SOCK_RAW is not supported. Most ping implementations depend on it.

Ping sockets are supported, which are used by newer versions of ping. For example, Ubuntu 18.04 includes a sufficiently modern version of ping (make sure you are using the most recent version by running docker pull ubuntu).
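The distinction is visible at the socket level. A minimal sketch (plain Linux socket calls, not gVisor internals) that uses a ping socket instead of a raw socket:

package main

import (
	"fmt"
	"syscall"
)

func main() {
	// Newer ping implementations use "ping sockets" (SOCK_DGRAM + IPPROTO_ICMP),
	// which gVisor supports; SOCK_RAW is what is unsupported. On stock Linux,
	// this call also requires net.ipv4.ping_group_range to cover the caller's
	// group.
	fd, err := syscall.Socket(syscall.AF_INET, syscall.SOCK_DGRAM, syscall.IPPROTO_ICMP)
	if err != nil {
		fmt.Println("ping socket unavailable:", err)
		return
	}
	defer syscall.Close(fd)
	fmt.Println("ping socket opened")
}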

KVM platform doesn't seem to work

First of all, what a cool project! I'm trying to use the kvm platform backend and running into an issue. I turned logging on and get the following:

I0502 09:38:41.822182    2663 x:0] ***************************
I0502 09:38:41.822305    2663 x:0] Args: [/usr/local/bin/runsc --network=host --debug-log-dir=/tmp/runsc --debug --strace --platform=kvm --root /var/run/docker/runtime-runsc/moby --log /run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/1570d7168cf1a1ef25052cf0433701779f9d7fa11a98c241f88b25bf1c5c8b05/log.json --log-format json start 1570d7168cf1a1ef25052cf0433701779f9d7fa11a98c241f88b25bf1c5c8b05]
I0502 09:38:41.822353    2663 x:0] PID: 2663
I0502 09:38:41.822379    2663 x:0] UID: 0, GID: 0
I0502 09:38:41.822402    2663 x:0] Configuration:
I0502 09:38:41.822432    2663 x:0]              RootDir: /var/run/docker/runtime-runsc/moby
I0502 09:38:41.822456    2663 x:0]              Platform: kvm
I0502 09:38:41.822483    2663 x:0]              FileAccess: proxy, overlay: false
I0502 09:38:41.822518    2663 x:0]              Network: host, logging: false
I0502 09:38:41.822546    2663 x:0]              Strace: true, max size: 1024, syscalls: []
I0502 09:38:41.822571    2663 x:0] ***************************
D0502 09:38:41.822599    2663 x:0] Load sandbox "/var/run/docker/runtime-runsc/moby" "1570d7168cf1a1ef25052cf0433701779f9d7fa11a98c241f88b25bf1c5c8b05"
D0502 09:38:41.824406    2663 x:0] Signal sandbox "1570d7168cf1a1ef25052cf0433701779f9d7fa11a98c241f88b25bf1c5c8b05"
D0502 09:38:41.824445    2663 x:0] Start sandbox "1570d7168cf1a1ef25052cf0433701779f9d7fa11a98c241f88b25bf1c5c8b05", pid: 2639
D0502 09:38:41.824476    2663 x:0] Executing hook {Path:/usr/bin/dockerd Args:[libnetwork-setkey 1570d7168cf1a1ef25052cf0433701779f9d7fa11a98c241f88b25bf1c5c8b05 39ab48b69d8788a6b7c56f380259da7713ca1247b463b0f7317b03767a59c2bc] Env:[] Timeout:<nil>}, state: {Version:1.0.1-dev ID:1570d7168cf1a1ef25052cf0433701779f9d7fa11a98c241f88b25bf1c5c8b05 Status:created Pid:2639 Bundle:/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/1570d7168cf1a1ef25052cf0433701779f9d7fa11a98c241f88b25bf1c5c8b05 Annotations:map[]}
D0502 09:38:41.851177    2663 x:0] Destroy sandbox "1570d7168cf1a1ef25052cf0433701779f9d7fa11a98c241f88b25bf1c5c8b05"
D0502 09:38:41.851331    2663 x:0] Killing sandbox "1570d7168cf1a1ef25052cf0433701779f9d7fa11a98c241f88b25bf1c5c8b05"
D0502 09:38:41.851369    2663 x:0] Killing gofer for sandbox "1570d7168cf1a1ef25052cf0433701779f9d7fa11a98c241f88b25bf1c5c8b05"
W0502 09:38:41.852246    2663 x:0] FATAL ERROR: error starting sandbox: failure executing hook "/usr/bin/dockerd", err: exit status 1
stdout:
stderr: time="2018-05-02T09:38:41-07:00" level=fatal msg="no such file or directory"

The command run was docker run --runtime=runsc hello-world

Docker version: Docker version 17.12.0-ce, build c97c6d6

I guess it's trying to execute the hooks but the fs namespace has already been unbound?

Support AppArmor profiles?

Twitter conversation context:

I understand. I'm looking at protecting the container from external attacks, not container escapes (I can use gVisor to protect against those). I can do this on Ubuntu by running Docker with --security-opt apparmor=<my_custom_profile>. Will gVisor support this? -- @securityfoo

Supporting AppArmor profiles doesn't actually seem that hard:

I don't know if this is something we want to support, but I'm going to leave all the bits here and let someone else decide.

First ping/UDP packet to destination fails with EHOSTDOWN

The first ping or UDP packet sent to a destination will fail with EHOSTDOWN (host is down).

The relevant check (quoted twice in the original report, presumably from the ping and UDP send paths) is:

if err == tcpip.ErrWouldBlock {
	// Link address needs to be resolved. Resolution was triggered in the
	// background. Better luck next time.
	//
	// TODO: queue up the request and send after link address
	// is resolved.
	route.RemoveWaker(waker)
	return 0, tcpip.ErrNoLinkAddress
}

Failed to run portainer

When running Portainer (portainer/portainer) with a volume binding, the container cannot be started.

  • docker version or docker info if more relevant
    Docker 18.03.1

  • uname -a

Linux ip-172-31-23-9 4.4.0-1049-aws #58-Ubuntu SMP Fri Jan 12 23:17:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
  • Full docker command you ran
    docker run -p 9000:9000 --privileged -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer

Error log from Docker daemon:

returned error: OCI runtime create failed: unable to retrieve OCI runtime error (invalid character 'm' after object key)

Log files:
runsc.log.20180504-113618.395832.boot.txt
runsc.log.20180504-113618.393880.gofer.txt
runsc.log.20180504-113618.383749.create.txt

Idea: host-OS-independence with alternate syscall interface

I have an idea for this that builds on an idea I've had for a long time but have never had (and probably will not have) the time to implement.

The idea is to pseudo-virtualize Linux binaries by rebuilding them to call a function in user-space instead of executing CPU-level syscalls. This would require apps to be rebuilt from source, making it effectively a different "hardware architecture" (e.g. x86_64-uvirt or something) from standard Linux, but would otherwise be fully compatible. This would eliminate the ptrace requirement to make it portable and also might be faster.

It seems like it would be borderline trivial to do this with gVisor.

This would open up the possibility of embedding Linux hosts inside anything including desktop and mobile apps or server processes for other OSes like Windows. You could port, desktop-ify, or even mobile-ify any complex Linux application.

groupadd fails: lock file already used

Testing takes like 2 seconds:

  • docker run --runtime=runsc -it ubuntu /bin/bash
  • root@123456abcdefg:/# test
  • Returns with the following:
groupadd: /etc/group.6: lock file already used
groupadd: cannot lock /etc/group; try again later.

Nothing seems to show up in the logs when trying with debugging. I confirmed that this is happening both with and without KVM, and only with gVisor.

Install failure

➜  gvisor git:(master) bazel build runsc
INFO: Analysed target //runsc:runsc (0 packages loaded).
INFO: Found 1 target...
INFO: From Linking external/com_google_protobuf/libprotobuf_lite.a [for host]:
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: bazel-out/host/bin/external/com_google_protobuf/libprotobuf_lite.a(arenastring.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: bazel-out/host/bin/external/com_google_protobuf/libprotobuf_lite.a(atomicops_internals_x86_msvc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: bazel-out/host/bin/external/com_google_protobuf/libprotobuf_lite.a(io_win32.o) has no symbols
INFO: From Linking external/com_google_protobuf/libprotobuf.a [for host]:
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: bazel-out/host/bin/external/com_google_protobuf/libprotobuf.a(gzip_stream.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: bazel-out/host/bin/external/com_google_protobuf/libprotobuf.a(error_listener.o) has no symbols
ERROR: /Users/thomas/Desktop/gggg/gvisor/vdso/BUILD:8:1: Executing genrule //vdso:vdso failed (Exit 1)
clang: error: invalid linker name in argument '-fuse-ld=gold'
Target //runsc:runsc failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 39.244s, Critical Path: 11.96s
INFO: 31 processes, darwin-sandbox.
FAILED: Build did NOT complete successfully

OCI runtime exec error: flag provided but not defined: -console-socket

gvisor1:2ehal1nT:G7HJX100# docker exec -ti 123 bash
OCI runtime exec failed: /var/lib/docker/runtimes/runsc did not terminate sucessfully: flag provided but not defined: -console-socket
exec [command options] <container-id> <command> [command options] || --process process.json <container-id>

Where "<container-id>" is the name for the instance of the container and "<command>" is the command to be executed in the container. "<command>" can't be empty unless a "-process" flag provided.
gvisor1:2ehal1nT:G7HJX100# docker version
Client:
 Version:      18.03.1-ce
 API version:  1.37
 Go version:   go1.9.5
 Git commit:   9ee9f40
 Built:        Thu Apr 26 07:17:20 2018
 OS/Arch:      linux/amd64
 Experimental: false
 Orchestrator: swarm

Server:
 Engine:
  Version:      18.03.1-ce
  API version:  1.37 (minimum version 1.12)
  Go version:   go1.9.5
  Git commit:   9ee9f40
  Built:        Thu Apr 26 07:15:30 2018
  OS/Arch:      linux/amd64
  Experimental: false
gvisor1:2ehal1nT:G7HJX100# uname -a
Linux gvisor1 4.4.0-122-generic #146-Ubuntu SMP Mon Apr 23 15:34:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

Code to repro:

https://github.com/ianmiell/shutit-gvisor/blob/master/build.py

sched_setattr: Function not implemented

Hey there,
I wanted to test some realtime scheduling and tried to measure the timings and overhead caused by gVisor.

Using the debug flag on gVisor, I noticed that I get:

write(0x4, 0x7facec000af0 "sched_setattr: Function not implemented\n", 0x28)

Therefore I cannot run and test my application. Any advice?

docker command : docker run --runtime=runsc --cap-add SYS_NICE -i -d -t simple-interpolation
docker version: Docker version 18.03.1-ce
uname -a : 4.9.84-rt62 SMP PREEMPT RT Tue May 8 18:11:18 CEST 2018 x86_64 x86_64 x86_64 GNU/Linux

kurento start failed

I replaced the runtime with runsc and tried to start the official Kurento media server, but it failed with the following log output (a shell trace, rendered here with its original + prefixes):
root@node23:/test# docker logs 5a9b7c5b1f1a

+ set -e
+ '[' -n '' ']'
+ '[' -n '' -a -n '' ']'
+ cat /etc/hosts
+ sed /::1/d
+ tee /etc/hosts
tee: /etc/hosts: Invalid argument

my docker environment:
root@node23:/test# docker version
Client:
Version: 17.12.1-ce
API version: 1.35
Go version: go1.9.4
Git commit: 7390fc6
Built: Tue Feb 27 22:17:40 2018
OS/Arch: linux/amd64

Server:
Engine:
Version: 17.12.1-ce
API version: 1.35 (minimum version 1.12)
Go version: go1.9.4
Git commit: 7390fc6
Built: Tue Feb 27 22:16:13 2018
OS/Arch: linux/amd64
Experimental: false

Besides, I didn't see any obvious error in the logs with the runsc --debug flag on; only the usual I0509 and D0509 startup prints.

I'm looking forward to your help, thanks in advance.

Cgroups access?

Does this runtime require cgroups to be enabled at the kernel level? I am trying to run containers on a Linux build that has very little support for anything.

Thanks,

Nginx does not run

Depending on the configuration, you may see nginx fail with the error:

ioctl(FIOASYNC) failed while spawning "worker process" (25: Inappropriate ioctl for device)

Support for FIOASYNC is in progress, but it’s not available yet. For now, add
the line below to /etc/nginx/nginx.conf:

master_process off;

Failed to start hello world on Ubuntu 16.04

$ sudo docker run --runtime=runsc hello-world
flag provided but not defined: -console
create [flags] <container id> - create a secure container
  -bundle string
    	path to the root of the bundle directory, defaults to the current directory
  -console-socket string
    	path to an AF_UNIX socket which will receive a file descriptor referencing the master end of the console's pseudoterminal
  -pid-file string
    	filename that the sandbox pid will be written to
docker: Error response from daemon: containerd: container not started.

Invite to gvisor developers at the (HPC) SEA conference 2019

Folks,

Sorry for reaching out via the issue tracker; it's the best way I can think of on a Friday PM to reach you. By the way, your project looks really interesting. Feel free to close this issue and contact me by email in reply: https://staff.ucar.edu/users/ddvento

You may have heard of the first Containers in HPC Symposium at the SEA conference 2018, which was a great success. As a first event, we limited it to container technology geeks, but for the second one in 2019 we will also invite application developers (i.e., container users). As a matter of fact, we intended the event to be a one-off, but due to popular demand there will be a second symposium, and it will be made a staple of our conference.

I have a small survey (about when to have the event) which I will give to you privately if you are interested in participating.

error: undefined reference to '__stack_chk_fail' at build

When performing bazel build runsc under Arch, the following error occurs: error: undefined reference to '__stack_chk_fail'.

One can fix this issue by adding "-fno-stack-protector " + to the cmd args in the vdso/BUILD file (between -shared and -nostdlib).

EDIT: disabled stack protector instead of enabling it.
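For concreteness, a sketch of the change against the genrule cmd (abridged from the flags visible in the build logs earlier on this page; only the -fno-stack-protector line is new):

cmd = "... -fuse-ld=gold -m64 -shared " +
      "-fno-stack-protector " +
      "-nostdlib -Wl,-soname=linux-vdso.so.1 ...",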

how gVisor traps to the syscall handler using VMX

Hello all,
Just curious about the technical details of how gVisor traps to the syscall handler using VMX. I would be grateful if you could also point out the source code files and functions that accomplish this.

Thanks !

By the way, I asked this question a couple of days ago in the Google group, but nobody answered it, so I'm posting it here.
