Dockerfiles and assets for building Docker containers
There's no image for mesos 0.24.1. It'd be cool if there was one:)
Thanks in advance!
https://hub.docker.com/r/mesosphere/mesos-slave-dind/
is not kept up to date. Meanwhile, mesosphere/mesos-slave does not include the docker binary, which causes a bunch of problems. To make the host's /usr/bin/docker accessible from within the slave container, one needs to bind-mount /lib64 and other system directories, which imposes compatibility constraints on the host OS. mesos-slave-dind is really the way to go, but it has to be kept up to date. In fact, it looks like the Dockerfile used to build it has been removed from GitHub:
https://github.com/mesosphere/docker-containers/tree/master/mesos-slave-dind
I looked at the Dockerfile; it seems the latest mesos-slave image already installs docker-engine at build time. So I tried to run it without mapping in the host's docker:
docker run -d --net=host \
--privileged \
-e MESOS_PORT=5051 \
-e MESOS_MASTER=zk://[zk_host]/mesos \
-e MESOS_SWITCH_USER=0 \
-e MESOS_CONTAINERIZERS=docker,mesos \
-e MESOS_LOG_DIR=/var/log/mesos \
-e MESOS_WORK_DIR=/var/tmp/mesos \
-v /var/log/mesos:/var/log/mesos \
-v "$(pwd)/tmp/mesos:/var/tmp/mesos" \
--name slave mesosphere/mesos-slave:1.0.11.0.1-2.0.93.ubuntu1404
but I got the following error:
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0922 21:23:24.697176 1 main.cpp:243] Build: 2016-08-26 23:00:07 by ubuntu
I0922 21:23:24.697485 1 main.cpp:244] Version: 1.0.1
I0922 21:23:24.697528 1 main.cpp:247] Git tag: 1.0.1
I0922 21:23:24.697577 1 main.cpp:251] Git SHA: 3611eb0b7eea8d144e9b2e840e0ba16f2f659ee3
I0922 21:23:24.698982 1 logging.cpp:194] INFO level logging started!
SELinux: Could not open policy file <= /etc/selinux/targeted/policy/policy.30: No such file or directory
I0922 21:23:24.806118 1 containerizer.cpp:196] Using isolation: posix/cpu,posix/mem,filesystem/posix,network/cni
I0922 21:23:24.812340 1 linux_launcher.cpp:101] Using /sys/fs/cgroup/freezer as the freezer hierarchy for the Linux launcher
2016-09-22 21:23:24,814:1(0x7f310209c700):ZOO_INFO@log_env@726: Client environment:zookeeper.version=zookeeper C client 3.4.8
2016-09-22 21:23:24,814:1(0x7f310209c700):ZOO_INFO@log_env@730: Client environment:host.name=iZuf659w5fbowj61aj370mZ
2016-09-22 21:23:24,814:1(0x7f310209c700):ZOO_INFO@log_env@737: Client environment:os.name=Linux
2016-09-22 21:23:24,814:1(0x7f310209c700):ZOO_INFO@log_env@738: Client environment:os.arch=4.3.6-coreos
2016-09-22 21:23:24,814:1(0x7f310209c700):ZOO_INFO@log_env@739: Client environment:os.version=#2 SMP Tue May 3 21:48:31 UTC 2016
2016-09-22 21:23:24,814:1(0x7f310209c700):ZOO_INFO@log_env@747: Client environment:user.name=(null)
2016-09-22 21:23:24,814:1(0x7f310209c700):ZOO_INFO@log_env@755: Client environment:user.home=/root
2016-09-22 21:23:24,814:1(0x7f310209c700):ZOO_INFO@log_env@767: Client environment:user.dir=/
2016-09-22 21:23:24,814:1(0x7f310209c700):ZOO_INFO@zookeeper_init@800: Initiating client connection, host=[zk_host]:2181 sessionTimeout=10000 watcher=0x7f310b5f16d0 sessionId=0 sessionPasswd=<null> context=0x7f30d8000930 flags=0
I0922 21:23:24.815431 1 main.cpp:434] Starting Mesos agent
I0922 21:23:24.816761 12 slave.cpp:198] Agent started on 1)@[slave_host]:5051
I0922 21:23:24.816844 12 slave.cpp:199] Flags at startup: --appc_simple_discovery_uri_prefix="http://" --appc_store_dir="/tmp/mesos/store/appc" --authenticate_http_readonly="false" --authenticate_http_readwrite="false" --authenticatee="crammd5" --authentication_backoff_factor="1secs" --authorizer="local" --cgroups_cpu_enable_pids_and_tids_count="false" --cgroups_enable_cfs="false" --cgroups_hierarchy="/sys/fs/cgroup" --cgroups_limit_swap="false" --cgroups_root="mesos" --container_disk_watch_interval="15secs" --containerizers="docker,mesos" --default_role="*" --disk_watch_interval="1mins" --docker="docker" --docker_kill_orphans="true" --docker_registry="https://registry-1.docker.io" --docker_remove_delay="6hrs" --docker_socket="/var/run/docker.sock" --docker_stop_timeout="0ns" --docker_store_dir="/tmp/mesos/store/docker" --docker_volume_checkpoint_dir="/var/run/mesos/isolators/docker/volume" --enforce_container_disk_quota="false" --executor_registration_timeout="1mins" --executor_shutdown_grace_period="5secs" --fetcher_cache_dir="/tmp/mesos/fetch" --fetcher_cache_size="2GB" --frameworks_home="" --gc_delay="1weeks" --gc_disk_headroom="0.1" --hadoop_home="" --help="false" --hostname_lookup="true" --http_authenticators="basic" --http_command_executor="false" --image_provisioner_backend="copy" --initialize_driver_logging="true" --isolation="posix/cpu,posix/mem" --launcher_dir="/usr/libexec/mesos" --log_dir="/var/log/mesos" --logbufsecs="0" --logging_level="INFO" --master="zk://139.196.217.136:2181/mesos" --oversubscribed_resources_interval="15secs" --perf_duration="10secs" --perf_interval="1mins" --port="5051" --qos_correction_interval_min="0ns" --quiet="false" --recover="reconnect" --recovery_timeout="15mins" --registration_backoff_factor="1secs" --revocable_cpu_low_priority="true" --sandbox_directory="/mnt/mesos/sandbox" --strict="true" --switch_user="false" --systemd_enable_support="true" --systemd_runtime_directory="/run/systemd/system" --version="false" 
--work_dir="/var/tmp/mesos"
I0922 21:23:24.817601 12 slave.cpp:519] Agent resources: cpus(*):1; mem(*):498; disk(*):32200; ports(*):[31000-32000]
I0922 21:23:24.817693 12 slave.cpp:527] Agent attributes: [ ]
I0922 21:23:24.817754 12 slave.cpp:532] Agent hostname: iZuf659w5fbowj61aj370mZ
I0922 21:23:24.820996 15 state.cpp:57] Recovering state from '/var/tmp/mesos/meta'
I0922 21:23:24.826390 9 status_update_manager.cpp:200] Recovering status update manager
I0922 21:23:24.826737 14 docker.cpp:775] Recovering Docker containers
I0922 21:23:24.826841 11 containerizer.cpp:522] Recovering containerizer
I0922 21:23:24.830144 9 provisioner.cpp:253] Provisioner recovery complete
2016-09-22 21:23:24,830:1(0x7f30fe05a700):ZOO_INFO@check_events@1728: initiated connection to server [139.196.217.136:2181]
2016-09-22 21:23:24,832:1(0x7f30fe05a700):ZOO_INFO@check_events@1775: session establishment complete on server [zk_host:2181], sessionId=0x157533ffd4b000d, negotiated timeout=10000
I0922 21:23:24.833214 16 group.cpp:349] Group process (group(1)@[slave_host]:5051) connected to ZooKeeper
I0922 21:23:24.833277 16 group.cpp:837] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
I0922 21:23:24.833339 16 group.cpp:427] Trying to create path '/mesos' in ZooKeeper
I0922 21:23:24.835343 16 detector.cpp:152] Detected a new leader: (id='1')
I0922 21:23:24.835597 16 group.cpp:706] Trying to get '/mesos/json.info_0000000001' in ZooKeeper
I0922 21:23:24.836369 16 zookeeper.cpp:259] A new leading master (UPID=master@[master_host]:5050) is detected
Failed to perform recovery: Collect failed: Failed to run 'docker -H unix:///var/run/docker.sock ps -a': exited with status 1; stderr='Cannot connect to the Docker daemon. Is the docker daemon running on this host?
'
To remedy this do as follows:
Step 1: rm -f /var/tmp/mesos/meta/slaves/latest
This ensures agent doesn't recover old live executors.
Step 2: Restart the agent.
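The two remedy steps printed by the agent can be run as follows (assuming the work dir is bind-mounted at /var/tmp/mesos on the host and the container is named slave, as in the command above):

```shell
# Clear the recovery checkpoint so the agent doesn't try to
# reattach to executors that no longer exist, then restart it.
rm -f /var/tmp/mesos/meta/slaves/latest
docker restart slave
```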
docker-engine is installed, but I still cannot connect to the Docker daemon. It seems the root user may need to be added to the docker group. Is this the problem, or am I missing something?
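One thing worth checking (an observation, not a confirmed fix): the run command above never bind-mounts the host's Docker socket, so the docker client inside the container has no daemon to talk to. A minimal sketch of the same launch with the socket mounted:

```shell
# Sketch: the same launch as above, plus the host Docker socket.
# [zk_host] is a placeholder from the original report.
docker run -d --net=host \
  --privileged \
  -e MESOS_PORT=5051 \
  -e MESOS_MASTER=zk://[zk_host]/mesos \
  -e MESOS_SWITCH_USER=0 \
  -e MESOS_CONTAINERIZERS=docker,mesos \
  -e MESOS_LOG_DIR=/var/log/mesos \
  -e MESOS_WORK_DIR=/var/tmp/mesos \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/log/mesos:/var/log/mesos \
  -v "$(pwd)/tmp/mesos:/var/tmp/mesos" \
  --name slave mesosphere/mesos-slave:1.0.11.0.1-2.0.93.ubuntu1404
```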
Latest mesos-master: 1.0.0-1.0.52.rc1.ubuntu1404 gives the following error when I run it:
mesos-master: error while loading shared libraries: libevent_openssl-2.0.so.5: cannot open shared object file: No such file or directory
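For context, that shared library ships in Ubuntu 14.04's libevent-openssl package; a sketch of adding it on top of the published image (package name is my assumption — verify with `apt-cache search libevent`):

```shell
# Sketch: install the missing libevent OpenSSL bindings, e.g. in a
# derived image's RUN step or via docker exec into the container.
apt-get update
apt-get install -y libevent-openssl-2.0-5
```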
Hi
Today you only have a base image built on Ubuntu (see also: https://registry.hub.docker.com/u/libmesos/ubuntu/):
FROM ubuntu:14.04
MAINTAINER Mesosphere [email protected]
We need the same for CentOS. Can you please provide one?
Thanks.
If you're using Ubuntu 14.04 and the latest stable docker-engine from apt.dockerproject.org, the following error occurs when using the mesos-slave launch snippet from this repo:
Failed to create a containerizer: Could not create DockerContainerizer: Failed to create docker: Failed to get docker version: Failed to execute 'docker -H unix:///var/run/docker.sock --version': exited with status 127
It turns out that two library bind mounts are missing from the mesos-slave launch snippet:
-v /lib/x86_64-linux-gnu/libsystemd-journal.so.0:/lib/x86_64-linux-gnu/libsystemd-journal.so.0 \
-v /usr/lib/x86_64-linux-gnu/libapparmor.so.1:/usr/lib/x86_64-linux-gnu/libapparmor.so.1
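Putting it together, a corrected launch (a sketch based on the snippet from this repo, with the two library mounts added) could look like:

```shell
# Sketch: the repo's mesos-slave launch plus the two library bind
# mounts needed with docker-engine from apt.dockerproject.org on
# Ubuntu 14.04.
docker run -d --net=host --privileged \
  -e MESOS_PORT=5051 \
  -e MESOS_MASTER=zk://127.0.0.1:2181/mesos \
  -e MESOS_SWITCH_USER=0 \
  -e MESOS_CONTAINERIZERS=docker,mesos \
  -e MESOS_LOG_DIR=/var/log/mesos \
  -e MESOS_WORK_DIR=/var/tmp/mesos \
  -v "$(pwd)/log/mesos:/var/log/mesos" \
  -v "$(pwd)/tmp/mesos:/var/tmp/mesos" \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /sys:/sys \
  -v "$(which docker):/usr/local/bin/docker" \
  -v /lib/x86_64-linux-gnu/libsystemd-journal.so.0:/lib/x86_64-linux-gnu/libsystemd-journal.so.0 \
  -v /usr/lib/x86_64-linux-gnu/libapparmor.so.1:/usr/lib/x86_64-linux-gnu/libapparmor.so.1 \
  mesosphere/mesos-slave:0.28.0-2.0.16.ubuntu1404
```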
Here is my docker version:
~$ docker version
Client:
Version: 1.11.2
API version: 1.23
Go version: go1.5.4
Git commit: b9f10c9
Built: Wed Jun 1 21:47:50 2016
OS/Arch: linux/amd64
Server:
Version: 1.11.2
API version: 1.23
Go version: go1.5.4
Git commit: b9f10c9
Built: Wed Jun 1 21:47:50 2016
OS/Arch: linux/amd64
And here is what I am observing:
~$ docker run --net=host --privileged \
-e MESOS_PORT=5051 \
-e MESOS_MASTER=zk://$ZK_HOST:2181/mesos \
-e MESOS_SWITCH_USER=0 \
-e MESOS_LOG_DIR=/var/log/mesos \
-e MESOS_WORK_DIR=/var/tmp/mesos \
-e MESOS_CONTAINERIZERS=docker,mesos \
-v "$(pwd)/log/mesos:/var/log/mesos" \
-v "$(pwd)/tmp/mesos:/var/tmp/mesos" \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /cgroup:/cgroup \
-v /sys:/sys \
-v /usr/local/bin/docker:/usr/local/bin/docker \
-v $(which docker):/bin/docker \
mesosphere/mesos-slave:0.28.0-2.0.16.ubuntu1404
I0705 22:48:21.905763 1 logging.cpp:188] INFO level logging started!
I0705 22:48:21.906471 1 main.cpp:223] Build: 2016-03-17 17:45:11 by root
I0705 22:48:21.906512 1 main.cpp:225] Version: 0.28.0
I0705 22:48:21.906522 1 main.cpp:228] Git tag: 0.28.0
I0705 22:48:21.906532 1 main.cpp:232] Git SHA: 961edbd82e691a619a4c171a7aadc9c32957fa73
Failed to create a containerizer: Could not create DockerContainerizer: Failed to create docker: Failed to get docker version: Failed to execute 'docker -H unix:///var/run/docker.sock --version': exited with status 127
Any idea what could be wrong?
The Mesos Master and Slave README here is out of date. The Slave and Master configuration reference links redirect from open.mesosphere.com and drop you at dcos.io. I looked through the DC/OS docs and can't find anything about how to configure them. I've had to use http://mesos.apache.org/documentation/latest/configuration/ to figure out what flags to pass the slaves for things like roles. Hoping this can get amended, thanks.
Local persistent volumes of type root fail to mount if mesos-agent is run in a Docker container. I assume this is because the mesos-agent process is unaware that it's running inside a container: the local volume mount does not take place on the host machine.
Is there a way to allow the mesos local persistent root volumes to work when running mesos-agent inside the Docker container?
Environment
Marathon: Version 1.7.50
Mesos: mesosphere/mesos:1.7.0
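One workaround worth trying (my assumption, not verified against these versions): bind-mount the agent work dir at the same absolute path on the host and in the container, so volume paths the agent creates also exist where the host Docker daemon resolves them:

```shell
# Sketch: keep /var/tmp/mesos identical inside and outside the agent
# container so persistent-volume paths resolve on the host as well.
# Image name and entrypoint below are placeholders; adjust to your setup.
docker run -d --net=host --privileged \
  -e MESOS_WORK_DIR=/var/tmp/mesos \
  -e MESOS_CONTAINERIZERS=docker,mesos \
  -v /var/tmp/mesos:/var/tmp/mesos \
  -v /var/run/docker.sock:/var/run/docker.sock \
  mesosphere/mesos:1.7.0
```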
Since all executors run in the same container as the mesos-slave, they all die during an upgrade. This makes it impossible to reconnect to executors during upgrades, which wipes all running tasks.
This should be mentioned on docker hub page.
If there is a solution, I'd love to know about it.
docker run --net=host --privileged=true -p 5051:5051 \
-e MESOS_LOG_DIR=/var/log/mesos \
-e MESOS_MASTER=zk://<master-ip>:2181/mesos \
-e MESOS_EXECUTOR_REGISTRATION_TIMEOUT=5mins \
-e MESOS_HOSTNAME=<slave-ip> \
-e MESOS_ISOLATOR=cgroups/cpu,cgroups/mem \
-e MESOS_CONTAINERIZERS=docker,mesos \
-e MESOS_PORT=5051 \
-e MESOS_IP=<slave-ip> \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /cgroup:/cgroup \
-v /sys:/sys \
-v /proc:/proc \
-v /usr/bin/docker:/usr/bin/docker \
-t mesosphere/mesos-slave:0.22.0-1.0.ubuntu1404
Here is the output on CentOS 7 & docker version 1.6.0:
I0614 05:20:06.284247 1 logging.cpp:172] INFO level logging started!
I0614 05:20:06.284539 1 main.cpp:156] Build: 2015-03-26 13:35:27 by root
I0614 05:20:06.284554 1 main.cpp:158] Version: 0.22.0
I0614 05:20:06.284560 1 main.cpp:161] Git tag: 0.22.0
I0614 05:20:06.284566 1 main.cpp:165] Git SHA: e890e2414903bb69cab730d5204f10b10d2e91bb
Failed to create a containerizer: Could not create DockerContainerizer: Failed to execute 'docker version': exited with status 127
Slave works fine without -e MESOS_CONTAINERIZERS=docker,mesos
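Exit status 127 generally means the shell inside the slave container could not find or load the docker binary. A quick diagnostic sketch (container name is a placeholder):

```shell
# Is the docker client visible inside the slave container, and do all
# of its shared libraries resolve? (status 127 = not found/not loadable)
docker exec <slave-container> sh -c 'which docker && ldd "$(which docker)"'
```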
Could someone please build images for mesos-master & mesos-slave 0.23.0? It would allow easier testing of current Marathon against 0.23.0 and should also fix #6 .
Hi
I tried to run the mesos-slave container on both Red Hat and Ubuntu, but I got this error:
I0215 11:10:06.103756 1 logging.cpp:188] INFO level logging started!
I0215 11:10:06.104004 1 main.cpp:223] Build: 2016-03-17 17:45:11 by root
I0215 11:10:06.104017 1 main.cpp:225] Version: 0.28.0
I0215 11:10:06.104022 1 main.cpp:228] Git tag: 0.28.0
I0215 11:10:06.104027 1 main.cpp:232] Git SHA: 961edbd82e691a619a4c171a7aadc9c32957fa73
SELinux: Could not open policy file <= /etc/selinux/targeted/policy/policy.29: No such file or directory
Failed to create a containerizer: Could not create DockerContainerizer: Failed to create docker: Failed to get docker version: Failed to execute 'docker -H unix:///var/run/docker.sock --version': exited with status 127
OS X El Capitan 10.11.5
$ docker --version
Docker version 1.12.0-rc2, build 906eacd, experimental
Steps To Reproduce:
Actual Result:
Mesos slave and mesos master both run and exit with this result
$ docker logs mesos_master
Failed to obtain the IP address for 'moby'; the DNS service may not be able to resolve it: Name or service not known
I tried to connect my CLI to a master with tag 1.0.11.0.1-2.0.93.ubuntu1404,
but I got this error:
This version of the DC/OS CLI is not supported for your cluster. Please downgrade the CLI to an older version: https://dcos.io/docs/usage/cli/update/#downgrade
Which client version do I have to install?
Mesoscloud docker files (https://github.com/mesoscloud) don't include the newest versions of:
There are also some disadvantages of the mesoscloud images:
In one Docker container I had installed mesos and marathon through your Debian repo.
In one tmux pane I launched mesos-master --registry=in_memory - the web UI works on port 5050. In a second pane I tried to launch marathon, but its web UI doesn't work.
Can you help?
Can't start a mesos-slave with the MESOS_CONTAINERIZERS param.
My setup:
Ubuntu 16.04.1 LTS
Client:
Version: 1.11.2
API version: 1.23
Go version: go1.6.2
Git commit: b9f10c9
Built: Thu, 16 Jun 2016 21:17:51 +1200
OS/Arch: linux/amd64
Server:
Version: 1.11.2
API version: 1.23
Go version: go1.6.2
Git commit: b9f10c9
Built: Thu, 16 Jun 2016 21:17:51 +1200
OS/Arch: linux/amd64
docker run -d \
-e MESOS_HOSTNAME=127.0.0.1 \
-e MESOS_IP=127.0.0.1 \
-e MESOS_PORT=5051 \
-e MESOS_MASTER=zk://127.0.0.1:2181/mesos \
-e MESOS_SWITCH_USER=0 \
-e MESOS_CONTAINERIZERS=docker,mesos \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /sys/fs/cgroup:/sys/fs/cgroup \
-v /usr/local/bin/docker:/usr/local/bin/docker \
--name mesos-slave --net host --privileged --restart always \
mesosphere/mesos-slave:0.28.0-2.0.16.ubuntu1404
I1220 10:54:55.779671 1 main.cpp:223] Build: 2016-03-17 17:45:11 by root
I1220 10:54:55.779742 1 main.cpp:225] Version: 0.28.0
I1220 10:54:55.779747 1 main.cpp:228] Git tag: 0.28.0
I1220 10:54:55.779748 1 main.cpp:232] Git SHA: 961edbd82e691a619a4c171a7aadc9c32957fa73
Failed to create a containerizer: Could not create DockerContainerizer: Failed to create docker: Failed to get docker version: Failed to execute 'docker -H unix:///var/run/docker.sock --version': exited with status 127
It was released on Nov 21; it would be nice to see Docker containers for it.
Bridge networking with docker is a very nice feature to have.
Mesos-slave in docker creates zombie apocalypse:
web33 ~ # docker exec -it 23bc904c2842 ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.8 0.1 889288 13504 ? Ssl Feb09 76:38 mesos-slave --containerizers=docker,mesos --hostname=web33 --ip=192.168.0.34 --master=zk://web488:2181,web489:2181,
root 77 0.0 0.0 0 0 ? Z Feb09 0:00 [docker] <defunct>
root 122 0.0 0.0 0 0 ? Z Feb09 0:00 [docker] <defunct>
root 139 0.0 0.0 0 0 ? Z Feb09 0:00 [sh] <defunct>
root 189 0.0 0.0 0 0 ? Z Feb09 0:00 [docker] <defunct>
root 205 0.0 0.0 0 0 ? Z Feb09 0:00 [sh] <defunct>
(... snip: several hundred more defunct [docker], [sh], and [sleep] entries, accumulating daily from Feb09 through Feb14 ...)
root 8013 0.0 0.0 4440 656 ? Ss Feb14 0:00 sh -c /usr/libexec/mesos/mesos-executor --override /bin/sh -c 'exit `docker wait mesos-8ac0ef76-c84f-41ad-83ae-67cc
root 8014 0.0 0.0 4440 648 ? S Feb14 0:00 sh -c logs() { docker logs --follow $1 & pid=$! docker wait $1 >/dev/null 2>&1 sleep 10 kill -TERM $pid >
root 8015 0.2 0.1 733268 10788 ? Sl Feb14 2:30 /usr/libexec/mesos mesos-executor --override /bin/sh -c exit `docker wait mesos-8ac0ef76-c84f-41ad-83ae-67ccabef0ce
root 8016 0.0 0.0 193924 6680 ? Sl Feb14 0:00 docker logs --follow mesos-8ac0ef76-c84f-41ad-83ae-67ccabef0cec
root 8017 0.0 0.0 202120 6484 ? Sl Feb14 0:00 docker wait mesos-8ac0ef76-c84f-41ad-83ae-67ccabef0cec
root 8035 0.0 0.0 4440 656 ? Ss Feb14 0:00 /bin/sh -c exit `docker wait mesos-8ac0ef76-c84f-41ad-83ae-67ccabef0cec`
root 8039 0.0 0.0 202120 6508 ? Sl Feb14 0:00 docker wait mesos-8ac0ef76-c84f-41ad-83ae-67ccabef0cec
root 8057 0.0 0.0 0 0 ? Z Feb14 0:00 [sh] <defunct>
root 8076 0.0 0.0 15564 1136 ? R+ 12:39 0:00 ps aux
We can reuse init from phusion/baseimage to properly reap child processes.
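phusion/baseimage's init is one option; alternatively (a sketch, assuming Docker 1.13+ where the flag exists), Docker's built-in --init runs a minimal init as PID 1 and reaps the defunct children:

```shell
# Sketch: run the slave under a minimal init (tini via --init) so
# zombie children are reaped instead of piling up.
docker run -d --init --net=host --privileged \
  -e MESOS_CONTAINERIZERS=docker,mesos \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$(which docker):/usr/local/bin/docker" \
  mesosphere/mesos-slave:0.28.0-2.0.16.ubuntu1404
```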
Hi,
I have been playing with mesos-master and mesos-slave with the goal of running them at production grade. Everything went fine with this documentation: https://github.com/mesosphere/docker-containers/tree/master/mesos... But! On mesos-slave failure, recovery would fail and leave orphaned Docker containers.
I tried using the docker_mesos_image option to also run the executor inside a Docker container, but it failed: the mesos-slave image has an entrypoint ("mesos-slave"), while the executor is launched via a command, so the command executed inside the executor container ends up looking like this:
mesos-slave /var/lib/mesos-executor --blabla...
The solution I came up with is the following:
That way, everything recovers well after a failure, and there are no orphan containers left.
Hopefully it helps.
Hi,
In your README, you give the following instructions for launching a Mesos agent node from your Docker image:
docker run -d --net=host --privileged \
-e MESOS_PORT=5051 \
-e MESOS_MASTER=zk://127.0.0.1:2181/mesos \
-e MESOS_SWITCH_USER=0 \
-e MESOS_CONTAINERIZERS=docker,mesos \
-e MESOS_LOG_DIR=/var/log/mesos \
-e MESOS_WORK_DIR=/var/tmp/mesos \
-v "$(pwd)/log/mesos:/var/log/mesos" \
-v "$(pwd)/tmp/mesos:/var/tmp/mesos" \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /cgroup:/cgroup \
-v /sys:/sys \
-v /usr/local/bin/docker:/usr/local/bin/docker \
mesosphere/mesos-slave:0.28.0-2.0.16.ubuntu1404
According to these instructions, the agent container is launched in privileged mode. However, in practice I did not find any missing or faulty behavior when omitting the --privileged flag.
The Docker docs say the following about privileged containers:
When the operator executes docker run --privileged, Docker will enable access to all devices on the host as well as set some configuration in AppArmor or SELinux to allow the container nearly all the same access to the host as processes running outside containers on the host.
So, adhering to the principle of least privilege, a container should only run in privileged mode if it absolutely has to, and we should consider updating the instructions. The run configuration I'd propose:
docker run -d --net=host \
-e MESOS_PORT=5051 \
-e MESOS_MASTER=zk://127.0.0.1:2181/mesos \
-e MESOS_SWITCH_USER=0 \
-e MESOS_CONTAINERIZERS=docker,mesos \
-e MESOS_LOG_DIR=/var/log/mesos \
-e MESOS_WORK_DIR=/var/tmp/mesos \
-v "$(pwd)/log/mesos:/var/log/mesos" \
-v "$(pwd)/tmp/mesos:/var/tmp/mesos" \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /cgroup:/cgroup \
-v /sys:/sys \
-v /usr/local/bin/docker:/usr/local/bin/docker \
mesosphere/mesos-slave:0.28.0-2.0.16.ubuntu1404
Do you agree with this or can you otherwise elaborate on why the status quo is correct from your point of view?
Thanks :)
Hi there,
I am using RHEL 7 and the mesos-master works fine.
However when I launch the mesos-slave I get the error:
SELinux: Could not open policy file <= /etc/selinux/targeted/policy/policy.30: No such file or directory
Failed to create a containerizer: Could not create DockerContainerizer: Failed to create docker: Failed to get docker version: Failed to execute 'docker -H unix:///var/run/docker.sock --version': exited with status 127
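For context (an assumption about the failure mode, not confirmed against the image): exit status 127 is the shell's "command not found or not executable" code, so the docker binary the containerizer invokes is most likely missing inside the slave container, or present but unable to start because a shared library it needs is not there. The status code itself is easy to reproduce:

```shell
# POSIX shells return 127 when the command to execute cannot be found.
sh -c 'definitely-not-a-real-command' 2>/dev/null
echo $?   # prints 127
```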
Can someone please give me a hint on what's wrong?
I tried different versions of the image, and the only one that worked was the first one (0.19.1).
Thanks a lot!
Any chance to get a Docker image for Mesos 0.25?
It would be great to have official container for chronos too.
Hi,
It seems that the latest mesosphere/mesos-slave:0.26.0-0.2.145.ubuntu1404 image has an issue with Docker 1.10.
Using this compose file to launch Mesos+Marathon:
zk:
  image: bobrik/zookeeper
  net: host
  environment:
    ZK_CONFIG: tickTime=2000,initLimit=10,syncLimit=5,maxClientCnxns=128,forceSync=no,clientPort=2181
    ZK_ID: 1
master:
  image: mesosphere/mesos-master:0.26.0-0.2.145.ubuntu1404
  net: host
  environment:
    MESOS_ZK: zk://127.0.0.1:2181/mesos
    MESOS_HOSTNAME: 127.0.0.1
    MESOS_IP: 127.0.0.1
    MESOS_QUORUM: 1
    MESOS_CLUSTER: docker-compose
    MESOS_WORK_DIR: /var/lib/mesos
slave:
  image: mesosphere/mesos-slave:0.26.0-0.2.145.ubuntu1404
  net: host
  pid: host
  privileged: true
  environment:
    MESOS_MASTER: zk://127.0.0.1:2181/mesos
    MESOS_HOSTNAME: 127.0.0.1
    MESOS_IP: 127.0.0.1
    MESOS_CONTAINERIZERS: docker,mesos
  volumes:
    - /sys/fs/cgroup:/sys/fs/cgroup
    - /usr/bin/docker:/usr/bin/docker:ro
    - /usr/lib/x86_64-linux-gnu/libapparmor.so.1:/usr/lib/x86_64-linux-gnu/libapparmor.so.1:ro
    - /var/run/docker.sock:/var/run/docker.sock
marathon:
  image: mesosphere/marathon:v0.13.0
  net: host
  environment:
    MARATHON_MASTER: zk://127.0.0.1:2181/mesos
    MARATHON_ZK: zk://127.0.0.1:2181/marathon
    MARATHON_HOSTNAME: 127.0.0.1
  command: --event_subscriber http_callback
The Mesos slave exits with an error when it tries to query the Docker version:
I0210 13:11:48.041368 18353 main.cpp:190] Build: 2015-12-16 23:04:39 by root
I0210 13:11:48.047538 18353 main.cpp:192] Version: 0.26.0
I0210 13:11:48.047550 18353 main.cpp:195] Git tag: 0.26.0
I0210 13:11:48.047561 18353 main.cpp:199] Git SHA: d3717e5c4d1bf4fca5c41cd7ea54fae489028faa
Failed to create a containerizer: Could not create DockerContainerizer: Failed to create docker: Failed to get docker version: Failed to execute 'docker -H unix:///var/run/docker.sock --version': exited with status 127
$ docker version
Client:
Version: 1.10.0
API version: 1.22
Go version: go1.5.3
Git commit: 590d5108
Built: Thu Feb 4 18:36:33 2016
OS/Arch: linux/amd64
Server:
Version: 1.10.0
API version: 1.22
Go version: go1.5.3
Git commit: 590d5108
Built: Thu Feb 4 18:36:33 2016
OS/Arch: linux/amd64
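A plausible explanation (an assumption based on how Docker 1.10 was packaged, not verified against this image): the 1.10 client binary is dynamically linked, so bind-mounting /usr/bin/docker alone leaves its shared-library dependencies unresolved inside the container, and the loader failure surfaces as the exit status 127 above. The compose file already mounts libapparmor.so.1, but 1.10 may need additional libraries. You can list what a binary needs with ldd:

```shell
# List the shared libraries a binary needs; every one of them must resolve
# inside the container for a bind-mounted binary to run.
# (Shown on /bin/sh here; substitute /usr/bin/docker on the host.)
ldd /bin/sh
```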