
rak8s's Introduction

rak8s

(pronounced rackets - /ˈrækɪts/)

Stand up a Raspberry Pi based Kubernetes cluster with Ansible

rak8s is maintained by Chris Short and a community of open source folks willing to help.

Why?

  • Raspberry Pis are rad
  • Ansible is awesome
  • Kubernetes is keen

ARM is going to be the datacenter and home computing platform of the future. It makes a lot of sense to start getting used to working in its unique environment.

Also, it's cheaper than a year of GKE. Plus, why not run Kubernetes in your home?

Prerequisites

Hardware

  • Raspberry Pi 3 (3 or more)
  • Class 10 SD Cards
  • Network connection (wireless or wired) with access to the internet

Software

  • Raspbian Lite (installed on each Raspberry Pi)

  • Raspberry Pis should have static IPs

    • Requirement for Kubernetes and Ansible inventory
    • You can set these via OS configuration or DHCP reservations (your choice)
  • Ability to SSH into all Raspberry Pis and escalate privileges with sudo

    • The pi user is fine
    • Please change the pi user's password
  • Ansible 2.7.1 or higher

  • kubectl should be available on the system you intend to use to interact with the Kubernetes cluster.

    • If you are going to log in to one of the Raspberry Pis to interact with the cluster, kubectl is installed and configured by default on the Kubernetes master.
    • If you are administering the cluster from a remote machine (your laptop, desktop, server, bastion host, etc.), kubectl will not be installed on that machine, but it will be configured to interact with the newly built cluster once you install kubectl there.
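
Once kubectl is installed on your workstation, pointing it at the cluster generally looks like this (a minimal sketch; the kubeconfig path below is illustrative, so substitute wherever the playbook fetched the file to, or copy /etc/kubernetes/admin.conf from the master yourself):

# Illustrative path; use the location of the kubeconfig fetched for this cluster
export KUBECONFIG=$HOME/rak8s/kubeconfig
kubectl get nodes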

Recommendations

  • Set up SSH key pairs so your password is not required every time Ansible runs
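
One common way to do this (a sketch; the user and addresses are examples, substitute your own):

# Generate a key pair on the machine you run Ansible from (skip if you already have one)
ssh-keygen -t ed25519
# Copy the public key to every Raspberry Pi in your inventory
ssh-copy-id pi@192.168.1.101
ssh-copy-id pi@192.168.1.102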

Stand Up Your Kubernetes Cluster

Download the latest release or clone the repo:

git clone https://github.com/rak8s/rak8s.git

Modify ansible.cfg and inventory

Modify the inventory file to suit your environment. Change the names to your liking and the IPs to the addresses of your Raspberry Pis.

If your SSH user on the Raspberry Pis is not the Raspbian default pi user, modify remote_user in ansible.cfg.
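
As a rough sketch of what those edits might look like (the group names, host names, and IPs here are only examples; keep the structure of the shipped inventory file):

# inventory (example)
[master]
pik8s000 ansible_host=192.168.1.50

[nodes]
pik8s001 ansible_host=192.168.1.51
pik8s002 ansible_host=192.168.1.52

# ansible.cfg (only if your SSH user is not pi)
[defaults]
remote_user = myuser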

Confirm Ansible is working with your Raspberry Pis:

ansible -m ping all

This may fail if you have not set up SSH keys and have only configured your Pis with passwords.
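
If you have not distributed SSH keys yet, you can still check connectivity by having Ansible prompt for the SSH password (this requires sshpass on the machine running Ansible):

ansible all -m ping --ask-pass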

Deploy, Deploy, Deploy

ansible-playbook cluster.yml

Interact with Kubernetes

CLI

Test your Kubernetes cluster is up and running:

kubectl get nodes

The output should look something like this:

NAME       STATUS    ROLES     AGE       VERSION
pik8s000   Ready     master    2d        v1.9.1
pik8s001   Ready     <none>    2d        v1.9.1
pik8s002   Ready     <none>    2d        v1.9.1
pik8s003   Ready     <none>    2d        v1.9.1
pik8s005   Ready     <none>    2d        v1.9.1
pik8s004   Ready     <none>    2d        v1.9.1

Dashboard

rak8s installs the non-HTTPS version of the Kubernetes dashboard. This is not recommended for production clusters, but it simplifies the setup. Access the dashboard by running:

kubectl proxy

Then open a web browser and navigate to: http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

Need to Start Over?

Did something go wrong? Did nodes fail part of the process or not join the cluster? Did an apt upgrade break Docker versions?

Try the process again from the beginning:

ansible-playbook cleanup.yml

Wait for everything to run and then start again with:

ansible-playbook cluster.yml

Where to Get Help

If you run into any problems, please join our welcoming Discourse community. If you find a bug, please open an issue; pull requests are always welcome.

Etymology

rak8s (pronounced rackets - /ˈrækɪts/)

Coined by Kendrick Coleman on 13 Jan 2018

References & Credits

These playbooks were assembled using a handful of very helpful guides.

A very special thanks to Alex Ellis and the OpenFaaS community for their assistance in answering questions and making sense of some errors.

Media Coverage

rak8s's People

Contributors

asachs01, chris-short, clcollins, hectcastro, jaevans, jimhopkinsjr, jmeridth, n00tz, nicholasburr, sam-kleiner, sbaeurle, techwilk, tedsluis, tomtom215, vielmetti

rak8s's Issues

do not install

This caused a kernel panic on all of my Pis.
Do not install!

Fail to init masternode

During the Initialize Master step in the master role I get this error.

TASK [master : Initialize Master] ******************************************************
[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using 
`result|succeeded` instead use `result is succeeded`. This feature will be removed in 
version 2.9. Deprecation warnings can be disabled by setting deprecation_warnings=False
 in ansible.cfg.

fatal: [pik8s005]: FAILED! => {"changed": true, "cmd": "kubeadm init --apiserver-advertise-address=192.168.1.169 --token=udy29x.ugyyk3tumg27atmr", "delta": "0:00:02.114557", "end": "2018-04-07 20:57:22.340316", "msg": "non-zero return code", "rc": 2, "start": "2018-04-07 20:57:20.225759", "stderr": "\t[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.03.0-ce. Max validated version: 17.03\n\t[WARNING FileExisting-crictl]: crictl not found in system path\nSuggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl\n[preflight] Some fatal errors occurred:\n\t[ERROR SystemVerification]: missing cgroups: memory\n[preflight] If you know what you are doing, you can make a check non-fatal with--ignore-preflight-errors=...", "stderr_lines": ["\t[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.03.0-ce. Max validated version: 17.03", "\t[WARNING FileExisting-crictl]: crictl not found in system path", "Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl", "[preflight] Some fatal errors occurred:", "\t[ERROR SystemVerification]: missing cgroups: memory", "[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=..."], "stdout": "[init] Using Kubernetes version: v1.10.0\n[init] Using Authorization modes: [Node RBAC]\n[preflight] Running pre-flight checks.\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m4.14.30-v7+\u001b[0m\n\u001b[0;37mCONFIG_NAMESPACES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_NET_NS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_PID_NS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_IPC_NS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_UTS_NS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_CGROUPS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_CGROUP_CPUACCT\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_CGROUP_DEVICE\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_CGROUP_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_CGROUP_SCHED\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_CPUSETS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_MEMCG\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_INET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_EXT4_FS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_PROC_FS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_NETFILTER_XT_TARGET_REDIRECT\u001b[0m: \u001b[0;32menabled (as module)\u001b[0m\n\u001b[0;37mCONFIG_NETFILTER_XT_MATCH_COMMENT\u001b[0m: \u001b[0;32menabled (as module)\u001b[0m\n\u001b[0;37mCONFIG_OVERLAY_FS\u001b[0m: \u001b[0;32menabled (as module)\u001b[0m\n\u001b[0;37mCONFIG_AUFS_FS\u001b[0m: \u001b[0;33mnot set - Required for aufs.\u001b[0m\n\u001b[0;37mCONFIG_BLK_DEV_DM\u001b[0m: \u001b[0;32menabled (as module)\u001b[0m\n\u001b[0;37mDOCKER_VERSION\u001b[0m: \u001b[0;32m18.03.0-ce\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUACCT\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: 
\u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;31mmissing\u001b[0m", "stdout_lines": ["[init] Using Kubernetes version: v1.10.0", "[init] Using Authorization modes: [Node RBAC]", "[preflight] Running pre-flight checks.", "[preflight] The system verification failed. Printing the output from the verification:", "\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m4.14.30-v7+\u001b[0m", "\u001b[0;37mCONFIG_NAMESPACES\u001b[0m: \u001b[0;32menabled\u001b[0m", "\u001b[0;37mCONFIG_NET_NS\u001b[0m: \u001b[0;32menabled\u001b[0m", "\u001b[0;37mCONFIG_PID_NS\u001b[0m: \u001b[0;32menabled\u001b[0m", "\u001b[0;37mCONFIG_IPC_NS\u001b[0m: \u001b[0;32menabled\u001b[0m", "\u001b[0;37mCONFIG_UTS_NS\u001b[0m: \u001b[0;32menabled\u001b[0m", "\u001b[0;37mCONFIG_CGROUPS\u001b[0m: \u001b[0;32menabled\u001b[0m", "\u001b[0;37mCONFIG_CGROUP_CPUACCT\u001b[0m: \u001b[0;32menabled\u001b[0m", "\u001b[0;37mCONFIG_CGROUP_DEVICE\u001b[0m: \u001b[0;32menabled\u001b[0m", "\u001b[0;37mCONFIG_CGROUP_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m", "\u001b[0;37mCONFIG_CGROUP_SCHED\u001b[0m: \u001b[0;32menabled\u001b[0m", "\u001b[0;37mCONFIG_CPUSETS\u001b[0m: \u001b[0;32menabled\u001b[0m", "\u001b[0;37mCONFIG_MEMCG\u001b[0m: \u001b[0;32menabled\u001b[0m", "\u001b[0;37mCONFIG_INET\u001b[0m: \u001b[0;32menabled\u001b[0m", "\u001b[0;37mCONFIG_EXT4_FS\u001b[0m: \u001b[0;32menabled\u001b[0m", "\u001b[0;37mCONFIG_PROC_FS\u001b[0m: \u001b[0;32menabled\u001b[0m", "\u001b[0;37mCONFIG_NETFILTER_XT_TARGET_REDIRECT\u001b[0m: \u001b[0;32menabled (as module)\u001b[0m", "\u001b[0;37mCONFIG_NETFILTER_XT_MATCH_COMMENT\u001b[0m: \u001b[0;32menabled (as module)\u001b[0m", "\u001b[0;37mCONFIG_OVERLAY_FS\u001b[0m: \u001b[0;32menabled (as module)\u001b[0m", "\u001b[0;37mCONFIG_AUFS_FS\u001b[0m: \u001b[0;33mnot set - Required for aufs.\u001b[0m", "\u001b[0;37mCONFIG_BLK_DEV_DM\u001b[0m: \u001b[0;32menabled (as module)\u001b[0m", "\u001b[0;37mDOCKER_VERSION\u001b[0m: \u001b[0;32m18.03.0-ce\u001b[0m", "\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m", "\u001b[0;37mCGROUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m", "\u001b[0;37mCGROUPS_CPUACCT\u001b[0m: \u001b[0;32menabled\u001b[0m", "\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m", "\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m", "\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m", "\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;31mmissing\u001b[0m"]}
All of the steps before worked fine, but now I'm stuck on this one.

Please help us @chris-short you're our only hope.

Fresh install failed on /proc/sys/net/bridge/bridge-nf-call-iptables issue and missing cgroups memory

OS running on Ansible host:

pi@ansible-host ~/git/rak8s $  uname -a
Linux ansible-host-5 4.9.35-v7+ #1014 SMP Fri Jun 30 14:47:43 BST 2017 armv7l GNU/Linux

Ansible Version (ansible --version):

pi@ansible-host ~/git/rak8s $ ansible --version
ansible 2.2.0.0
  config file = /home/pi/git/rak8s/ansible.cfg
  configured module search path = Default w/o overrides

Uploaded logs showing errors (rak8s/.log/ansible.log):

2 runs:

  • First failed on TASK [common : Pass bridged IPv4 traffic to iptables' chains].
  • Second failed on TASK [master : Initialize Master].
pi@ansible-host ~/git/rak8s $ ansible-playbook cluster.yml 

PLAY [all] *********************************************************************

TASK [setup] *******************************************************************
ok: [node1]
ok: [node2]
ok: [master]

TASK [common : Enabling cgroup options at boot] ********************************
changed: [node1]
changed: [master]
changed: [node2]

TASK [common : Pass bridged IPv4 traffic to iptables' chains] ******************
fatal: [node1]: FAILED! => {"changed": false, "failed": true, "msg": "Failed to reload sysctl: sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory\n"}
fatal: [master]: FAILED! => {"changed": false, "failed": true, "msg": "Failed to reload sysctl: sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory\n"}
fatal: [node2]: FAILED! => {"changed": false, "failed": true, "msg": "Failed to reload sysctl: sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory\n"}

PLAY RECAP *********************************************************************
master                     : ok=2    changed=1    unreachable=0    failed=1   
node1                      : ok=2    changed=1    unreachable=0    failed=1   
node2                      : ok=2    changed=1    unreachable=0    failed=1   


pi@ansible-host ~/git/rak8s $ ansible-playbook cluster.yml 

PLAY [all] *********************************************************************

TASK [setup] *******************************************************************
ok: [node1]
ok: [node2]
ok: [master]

TASK [common : Enabling cgroup options at boot] ********************************
ok: [node2]
ok: [node1]
ok: [master]

TASK [common : Pass bridged IPv4 traffic to iptables' chains] ******************
ok: [node1]
ok: [master]
ok: [node2]

TASK [common : apt-get update] *************************************************
ok: [node2]
ok: [node1]
ok: [master]

TASK [common : apt-get upgrade] ************************************************
ok: [node2]
ok: [node1]
ok: [master]

TASK [common : Reboot] *********************************************************
skipping: [master]
skipping: [node1]
skipping: [node2]

TASK [common : Wait for Reboot] ************************************************
skipping: [master]
skipping: [node1]
skipping: [node2]

TASK [kubeadm : Disable Swap] **************************************************
changed: [master]
changed: [node1]
changed: [node2]

TASK [kubeadm : Determine if docker is installed] ******************************
ok: [master]
ok: [node2]
ok: [node1]

TASK [kubeadm : Run Docker Install Script] *************************************
changed: [node2]
changed: [node1]
changed: [master]

TASK [kubeadm : Install apt-transport-https] ***********************************
ok: [master]
ok: [node1]
ok: [node2]

TASK [kubeadm : Add Google Cloud Repo Key] *************************************
changed: [master]
 [WARNING]: Consider using get_url or uri module rather than running curl

changed: [node2]
changed: [node1]

TASK [kubeadm : Add Kubernetes to Available apt Sources] ***********************
changed: [node1]
changed: [master]
changed: [node2]

TASK [kubeadm : apt-get update] ************************************************
changed: [node2]
changed: [node1]
changed: [master]

TASK [kubeadm : Install k8s Y'all] *********************************************
changed: [node2] => (item=[u'kubelet', u'kubeadm', u'kubectl'])
changed: [node1] => (item=[u'kubelet', u'kubeadm', u'kubectl'])
changed: [master] => (item=[u'kubelet', u'kubeadm', u'kubectl'])

PLAY [master] ******************************************************************

TASK [master : Reset Kubernetes Master] ****************************************
changed: [master]

TASK [master : Initialize Master] **********************************************
fatal: [master]: FAILED! => {"changed": true, "cmd": "kubeadm init --apiserver-advertise-address=192.168.11.210 --token=udy29x.ugyyk3tumg27atmr", "delta": "0:00:02.406248", "end": "2018-05-11 20:32:27.185350", "failed": true, "rc": 2, "start": "2018-05-11 20:32:24.779102", "stderr": "\t
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.05.0-ce. Max validated version: 17.03\n\t
[WARNING FileExisting-crictl]: crictl not found in system path\nSuggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl\n[preflight] Some fatal errors occurred:\n\t
[ERROR SystemVerification]: missing cgroups: memory\n[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`", "stdout": "
[init] Using Kubernetes version: v1.10.2\n[init] Using Authorization modes: [Node RBAC]\n[preflight] Running pre-flight checks.\n[preflight] 
The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m4.14.34-v7+\u001b[0m\n\u001b[0;37mCONFIG_NAMESPACES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_NET_NS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_PID_NS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_IPC_NS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_UTS_NS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_CGROUPS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_CGROUP_CPUACCT\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_CGROUP_DEVICE\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_CGROUP_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_CGROUP_SCHED\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_CPUSETS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_MEMCG\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_INET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_EXT4_FS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_PROC_FS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_NETFILTER_XT_TARGET_REDIRECT\u001b[0m: \u001b[0;32menabled (as module)\u001b[0m\n\u001b[0;37mCONFIG_NETFILTER_XT_MATCH_COMMENT\u001b[0m: \u001b[0;32menabled (as module)\u001b[0m\n\u001b[0;37mCONFIG_OVERLAY_FS\u001b[0m: \u001b[0;32menabled (as module)\u001b[0m\n\u001b[0;37mCONFIG_AUFS_FS\u001b[0m: \u001b[0;33mnot set - Required for aufs.\u001b[0m\n\u001b[0;37mCONFIG_BLK_DEV_DM\u001b[0m: \u001b[0;32menabled (as module)\u001b[0m\n\u001b[0;37mDOCKER_VERSION\u001b[0m: \u001b[0;32m18.05.0-ce\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUACCT\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;31mmissing\u001b[0m", "stdout_lines": ["
[init] Using Kubernetes version: v1.10.2", "[init] Using Authorization modes: [Node RBAC]", "[preflight] Running pre-flight checks.", "
[preflight] The system verification failed. Printing the output from the verification:", "\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m4.14.34-v7+\u001b[0m", "\u001b[0;37mCONFIG_NAMESPACES\u001b[0m: \u001b[0;32menabled\u001b[0m", "\u001b[0;37mCONFIG_NET_NS\u001b[0m: \u001b[0;32menabled\u001b[0m", "\u001b[0;37mCONFIG_PID_NS\u001b[0m: \u001b[0;32menabled\u001b[0m", "\u001b[0;37mCONFIG_IPC_NS\u001b[0m: \u001b[0;32menabled\u001b[0m", "\u001b[0;37mCONFIG_UTS_NS\u001b[0m: \u001b[0;32menabled\u001b[0m", "\u001b[0;37mCONFIG_CGROUPS\u001b[0m: \u001b[0;32menabled\u001b[0m", "\u001b[0;37mCONFIG_CGROUP_CPUACCT\u001b[0m: \u001b[0;32menabled\u001b[0m", "\u001b[0;37mCONFIG_CGROUP_DEVICE\u001b[0m: \u001b[0;32menabled\u001b[0m", "\u001b[0;37mCONFIG_CGROUP_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m", "\u001b[0;37mCONFIG_CGROUP_SCHED\u001b[0m: \u001b[0;32menabled\u001b[0m", "\u001b[0;37mCONFIG_CPUSETS\u001b[0m: \u001b[0;32menabled\u001b[0m", "\u001b[0;37mCONFIG_MEMCG\u001b[0m: \u001b[0;32menabled\u001b[0m", "\u001b[0;37mCONFIG_INET\u001b[0m: \u001b[0;32menabled\u001b[0m", "\u001b[0;37mCONFIG_EXT4_FS\u001b[0m: \u001b[0;32menabled\u001b[0m", "\u001b[0;37mCONFIG_PROC_FS\u001b[0m: \u001b[0;32menabled\u001b[0m", "\u001b[0;37mCONFIG_NETFILTER_XT_TARGET_REDIRECT\u001b[0m: \u001b[0;32menabled (as module)\u001b[0m", "\u001b[0;37mCONFIG_NETFILTER_XT_MATCH_COMMENT\u001b[0m: \u001b[0;32menabled (as module)\u001b[0m", "\u001b[0;37mCONFIG_OVERLAY_FS\u001b[0m: \u001b[0;32menabled (as module)\u001b[0m", "\u001b[0;37mCONFIG_AUFS_FS\u001b[0m: \u001b[0;33mnot set - Required for aufs.\u001b[0m", "\u001b[0;37mCONFIG_BLK_DEV_DM\u001b[0m: \u001b[0;32menabled (as module)\u001b[0m", "\u001b[0;37mDOCKER_VERSION\u001b[0m: \u001b[0;32m18.05.0-ce\u001b[0m", "\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m", "\u001b[0;37mCGROUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m", "\u001b[0;37mCGROUPS_CPUACCT\u001b[0m: \u001b[0;32menabled\u001b[0m", "\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m", "\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m", "\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m", "\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;31mmissing\u001b[0m"], "warnings": []}

PLAY RECAP *********************************************************************
master                     : ok=14   changed=7    unreachable=0    failed=1   
node1                      : ok=13   changed=6    unreachable=0    failed=0   
node2                      : ok=13   changed=6    unreachable=0    failed=0 

Raspberry Pi Hardware Version:

3B+

Raspberry Pi OS & Version (cat /etc/os-release):

$ cat /etc/os-release
PRETTY_NAME="Raspbian GNU/Linux 9 (stretch)"
NAME="Raspbian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=raspbian
ID_LIKE=debian
HOME_URL="http://www.raspbian.org/"
SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"

$ uname -a
Linux master 4.14.34-v7+ #1110 SMP Mon Apr 16 15:18:51 BST 2018 armv7l GNU/Linux

Detailed description of the 3 issues:

I started this ansible-playbook cluster.yml install on a set of fresh Raspberry Pis (new Raspbian Lite image, release date 2018-04-18). I ran into the following issues:

    1. On the first attempt I ran into this error: Failed to reload sysctl: sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory.
    2. On the second attempt I ran into this error: SystemVerification: missing cgroups: memory.
    3. Together with the missing cgroups: memory error, I got this warning: SystemVerification: docker version is greater than the most recently validated version. Docker version: 18.05.0-ce. Max validated version: 17.03.

Although you can work around these issues (via re-runs and reboots), it would be nice to fix them to improve the experience for new users.

1) Detailed description of the first issue:

Failed to reload sysctl: sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory.
After this error the playbook stops, because the key /proc/sys/net/bridge/bridge-nf-call-iptables didn't exist yet. When you run the playbook a second time, the error doesn't occur, because the key exists by then. Not a real error, and I think it can be fixed by adding ignoreerrors: yes to the sysctl task in the playbook. I will test it and then provide a pull request for this.
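
For reference, the change I have in mind looks roughly like this (a sketch, not the repository's exact task; the parameters are from the stock sysctl module):

- name: Pass bridged IPv4 traffic to iptables' chains
  sysctl:
    name: net.bridge.bridge-nf-call-iptables
    value: 1
    state: present
    reload: yes
    ignoreerrors: yes   # tolerate the key being absent until br_netfilter is loaded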

This issue was already reported in #13, closed without a solution.

2) Detailed description of the second issue:

SystemVerification: missing cgroups: memory.
From the Ansible log I can see that the task Enabling cgroup options at boot ran successfully on all the Raspberry Pis, as you can see below:

$ ls -l /boot/cmdline.txt
-rwxr-xr-x 1 root root 194 May 11 17:51 /boot/cmdline.txt
$ cat /boot/cmdline.txt
dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory

I can also see from the log that the reboot hadn't taken place, as shown below:

$ uptime
 21:38:16 up  12:14,  1 user,  load average: 0.03, 0.05, 0.07

This issue was already reported in #12, but closed without a solution.

I haven't figured out why the reboots were skipped. The /boot/cmdline.txt file was modified by the playbook, but that didn't trigger the reboot. When I modified the file afterwards and re-ran the playbook, it did trigger the reboot?! I have experienced this issue every time I start with fresh Raspbian images. Anyone?
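
One way to make the reboot depend on the cmdline.txt change (a sketch only, with hypothetical task bodies and register names, not the repository's actual tasks):

- name: Enabling cgroup options at boot   # hypothetical version of the repo's task
  lineinfile:
    path: /boot/cmdline.txt
    backrefs: yes
    regexp: '^((?!.*cgroup_enable=memory).*rootwait.*)$'
    line: '\1 cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory'
  register: cmdline

- name: Reboot
  shell: sleep 2 && shutdown -r now
  async: 1
  poll: 0
  when: cmdline is changed

- name: Wait for Reboot
  wait_for_connection:
    delay: 15
    timeout: 300
  when: cmdline is changed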

3) Detailed description of the warning:

The playbook installed Docker version 18.05.0-ce. Later, during the Initialize Master task (kubeadm init), the playbook warns that kubeadm has not validated Docker versions higher than 17.03. Not an error, but this could cause issues, as I have seen before in production environments.

Docker version

$ sudo docker version
Client:
 Version:      18.05.0-ce
 API version:  1.37
 Go version:   go1.9.5
 Git commit:   f150324
 Built:        Wed May  9 22:24:36 2018
 OS/Arch:      linux/arm
 Experimental: false
 Orchestrator: swarm

Server:
 Engine:
  Version:      18.05.0-ce
  API version:  1.37 (minimum version 1.12)
  Go version:   go1.9.5
  Git commit:   f150324
  Built:        Wed May  9 22:20:37 2018
  OS/Arch:      linux/arm
  Experimental: false

In the latest kubeadm documentation ( https://kubernetes.io/docs/setup/independent/install-kubeadm/ ) I found this comment about the Docker version:

Version v1.12 is recommended, but v1.11, v1.13 and 17.03 are known to work as well. 
Versions 17.06+ might work, but have not yet been tested and verified by the Kubernetes node team.

So the latest Docker can't be used any more; we should install Docker 17.03. Currently the Docker install script used in the playbook task Run Docker Install Script doesn't support installing a specific Docker version, see https://docs.docker.com/install/linux/docker-ce/debian/#install-using-the-convenience-script

The script does not provide options to specify which version of `Docker` to install, 
and installs the latest version that is released in the `edge` channel.

Another issue is that Docker 17.03 has become deprecated: https://docs.docker.com/release-notes/docker-ce/#17032-ce-2017-05-29
I changed the edge channel to stable channel within /etc/apt/sources.list.d/docker.list and I noticed that Docker 17.03 is no longer available:

$ sudo sed -i 's/edge/stable/' /etc/apt/sources.list.d/docker.list
$ sudo apt-get update
Hit:1 http://archive.raspberrypi.org/debian stretch InRelease
Hit:2 http://raspbian.raspberrypi.org/raspbian stretch InRelease          
Hit:4 https://download.docker.com/linux/raspbian stretch InRelease             
Hit:3 https://packages.cloud.google.com/apt kubernetes-xenial InRelease
Get:5 https://download.docker.com/linux/raspbian stretch/stable armhf Packages [2,507 B]
Fetched 2,507 B in 2s (847 B/s)       
Reading package lists... Done
$ sudo apt-cache madison docker-ce 
 docker-ce | 18.03.1~ce-0~raspbian | https://download.docker.com/linux/raspbian stretch/stable armhf Packages
 docker-ce | 18.03.0~ce-0~raspbian | https://download.docker.com/linux/raspbian stretch/stable armhf Packages
 docker-ce | 17.12.1~ce-0~raspbian | https://download.docker.com/linux/raspbian stretch/stable armhf Packages
 docker-ce | 17.12.0~ce-0~raspbian | https://download.docker.com/linux/raspbian stretch/stable armhf Packages
 docker-ce | 17.09.1~ce-0~raspbian | https://download.docker.com/linux/raspbian stretch/stable armhf Packages
 docker-ce | 17.09.0~ce-0~raspbian | https://download.docker.com/linux/raspbian stretch/stable armhf Packages

Unfortunately, unlike Debian and other Linux distros, Raspbian doesn't have its own Docker 17.03 package.

For many reasons it would be wise not to deviate from the Docker version requested by Kubernetes. Docker 18.05 is a so-called edge release. For stability and compatibility it may be wise to switch over to stable Docker releases, see https://docs.docker.com/release-notes/docker-ce

To install a stable Docker version you must first uninstall Docker as follows:

$ sudo apt-get remove --auto-remove docker
$ sudo rm -rf /var/lib/docker

Switch to stable:

$ sudo sed -i 's/edge/stable/' /etc/apt/sources.list.d/docker.list
$ sudo apt-get update

Choose a version:

$ sudo apt-cache madison docker-ce

Install your version:

$ sudo apt-get install docker-ce=<your version>

Would it be wise to provide fixed versions for Docker and Kubernetes?
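
If the playbook moved from the convenience script to the apt repositories, pinning could look roughly like this (a sketch; the version strings are only examples taken from the apt-cache output above, and docker_version / kube_version would be new, hypothetical variables):

- name: Install pinned Docker version
  apt:
    name: "docker-ce={{ docker_version }}"   # e.g. 18.03.1~ce-0~raspbian
    state: present
    force: yes   # allow downgrade from a newer installed version

- name: Install pinned Kubernetes packages
  apt:
    name:
      - "kubelet={{ kube_version }}"   # e.g. 1.10.2-00 (check apt-cache madison kubeadm)
      - "kubeadm={{ kube_version }}"
      - "kubectl={{ kube_version }}"
    state: present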

failed: sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory

I got the following failure on this task:

TASK [common : Pass bridged IPv4 traffic to iptables' chains] ******************
fatal: [black]: FAILED! => {"changed": false, "failed": true, "msg": "setting net.bridge.bridge-nf-call-iptables failed: sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory\n"}

To fix, I had to SSH to each of my nodes and run the following commands:

sudo modprobe br_netfilter
sudo sysctl -p

The Ansible playbook would then proceed.
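
An equivalent fix inside the playbook would be to load the module before the sysctl task, roughly like this (a sketch using the stock modprobe and lineinfile modules; the task names are mine, not the repository's):

- name: Load br_netfilter kernel module
  modprobe:
    name: br_netfilter
    state: present

- name: Load br_netfilter at boot
  lineinfile:
    path: /etc/modules-load.d/br_netfilter.conf
    line: br_netfilter
    create: yes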

Reboot 'wait_for' always times out during cleanup

OS running on Ansible host:

ubuntu 18.04

Ansible Version (ansible --version):

ansible 2.7.7
  config file = /home/tom/k8s/rak8s/ansible.cfg
  configured module search path = [u'/home/tom/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/dist-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.15rc1 (default, Nov 12 2018, 14:31:15) [GCC 7.3.0]

Uploaded logs showing errors (rak8s/.log/ansible.log):

*****************************************
2019-02-10 11:14:13,959 p=23225 u=tom |  changed: [master-1]
2019-02-10 11:14:13,997 p=23225 u=tom |  changed: [node-1]
2019-02-10 11:14:14,027 p=23225 u=tom |  TASK [cleanup : Wait for Reboot] ****************************************************************************************************
2019-02-10 11:16:15,793 p=23225 u=tom |  fatal: [master-1 -> localhost]: FAILED! => {"changed": false, "elapsed": 121, "msg": "Timeout when waiting for master-1:22"}
2019-02-10 11:16:15,796 p=23225 u=tom |  fatal: [node-1 -> localhost]: FAILED! => {"changed": false, "elapsed": 121, "msg": "Timeout when waiting for node-1:22"}
2019-02-10 11:16:15,799 p=23225 u=tom |  PLAY RECAP **************************************************************************************************************************
2019-02-10 11:16:15,800 p=23225 u=tom |  master-1                   : ok=6    changed=5    unreachable=0    failed=1   
2019-02-10 11:16:15,800 p=23225 u=tom |  node-1                     : ok=6    changed=5    unreachable=0    failed=1 

Raspberry Pi Hardware Version:

3B

Raspberry Pi OS & Version (cat /etc/os-release):

PRETTY_NAME="Raspbian GNU/Linux 9 (stretch)"
NAME="Raspbian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=raspbian
ID_LIKE=debian
HOME_URL="http://www.raspbian.org/"
SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"

Detailed description of the issue:

As part of the cleanup play, the wait_for task always times out. I resolved this issue by using wait_for_connection instead. Regardless of the underlying issue, I think this is a more succinct way to do it (see the sketch below).

If you're happy with this change I'll submit a PR.
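
For reference, the shape of the change would be roughly the following (a sketch; the current task's exact arguments aren't in this log, so the first task is reconstructed from the error message, and the timeout values are placeholders):

# current behaviour (delegated back to the control machine)
- name: Wait for Reboot
  wait_for:
    host: "{{ inventory_hostname }}"
    port: 22
    timeout: 120
  delegate_to: localhost

# proposed
- name: Wait for Reboot
  wait_for_connection:
    delay: 15
    timeout: 300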

This is not working out of the box on Raspbian Stretch Lite 2018-03-13

Brand new install, and this is what I get:

fatal: [pi205]: FAILED! => {
    "changed": true,
    "cmd": "kubeadm init --apiserver-advertise-address=192.168.1.205 --token=udy29x.ugyyk3tumg27atmr --ignore-preflight-errors=all",
    "delta": "0:03:18.056646",
    "end": "2018-03-21 03:21:37.487321",
    "invocation": {
        "module_args": {
            "_raw_params": "kubeadm init --apiserver-advertise-address=192.168.1.205 --token=udy29x.ugyyk3tumg27atmr --ignore-preflight-errors=all",
            "_uses_shell": true,
            "chdir": null,
            "creates": null,
            "executable": null,
            "removes": null,
            "stdin": null,
            "warn": true
        }
    },
    "msg": "non-zero return code",
    "rc": 1,
    "start": "2018-03-21 03:18:19.430675",
    "stderr": "\t[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.02.0-ce. Max validated version: 17.03\n\t[WARNING SystemVerification]: missing cgroups: memory\n\t[WARNING FileExisting-crictl]: crictl not found in system path\ncouldn't initialize a Kubernetes cluster",
    "stderr_lines": [
        "\t[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.02.0-ce. Max validated version: 17.03",
        "\t[WARNING SystemVerification]: missing cgroups: memory",
        "\t[WARNING FileExisting-crictl]: crictl not found in system path",
        "couldn't initialize a Kubernetes cluster"
    ],
    "stdout": "[init] Using Kubernetes version: v1.9.5\n[init] Using Authorization modes: [Node RBAC]\n[preflight] Running pre-flight checks.\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m4.9.80-v7+\u001b[0m\n\u001b[0;37mCONFIG_NAMESPACES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_NET_NS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_PID_NS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_IPC_NS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_UTS_NS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_CGROUPS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_CGROUP_CPUACCT\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_CGROUP_DEVICE\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_CGROUP_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_CGROUP_SCHED\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_CPUSETS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_MEMCG\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_INET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_EXT4_FS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_PROC_FS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_NETFILTER_XT_TARGET_REDIRECT\u001b[0m: \u001b[0;32menabled (as module)\u001b[0m\n\u001b[0;37mCONFIG_NETFILTER_XT_MATCH_COMMENT\u001b[0m: \u001b[0;32menabled (as module)\u001b[0m\n\u001b[0;37mCONFIG_OVERLAY_FS\u001b[0m: \u001b[0;32menabled (as module)\u001b[0m\n\u001b[0;37mCONFIG_AUFS_FS\u001b[0m: \u001b[0;33mnot set - Required for aufs.\u001b[0m\n\u001b[0;37mCONFIG_BLK_DEV_DM\u001b[0m: \u001b[0;32menabled (as module)\u001b[0m\n\u001b[0;37mDOCKER_VERSION\u001b[0m: \u001b[0;32m18.02.0-ce\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUACCT\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;31mmissing\u001b[0m\n[certificates] Generated ca certificate and key.\n[certificates] Generated apiserver certificate and key.\n[certificates] apiserver serving cert is signed for DNS names [raspberrypi kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.205]\n[certificates] Generated apiserver-kubelet-client certificate and key.\n[certificates] Generated sa key and public key.\n[certificates] Generated front-proxy-ca certificate and key.\n[certificates] Generated front-proxy-client certificate and key.\n[certificates] Valid certificates and keys now exist in \"/etc/kubernetes/pki\"\n[kubeconfig] Wrote KubeConfig file to disk: \"admin.conf\"\n[kubeconfig] Wrote KubeConfig file to disk: \"kubelet.conf\"\n[kubeconfig] Wrote KubeConfig file to disk: \"controller-manager.conf\"\n[kubeconfig] Wrote KubeConfig file to disk: \"scheduler.conf\"\n[controlplane] Wrote Static Pod manifest for component kube-apiserver to \"/etc/kubernetes/manifests/kube-apiserver.yaml\"\n[controlplane] Wrote Static Pod manifest for component kube-controller-manager to \"/etc/kubernetes/manifests/kube-controller-manager.yaml\"\n[controlplane] Wrote Static Pod manifest for component kube-scheduler to 
\"/etc/kubernetes/manifests/kube-scheduler.yaml\"\n[etcd] Wrote Static Pod manifest for a local etcd instance to \"/etc/kubernetes/manifests/etcd.yaml\"\n[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory \"/etc/kubernetes/manifests\".\n[init] This might take a minute or longer if the control plane images have to be pulled.\n[kubelet-check] It seems like the kubelet isn't running or healthy.\n[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.\n[kubelet-check] It seems like the kubelet isn't running or healthy.\n[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.\n[kubelet-check] It seems like the kubelet isn't running or healthy.\n[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.\n[kubelet-check] It seems like the kubelet isn't running or healthy.\n[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp [::1]:10255: getsockopt: connection refused.\n[kubelet-check] It seems like the kubelet isn't running or healthy.\n[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp [::1]:10255: getsockopt: connection refused.\n[kubelet-check] It seems like the kubelet isn't running or healthy.\n[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp [::1]:10255: getsockopt: connection refused.\n[kubelet-check] It seems like the kubelet isn't running or healthy.\n[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.\n[kubelet-check] It seems like the kubelet isn't running or healthy.\n[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp [::1]:10255: getsockopt: connection refused.\n[kubelet-check] It seems like the kubelet isn't running or healthy.\n[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.\n\nUnfortunately, an error has occurred:\n\ttimed out waiting for the condition\n\nThis error is likely caused by:\n\t- The kubelet is not running\n\t- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)\n\t- There is no internet connection, so the kubelet cannot pull the following control plane images:\n\t\t- gcr.io/google_containers/kube-apiserver-arm:v1.9.5\n\t\t- gcr.io/google_containers/kube-controller-manager-arm:v1.9.5\n\t\t- gcr.io/google_containers/kube-scheduler-arm:v1.9.5\n\nIf you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:\n\t- 'systemctl status kubelet'\n\t- 'journalctl -xeu kubelet'",
    "stdout_lines": [
        "[init] Using Kubernetes version: v1.9.5",
        "[init] Using Authorization modes: [Node RBAC]",
        "[preflight] Running pre-flight checks.",
        "[preflight] The system verification failed. Printing the output from the verification:",
        "\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m4.9.80-v7+\u001b[0m",
        "\u001b[0;37mCONFIG_NAMESPACES\u001b[0m: \u001b[0;32menabled\u001b[0m",
        "\u001b[0;37mCONFIG_NET_NS\u001b[0m: \u001b[0;32menabled\u001b[0m",
        "\u001b[0;37mCONFIG_PID_NS\u001b[0m: \u001b[0;32menabled\u001b[0m",
        "\u001b[0;37mCONFIG_IPC_NS\u001b[0m: \u001b[0;32menabled\u001b[0m",
        "\u001b[0;37mCONFIG_UTS_NS\u001b[0m: \u001b[0;32menabled\u001b[0m",
        "\u001b[0;37mCONFIG_CGROUPS\u001b[0m: \u001b[0;32menabled\u001b[0m",
        "\u001b[0;37mCONFIG_CGROUP_CPUACCT\u001b[0m: \u001b[0;32menabled\u001b[0m",
        "\u001b[0;37mCONFIG_CGROUP_DEVICE\u001b[0m: \u001b[0;32menabled\u001b[0m",
        "\u001b[0;37mCONFIG_CGROUP_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m",
        "\u001b[0;37mCONFIG_CGROUP_SCHED\u001b[0m: \u001b[0;32menabled\u001b[0m",
        "\u001b[0;37mCONFIG_CPUSETS\u001b[0m: \u001b[0;32menabled\u001b[0m",
        "\u001b[0;37mCONFIG_MEMCG\u001b[0m: \u001b[0;32menabled\u001b[0m",
        "\u001b[0;37mCONFIG_INET\u001b[0m: \u001b[0;32menabled\u001b[0m",
        "\u001b[0;37mCONFIG_EXT4_FS\u001b[0m: \u001b[0;32menabled\u001b[0m",
        "\u001b[0;37mCONFIG_PROC_FS\u001b[0m: \u001b[0;32menabled\u001b[0m",
        "\u001b[0;37mCONFIG_NETFILTER_XT_TARGET_REDIRECT\u001b[0m: \u001b[0;32menabled (as module)\u001b[0m",
        "\u001b[0;37mCONFIG_NETFILTER_XT_MATCH_COMMENT\u001b[0m: \u001b[0;32menabled (as module)\u001b[0m",
        "\u001b[0;37mCONFIG_OVERLAY_FS\u001b[0m: \u001b[0;32menabled (as module)\u001b[0m",
        "\u001b[0;37mCONFIG_AUFS_FS\u001b[0m: \u001b[0;33mnot set - Required for aufs.\u001b[0m",
        "\u001b[0;37mCONFIG_BLK_DEV_DM\u001b[0m: \u001b[0;32menabled (as module)\u001b[0m",
        "\u001b[0;37mDOCKER_VERSION\u001b[0m: \u001b[0;32m18.02.0-ce\u001b[0m",
        "\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m",
        "\u001b[0;37mCGROUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m",
        "\u001b[0;37mCGROUPS_CPUACCT\u001b[0m: \u001b[0;32menabled\u001b[0m",
        "\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m",
        "\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m",
        "\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m",
        "\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;31mmissing\u001b[0m",
        "[certificates] Generated ca certificate and key.",
        "[certificates] Generated apiserver certificate and key.",
        "[certificates] apiserver serving cert is signed for DNS names [raspberrypi kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.205]",
        "[certificates] Generated apiserver-kubelet-client certificate and key.",
        "[certificates] Generated sa key and public key.",
        "[certificates] Generated front-proxy-ca certificate and key.",
        "[certificates] Generated front-proxy-client certificate and key.",
        "[certificates] Valid certificates and keys now exist in \"/etc/kubernetes/pki\"",
        "[kubeconfig] Wrote KubeConfig file to disk: \"admin.conf\"",
        "[kubeconfig] Wrote KubeConfig file to disk: \"kubelet.conf\"",
        "[kubeconfig] Wrote KubeConfig file to disk: \"controller-manager.conf\"",
        "[kubeconfig] Wrote KubeConfig file to disk: \"scheduler.conf\"",
        "[controlplane] Wrote Static Pod manifest for component kube-apiserver to \"/etc/kubernetes/manifests/kube-apiserver.yaml\"",
        "[controlplane] Wrote Static Pod manifest for component kube-controller-manager to \"/etc/kubernetes/manifests/kube-controller-manager.yaml\"",
        "[controlplane] Wrote Static Pod manifest for component kube-scheduler to \"/etc/kubernetes/manifests/kube-scheduler.yaml\"",
        "[etcd] Wrote Static Pod manifest for a local etcd instance to \"/etc/kubernetes/manifests/etcd.yaml\"",
        "[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory \"/etc/kubernetes/manifests\".",
        "[init] This might take a minute or longer if the control plane images have to be pulled.",
        "[kubelet-check] It seems like the kubelet isn't running or healthy.",
        "[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.",
        "[kubelet-check] It seems like the kubelet isn't running or healthy.",
        "[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.",
        "[kubelet-check] It seems like the kubelet isn't running or healthy.",
        "[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.",
        "[kubelet-check] It seems like the kubelet isn't running or healthy.",
        "[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp [::1]:10255: getsockopt: connection refused.",
        "[kubelet-check] It seems like the kubelet isn't running or healthy.",
        "[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp [::1]:10255: getsockopt: connection refused.",
        "[kubelet-check] It seems like the kubelet isn't running or healthy.",
        "[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp [::1]:10255: getsockopt: connection refused.",
        "[kubelet-check] It seems like the kubelet isn't running or healthy.",
        "[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.",
        "[kubelet-check] It seems like the kubelet isn't running or healthy.",
        "[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp [::1]:10255: getsockopt: connection refused.",
        "[kubelet-check] It seems like the kubelet isn't running or healthy.",
        "[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.",
        "",
        "Unfortunately, an error has occurred:",
        "\ttimed out waiting for the condition",
        "",
        "This error is likely caused by:",
        "\t- The kubelet is not running",
        "\t- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)",
        "\t- There is no internet connection, so the kubelet cannot pull the following control plane images:",
        "\t\t- gcr.io/google_containers/kube-apiserver-arm:v1.9.5",
        "\t\t- gcr.io/google_containers/kube-controller-manager-arm:v1.9.5",
        "\t\t- gcr.io/google_containers/kube-scheduler-arm:v1.9.5",
        "",
        "If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:",
        "\t- 'systemctl status kubelet'",
        "\t- 'journalctl -xeu kubelet'"
    ]
}

Task [master : Install Weave (Networking)] failed: unable to read URL "https://git.io/weave-kube-1.6"

Task [master : Install Weave (Networking)] failed, see below:

$ ansible-playbook cluster.yml
<snippet>
TASK [master : Install Weave (Networking)]  *************************************
fatal: [mon]: FAILED! => {"changed": true, "cmd": "kubectl apply -f https://git.io/weave-kube-1.6", "delta": "0:00:07.230639", "end": "2018-04-24 22:12:32.275825", "failed": true, "rc": 1, "start": "2018-04-24 22:12:25.045186", "stderr": "error: unable to read URL \"https://git.io/weave-kube-1.6\", server reported 404 Not Found, status code=404", "stdout": "", "stdout_lines": [], "warnings": []}

The URL https://git.io/weave-kube-1.6 no longer seems to be reachable:

$ curl https://git.io/weave-kube-1.6
No url found for weave-kube-1.6

I replaced the line below in "roles/master/tasks/main.yml":

  shell: kubectl apply -f https://git.io/weave-kube-1.6

with:

  shell: kubectl apply -f https://cloud.weave.works/k8s/net?k8s-version=1.10

And now it works:

TASK [master : Install Weave (Networking)] *************************************
changed: [mon]

TASK [master : Poke kubelet] ***************************************************
changed: [mon]

TASK [dashboard : Install k8s Dashboard] ***************************************
changed: [mon]

TASK [dashboard : Configure Dashboard Access] **********************************
changed: [mon]

TASK [dashboard : Force Rebuild Dashboard Pods] ********************************
changed: [mon]

TASK [dashboard : Fetch kubeconfig file] ***************************************
changed: [mon]

PLAY RECAP *********************************************************************
mon                        : ok=22   changed=13   unreachable=0    failed=0   

$ kubectl get nodes -o wide
NAME      STATUS    ROLES     AGE       VERSION   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION   CONTAINER-RUNTIME
mon       Ready     master    10m       v1.10.1   <none>        Raspbian GNU/Linux 9 (stretch)   4.14.34-v7+      docker://18.4.0

$ kubectl describe nodes 
Name:               mon
Roles:              master
Labels:             beta.kubernetes.io/arch=arm
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=mon
                    node-role.kubernetes.io/master=
Annotations:        node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp:  Tue, 24 Apr 2018 23:22:40 +0200
Taints:             <none>
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Tue, 24 Apr 2018 23:37:24 +0200   Tue, 24 Apr 2018 23:22:39 +0200   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Tue, 24 Apr 2018 23:37:24 +0200   Tue, 24 Apr 2018 23:22:39 +0200   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Tue, 24 Apr 2018 23:37:24 +0200   Tue, 24 Apr 2018 23:22:39 +0200   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Tue, 24 Apr 2018 23:37:24 +0200   Tue, 24 Apr 2018 23:22:39 +0200   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Tue, 24 Apr 2018 23:37:24 +0200   Tue, 24 Apr 2018 23:30:28 +0200   KubeletReady                 kubelet is posting ready status. WARNING: CPU hardcapping unsupported
Addresses:
  InternalIP:  192.168.11.132
  Hostname:    mon
Capacity:
 cpu:                4
 ephemeral-storage:  14808816Ki
 memory:             1000184Ki
 pods:               110
Allocatable:
 cpu:                4
 ephemeral-storage:  13647804804
 memory:             897784Ki
 pods:               110
System Info:
 Machine ID:                 776c072bc9fb4af786604daa9d79aa52
 System UUID:                776c072bc9fb4af786604daa9d79aa52
 Boot ID:                    726b3ce1-7e97-4024-9eea-3b6179dc4aca
 Kernel Version:             4.14.34-v7+
 OS Image:                   Raspbian GNU/Linux 9 (stretch)
 Operating System:           linux
 Architecture:               arm
 Container Runtime Version:  docker://18.4.0
 Kubelet Version:            v1.10.1
 Kube-Proxy Version:         v1.10.1
ExternalID:                  mon
Non-terminated Pods:         (8 in total)
  Namespace                  Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----                                     ------------  ----------  ---------------  -------------
  kube-system                etcd-mon                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-apiserver-mon                       250m (6%)     0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-controller-manager-mon              200m (5%)     0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-dns-686d6fb9c-wf62k                 260m (6%)     0 (0%)      110Mi (12%)      170Mi (19%)
  kube-system                kube-proxy-tktrw                         0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-scheduler-mon                       100m (2%)     0 (0%)      0 (0%)           0 (0%)
  kube-system                kubernetes-dashboard-74959b9d6c-rfdr4    0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                weave-net-6xzwf                          20m (0%)      0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ------------  ----------  ---------------  -------------
  830m (20%)    0 (0%)      110Mi (12%)      170Mi (19%)
Events:
  Type    Reason                   Age                 From             Message
  ----    ------                   ----                ----             -------
  Normal  NodeHasSufficientMemory  16m (x6 over 16m)   kubelet, mon     Node mon status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    16m (x6 over 16m)   kubelet, mon     Node mon status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     16m (x5 over 16m)   kubelet, mon     Node mon status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  16m                 kubelet, mon     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientDisk    16m (x21 over 16m)  kubelet, mon     Node mon status is now: NodeHasSufficientDisk
  Normal  Starting                 10m                 kube-proxy, mon  Starting kube-proxy.
  Normal  Starting                 10m                 kubelet, mon     Starting kubelet.
  Normal  NodeHasSufficientDisk    10m                 kubelet, mon     Node mon status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  10m                 kubelet, mon     Node mon status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    10m                 kubelet, mon     Node mon status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     10m                 kubelet, mon     Node mon status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  10m                 kubelet, mon     Updated Node Allocatable limit across pods
  Normal  NodeReady                7m                  kubelet, mon     Node mon status is now: NodeReady
$ kubectl get service
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   16m

$ kubectl -n kube-system logs weave-net-6xzwf -c weave
INFO: 2018/04/24 21:29:38.740674 Command line options: map[db-prefix:/weavedb/weave-net ipalloc-init:consensus=1 metrics-addr:0.0.0.0:6782 conn-limit:100 datapath:datapath ipalloc-range:10.32.0.0/12 name:12:dd:ea:1e:56:87 no-dns:true docker-api: http-addr:127.0.0.1:6784 expect-npc:true host-root:/host nickname:mon port:6783]
INFO: 2018/04/24 21:29:38.762990 weave  2.3.0
INFO: 2018/04/24 21:29:39.424159 Bridge type is bridged_fastdp
INFO: 2018/04/24 21:29:39.424226 Communication between peers is unencrypted.
INFO: 2018/04/24 21:29:39.912210 Our name is 12:dd:ea:1e:56:87(mon)
INFO: 2018/04/24 21:29:39.912969 Launch detected - using supplied peer list: [192.168.11.132]
INFO: 2018/04/24 21:29:39.913381 Checking for pre-existing addresses on weave bridge
INFO: 2018/04/24 21:29:39.934298 [allocator 12:dd:ea:1e:56:87] No valid persisted data
INFO: 2018/04/24 21:29:43.191558 [allocator 12:dd:ea:1e:56:87] Initialising via deferred consensus
INFO: 2018/04/24 21:29:43.192246 Sniffing traffic on datapath (via ODP)
INFO: 2018/04/24 21:29:43.194284 ->[192.168.11.132:6783] attempting connection
INFO: 2018/04/24 21:29:43.195843 ->[192.168.11.132:38015] connection accepted
INFO: 2018/04/24 21:29:43.199415 ->[192.168.11.132:38015|12:dd:ea:1e:56:87(mon)]: connection shutting down due to error: cannot connect to ourself
INFO: 2018/04/24 21:29:43.199660 ->[192.168.11.132:6783|12:dd:ea:1e:56:87(mon)]: connection shutting down due to error: cannot connect to ourself
INFO: 2018/04/24 21:29:44.203557 Listening for HTTP control messages on 127.0.0.1:6784
INFO: 2018/04/24 21:29:44.205364 Listening for metrics requests on 0.0.0.0:6782
INFO: 2018/04/24 21:29:49.954529 Error checking version: Get https://checkpoint-api.weave.works/v1/check/weave-net?arch=arm&flag_docker-version=none&flag_kernel-version=4.14.34-v7%2B&os=linux&signature=&version=2.3.0: net/http: TLS handshake timeout
INFO: 2018/04/24 21:30:29.399208 [kube-peers] Added myself to peer list &{[{12:dd:ea:1e:56:87 mon}]}
DEBU: 2018/04/24 21:30:29.469175 [kube-peers] Nodes that have disappeared: map[]
10.32.0.1

$ kubectl -n kube-system logs weave-net-6xzwf -c weave-npc
INFO: 2018/04/24 21:31:40.790476 Starting Weaveworks NPC 2.3.0; node name "mon"
INFO: 2018/04/24 21:31:40.791296 Serving /metrics on :6781
Tue Apr 24 21:31:40 2018 <5> ulogd.c:843 building new pluginstance stack: 'log1:NFLOG,base1:BASE,pcap1:PCAP'
DEBU: 2018/04/24 21:31:40.938868 Got list of ipsets: []
INFO: 2018/04/24 21:31:44.607979 EVENT AddNamespace {"metadata":{"creationTimestamp":"2018-04-24T21:22:49Z","name":"default","resourceVersion":"31","selfLink":"/api/v1/namespaces/default","uid":"a383dae6-4805-11e8-9f57-b827ebcfd0f3"},"spec":{"finalizers":["kubernetes"]},"status":{"phase":"Active"}}
INFO: 2018/04/24 21:31:44.640377 creating ipset: &npc.selectorSpec{key:"", selector:labels.internalSelector{}, dst:false, ipsetType:"hash:ip", ipsetName:"weave-k?Z;25^M}|1s7P3|H9i;*;MhG", nsName:"default"}
DEBU: 2018/04/24 21:31:44.658345 ensuring rule for DefaultAllow in namespace: default, set weave-E.1.0W^NGSp]0_t5WwH/]gX@L
INFO: 2018/04/24 21:31:44.675329 EVENT AddNamespace {"metadata":{"creationTimestamp":"2018-04-24T21:22:50Z","name":"kube-public","resourceVersion":"44","selfLink":"/api/v1/namespaces/kube-public","uid":"a46509f6-4805-11e8-9f57-b827ebcfd0f3"},"spec":{"finalizers":["kubernetes"]},"status":{"phase":"Active"}}
INFO: 2018/04/24 21:31:44.683963 creating ipset: &npc.selectorSpec{key:"", selector:labels.internalSelector{}, dst:false, ipsetType:"hash:ip", ipsetName:"weave-4vtqMI+kx/2]jD%_c0S%thO%V", nsName:"kube-public"}
DEBU: 2018/04/24 21:31:44.692157 ensuring rule for DefaultAllow in namespace: kube-public, set weave-0EHD/vdN#O4]V?o4Tx7kS;APH
INFO: 2018/04/24 21:31:44.702874 EVENT AddNamespace {"metadata":{"creationTimestamp":"2018-04-24T21:22:40Z","name":"kube-system","resourceVersion":"8","selfLink":"/api/v1/namespaces/kube-system","uid":"9ea4d688-4805-11e8-9f57-b827ebcfd0f3"},"spec":{"finalizers":["kubernetes"]},"status":{"phase":"Active"}}
INFO: 2018/04/24 21:31:44.711635 creating ipset: &npc.selectorSpec{key:"", selector:labels.internalSelector{}, dst:false, ipsetType:"hash:ip", ipsetName:"weave-iuZcey(5DeXbzgRFs8Szo]+@p", nsName:"kube-system"}
DEBU: 2018/04/24 21:31:44.718027 ensuring rule for DefaultAllow in namespace: kube-system, set weave-?b%zl9GIe0AET1(QI^7NWe*fO
DEBU: 2018/04/24 21:31:47.006091 EVENT AddPod {"metadata":{"annotations":{"kubernetes.io/config.hash":"e06713970c20f3b479d1a8064b93c623","kubernetes.io/config.mirror":"e06713970c20f3b479d1a8064b93c623","kubernetes.io/config.seen":"2018-04-24T23:21:04.428677669+02:00","kubernetes.io/config.source":"file","scheduler.alpha.kubernetes.io/critical-pod":""},"creationTimestamp":"2018-04-24T21:26:53Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"name":"kube-scheduler-mon","namespace":"kube-system","resourceVersion":"462","selfLink":"/api/v1/namespaces/kube-system/pods/kube-scheduler-mon","uid":"3525003a-4806-11e8-9447-b827ebcfd0f3"},"spec":{"containers":[{"image":"k8s.gcr.io/kube-scheduler-arm:v1.10.1","imagePullPolicy":"IfNotPresent","name":"kube-scheduler","terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","hostNetwork":true,"nodeName":"mon","restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"conditions":[{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:21:08Z","status":"True","type":"Initialized"},{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:21:20Z","status":"True","type":"Ready"},{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:21:08Z","status":"True","type":"PodScheduled"}],"hostIP":"192.168.11.132","phase":"Running","podIP":"192.168.11.132","qosClass":"Burstable","startTime":"2018-04-24T21:21:08Z"}}
DEBU: 2018/04/24 21:31:47.008183 EVENT AddPod {"metadata":{"creationTimestamp":"2018-04-24T21:26:57Z","generateName":"weave-net-","labels":{"controller-revision-hash":"2689456918","name":"weave-net","pod-template-generation":"1"},"name":"weave-net-6xzwf","namespace":"kube-system","resourceVersion":"684","selfLink":"/api/v1/namespaces/kube-system/pods/weave-net-6xzwf","uid":"3763c508-4806-11e8-9447-b827ebcfd0f3"},"spec":{"containers":[{"image":"weaveworks/weave-kube:2.3.0","imagePullPolicy":"IfNotPresent","name":"weave","terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"},{"image":"weaveworks/weave-npc:2.3.0","imagePullPolicy":"IfNotPresent","name":"weave-npc","terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","hostNetwork":true,"hostPID":true,"nodeName":"mon","restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{"seLinuxOptions":{}},"serviceAccount":"weave-net","serviceAccountName":"weave-net","terminationGracePeriodSeconds":30},"status":{"conditions":[{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:26:57Z","status":"True","type":"Initialized"},{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:31:43Z","status":"True","type":"Ready"},{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:26:57Z","status":"True","type":"PodScheduled"}],"hostIP":"192.168.11.132","phase":"Running","podIP":"192.168.11.132","qosClass":"Burstable","startTime":"2018-04-24T21:26:57Z"}}
DEBU: 2018/04/24 21:31:47.009417 EVENT AddPod {"metadata":{"creationTimestamp":"2018-04-24T21:28:14Z","generateName":"kubernetes-dashboard-74959b9d6c-","labels":{"k8s-app":"kubernetes-dashboard","pod-template-hash":"3051565827"},"name":"kubernetes-dashboard-74959b9d6c-rfdr4","namespace":"kube-system","resourceVersion":"651","selfLink":"/api/v1/namespaces/kube-system/pods/kubernetes-dashboard-74959b9d6c-rfdr4","uid":"65703604-4806-11e8-9447-b827ebcfd0f3"},"spec":{"containers":[{"image":"k8s.gcr.io/kubernetes-dashboard-arm:v1.8.3","imagePullPolicy":"IfNotPresent","name":"kubernetes-dashboard","ports":[{"containerPort":8443,"protocol":"TCP"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","nodeName":"mon","restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"serviceAccount":"kubernetes-dashboard","serviceAccountName":"kubernetes-dashboard","terminationGracePeriodSeconds":30},"status":{"conditions":[{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:30:29Z","status":"True","type":"Initialized"},{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:30:29Z","message":"containers with unready status: [kubernetes-dashboard]","reason":"ContainersNotReady","status":"False","type":"Ready"},{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:30:29Z","status":"True","type":"PodScheduled"}],"hostIP":"192.168.11.132","phase":"Pending","qosClass":"BestEffort","startTime":"2018-04-24T21:30:29Z"}}
DEBU: 2018/04/24 21:31:47.010765 EVENT AddPod {"metadata":{"annotations":{"kubernetes.io/config.hash":"d3cf90b755d424cec3929ca67735fdcc","kubernetes.io/config.mirror":"d3cf90b755d424cec3929ca67735fdcc","kubernetes.io/config.seen":"2018-04-24T23:21:04.414713787+02:00","kubernetes.io/config.source":"file","scheduler.alpha.kubernetes.io/critical-pod":""},"creationTimestamp":"2018-04-24T21:22:50Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"name":"kube-apiserver-mon","namespace":"kube-system","resourceVersion":"49","selfLink":"/api/v1/namespaces/kube-system/pods/kube-apiserver-mon","uid":"a428e9d0-4805-11e8-9f57-b827ebcfd0f3"},"spec":{"containers":[{"image":"k8s.gcr.io/kube-apiserver-arm:v1.10.1","imagePullPolicy":"IfNotPresent","name":"kube-apiserver","terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","hostNetwork":true,"nodeName":"mon","restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"conditions":[{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:21:05Z","status":"True","type":"Initialized"},{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:21:20Z","status":"True","type":"Ready"},{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:21:05Z","status":"True","type":"PodScheduled"}],"hostIP":"192.168.11.132","phase":"Running","podIP":"192.168.11.132","qosClass":"Burstable","startTime":"2018-04-24T21:21:05Z"}}
DEBU: 2018/04/24 21:31:47.012032 EVENT AddPod {"metadata":{"annotations":{"kubernetes.io/config.hash":"8cbcd30d66608120fe0e799a58790368","kubernetes.io/config.mirror":"8cbcd30d66608120fe0e799a58790368","kubernetes.io/config.seen":"2018-04-24T23:21:04.419734697+02:00","kubernetes.io/config.source":"file","scheduler.alpha.kubernetes.io/critical-pod":""},"creationTimestamp":"2018-04-24T21:25:41Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"name":"kube-controller-manager-mon","namespace":"kube-system","resourceVersion":"221","selfLink":"/api/v1/namespaces/kube-system/pods/kube-controller-manager-mon","uid":"0a76faa6-4806-11e8-9447-b827ebcfd0f3"},"spec":{"containers":[{"image":"k8s.gcr.io/kube-controller-manager-arm:v1.10.1","imagePullPolicy":"IfNotPresent","name":"kube-controller-manager","terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","hostNetwork":true,"nodeName":"mon","restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"conditions":[{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:21:06Z","status":"True","type":"Initialized"},{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:21:22Z","status":"True","type":"Ready"},{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:21:06Z","status":"True","type":"PodScheduled"}],"hostIP":"192.168.11.132","phase":"Running","podIP":"192.168.11.132","qosClass":"Burstable","startTime":"2018-04-24T21:21:06Z"}}
DEBU: 2018/04/24 21:31:47.013248 EVENT AddPod {"metadata":{"creationTimestamp":"2018-04-24T21:25:59Z","generateName":"kube-proxy-","labels":{"controller-revision-hash":"2851376311","k8s-app":"kube-proxy","pod-template-generation":"1"},"name":"kube-proxy-tktrw","namespace":"kube-system","resourceVersion":"415","selfLink":"/api/v1/namespaces/kube-system/pods/kube-proxy-tktrw","uid":"14f14a1a-4806-11e8-9447-b827ebcfd0f3"},"spec":{"containers":[{"image":"k8s.gcr.io/kube-proxy-arm:v1.10.1","imagePullPolicy":"IfNotPresent","name":"kube-proxy","terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","hostNetwork":true,"nodeName":"mon","restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"serviceAccount":"kube-proxy","serviceAccountName":"kube-proxy","terminationGracePeriodSeconds":30},"status":{"conditions":[{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:25:59Z","status":"True","type":"Initialized"},{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:26:09Z","status":"True","type":"Ready"},{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:25:59Z","status":"True","type":"PodScheduled"}],"hostIP":"192.168.11.132","phase":"Running","podIP":"192.168.11.132","qosClass":"BestEffort","startTime":"2018-04-24T21:25:59Z"}}
DEBU: 2018/04/24 21:31:47.015366 EVENT AddPod {"metadata":{"creationTimestamp":"2018-04-24T21:25:59Z","generateName":"kube-dns-686d6fb9c-","labels":{"k8s-app":"kube-dns","pod-template-hash":"242829657"},"name":"kube-dns-686d6fb9c-wf62k","namespace":"kube-system","resourceVersion":"643","selfLink":"/api/v1/namespaces/kube-system/pods/kube-dns-686d6fb9c-wf62k","uid":"150c1759-4806-11e8-9447-b827ebcfd0f3"},"spec":{"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"beta.kubernetes.io/arch","operator":"In","values":["arm"]}]}]}}},"containers":[{"image":"k8s.gcr.io/k8s-dns-kube-dns-arm:1.14.8","imagePullPolicy":"IfNotPresent","name":"kubedns","ports":[{"containerPort":10053,"name":"dns-local","protocol":"UDP"},{"containerPort":10053,"name":"dns-tcp-local","protocol":"TCP"},{"containerPort":10055,"name":"metrics","protocol":"TCP"}],"readinessProbe":{"failureThreshold":3,"httpGet":{"path":"/readiness","port":8081,"scheme":"HTTP"},"initialDelaySeconds":3,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":5},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"},{"image":"k8s.gcr.io/k8s-dns-dnsmasq-nanny-arm:1.14.8","imagePullPolicy":"IfNotPresent","name":"dnsmasq","ports":[{"containerPort":53,"name":"dns","protocol":"UDP"},{"containerPort":53,"name":"dns-tcp","protocol":"TCP"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"},{"image":"k8s.gcr.io/k8s-dns-sidecar-arm:1.14.8","imagePullPolicy":"IfNotPresent","name":"sidecar","ports":[{"containerPort":10054,"name":"metrics","protocol":"TCP"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"Default","nodeName":"mon","restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"serviceAccount":"kube-dns","serviceAccountName":"kube-dns","terminationGracePeriodSeconds":30},"status":{"conditions":[{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:30:29Z","status":"True","type":"Initialized"},{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:30:29Z","message":"containers with unready status: [kubedns dnsmasq sidecar]","reason":"ContainersNotReady","status":"False","type":"Ready"},{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:30:29Z","status":"True","type":"PodScheduled"}],"hostIP":"192.168.11.132","phase":"Pending","qosClass":"Burstable","startTime":"2018-04-24T21:30:29Z"}}
DEBU: 2018/04/24 21:31:47.016569 EVENT AddPod {"metadata":{"annotations":{"kubernetes.io/config.hash":"af78bd534dc721d81c47bb724654bb56","kubernetes.io/config.mirror":"af78bd534dc721d81c47bb724654bb56","kubernetes.io/config.seen":"2018-04-24T23:21:04.436347366+02:00","kubernetes.io/config.source":"file","scheduler.alpha.kubernetes.io/critical-pod":""},"creationTimestamp":"2018-04-24T21:26:51Z","labels":{"component":"etcd","tier":"control-plane"},"name":"etcd-mon","namespace":"kube-system","resourceVersion":"424","selfLink":"/api/v1/namespaces/kube-system/pods/etcd-mon","uid":"33c787ed-4806-11e8-9447-b827ebcfd0f3"},"spec":{"containers":[{"image":"k8s.gcr.io/etcd-arm:3.1.12","imagePullPolicy":"IfNotPresent","name":"etcd","terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","hostNetwork":true,"nodeName":"mon","restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"conditions":[{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:21:08Z","status":"True","type":"Initialized"},{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:21:21Z","status":"True","type":"Ready"},{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:21:08Z","status":"True","type":"PodScheduled"}],"hostIP":"192.168.11.132","phase":"Running","podIP":"192.168.11.132","qosClass":"BestEffort","startTime":"2018-04-24T21:21:08Z"}}
DEBU: 2018/04/24 21:34:51.257841 EVENT UpdatePod {"metadata":{"annotations":{"kubernetes.io/config.hash":"8cbcd30d66608120fe0e799a58790368","kubernetes.io/config.mirror":"8cbcd30d66608120fe0e799a58790368","kubernetes.io/config.seen":"2018-04-24T23:21:04.419734697+02:00","kubernetes.io/config.source":"file","scheduler.alpha.kubernetes.io/critical-pod":""},"creationTimestamp":"2018-04-24T21:25:41Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"name":"kube-controller-manager-mon","namespace":"kube-system","resourceVersion":"221","selfLink":"/api/v1/namespaces/kube-system/pods/kube-controller-manager-mon","uid":"0a76faa6-4806-11e8-9447-b827ebcfd0f3"},"spec":{"containers":[{"image":"k8s.gcr.io/kube-controller-manager-arm:v1.10.1","imagePullPolicy":"IfNotPresent","name":"kube-controller-manager","terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","hostNetwork":true,"nodeName":"mon","restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"conditions":[{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:21:06Z","status":"True","type":"Initialized"},{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:21:22Z","status":"True","type":"Ready"},{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:21:06Z","status":"True","type":"PodScheduled"}],"hostIP":"192.168.11.132","phase":"Running","podIP":"192.168.11.132","qosClass":"Burstable","startTime":"2018-04-24T21:21:06Z"}} {"metadata":{"annotations":{"kubernetes.io/config.hash":"8cbcd30d66608120fe0e799a58790368","kubernetes.io/config.mirror":"8cbcd30d66608120fe0e799a58790368","kubernetes.io/config.seen":"2018-04-24T23:21:04.419734697+02:00","kubernetes.io/config.source":"file","scheduler.alpha.kubernetes.io/critical-pod":""},"creationTimestamp":"2018-04-24T21:25:41Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"name":"kube-controller-manager-mon","namespace":"kube-system","resourceVersion":"751","selfLink":"/api/v1/namespaces/kube-system/pods/kube-controller-manager-mon","uid":"0a76faa6-4806-11e8-9447-b827ebcfd0f3"},"spec":{"containers":[{"image":"k8s.gcr.io/kube-controller-manager-arm:v1.10.1","imagePullPolicy":"IfNotPresent","name":"kube-controller-manager","terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","hostNetwork":true,"nodeName":"mon","restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"conditions":[{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:21:06Z","status":"True","type":"Initialized"},{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:34:08Z","message":"containers with unready status: [kube-controller-manager]","reason":"ContainersNotReady","status":"False","type":"Ready"},{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:21:06Z","status":"True","type":"PodScheduled"}],"hostIP":"192.168.11.132","phase":"Running","podIP":"192.168.11.132","qosClass":"Burstable","startTime":"2018-04-24T21:21:06Z"}}
DEBU: 2018/04/24 21:35:01.473382 EVENT UpdatePod {"metadata":{"annotations":{"kubernetes.io/config.hash":"8cbcd30d66608120fe0e799a58790368","kubernetes.io/config.mirror":"8cbcd30d66608120fe0e799a58790368","kubernetes.io/config.seen":"2018-04-24T23:21:04.419734697+02:00","kubernetes.io/config.source":"file","scheduler.alpha.kubernetes.io/critical-pod":""},"creationTimestamp":"2018-04-24T21:25:41Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"name":"kube-controller-manager-mon","namespace":"kube-system","resourceVersion":"751","selfLink":"/api/v1/namespaces/kube-system/pods/kube-controller-manager-mon","uid":"0a76faa6-4806-11e8-9447-b827ebcfd0f3"},"spec":{"containers":[{"image":"k8s.gcr.io/kube-controller-manager-arm:v1.10.1","imagePullPolicy":"IfNotPresent","name":"kube-controller-manager","terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","hostNetwork":true,"nodeName":"mon","restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"conditions":[{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:21:06Z","status":"True","type":"Initialized"},{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:34:08Z","message":"containers with unready status: [kube-controller-manager]","reason":"ContainersNotReady","status":"False","type":"Ready"},{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:21:06Z","status":"True","type":"PodScheduled"}],"hostIP":"192.168.11.132","phase":"Running","podIP":"192.168.11.132","qosClass":"Burstable","startTime":"2018-04-24T21:21:06Z"}} {"metadata":{"annotations":{"kubernetes.io/config.hash":"8cbcd30d66608120fe0e799a58790368","kubernetes.io/config.mirror":"8cbcd30d66608120fe0e799a58790368","kubernetes.io/config.seen":"2018-04-24T23:21:04.419734697+02:00","kubernetes.io/config.source":"file","scheduler.alpha.kubernetes.io/critical-pod":""},"creationTimestamp":"2018-04-24T21:25:41Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"name":"kube-controller-manager-mon","namespace":"kube-system","resourceVersion":"765","selfLink":"/api/v1/namespaces/kube-system/pods/kube-controller-manager-mon","uid":"0a76faa6-4806-11e8-9447-b827ebcfd0f3"},"spec":{"containers":[{"image":"k8s.gcr.io/kube-controller-manager-arm:v1.10.1","imagePullPolicy":"IfNotPresent","name":"kube-controller-manager","terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","hostNetwork":true,"nodeName":"mon","restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"conditions":[{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:21:06Z","status":"True","type":"Initialized"},{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:35:01Z","status":"True","type":"Ready"},{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:21:06Z","status":"True","type":"PodScheduled"}],"hostIP":"192.168.11.132","phase":"Running","podIP":"192.168.11.132","qosClass":"Burstable","startTime":"2018-04-24T21:21:06Z"}}
DEBU: 2018/04/24 21:35:01.578120 EVENT UpdatePod {"metadata":{"creationTimestamp":"2018-04-24T21:28:14Z","generateName":"kubernetes-dashboard-74959b9d6c-","labels":{"k8s-app":"kubernetes-dashboard","pod-template-hash":"3051565827"},"name":"kubernetes-dashboard-74959b9d6c-rfdr4","namespace":"kube-system","resourceVersion":"651","selfLink":"/api/v1/namespaces/kube-system/pods/kubernetes-dashboard-74959b9d6c-rfdr4","uid":"65703604-4806-11e8-9447-b827ebcfd0f3"},"spec":{"containers":[{"image":"k8s.gcr.io/kubernetes-dashboard-arm:v1.8.3","imagePullPolicy":"IfNotPresent","name":"kubernetes-dashboard","ports":[{"containerPort":8443,"protocol":"TCP"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","nodeName":"mon","restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"serviceAccount":"kubernetes-dashboard","serviceAccountName":"kubernetes-dashboard","terminationGracePeriodSeconds":30},"status":{"conditions":[{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:30:29Z","status":"True","type":"Initialized"},{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:30:29Z","message":"containers with unready status: [kubernetes-dashboard]","reason":"ContainersNotReady","status":"False","type":"Ready"},{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:30:29Z","status":"True","type":"PodScheduled"}],"hostIP":"192.168.11.132","phase":"Pending","qosClass":"BestEffort","startTime":"2018-04-24T21:30:29Z"}} {"metadata":{"creationTimestamp":"2018-04-24T21:28:14Z","generateName":"kubernetes-dashboard-74959b9d6c-","labels":{"k8s-app":"kubernetes-dashboard","pod-template-hash":"3051565827"},"name":"kubernetes-dashboard-74959b9d6c-rfdr4","namespace":"kube-system","resourceVersion":"766","selfLink":"/api/v1/namespaces/kube-system/pods/kubernetes-dashboard-74959b9d6c-rfdr4","uid":"65703604-4806-11e8-9447-b827ebcfd0f3"},"spec":{"containers":[{"image":"k8s.gcr.io/kubernetes-dashboard-arm:v1.8.3","imagePullPolicy":"IfNotPresent","name":"kubernetes-dashboard","ports":[{"containerPort":8443,"protocol":"TCP"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","nodeName":"mon","restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"serviceAccount":"kubernetes-dashboard","serviceAccountName":"kubernetes-dashboard","terminationGracePeriodSeconds":30},"status":{"conditions":[{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:30:29Z","status":"True","type":"Initialized"},{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:35:01Z","status":"True","type":"Ready"},{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:30:29Z","status":"True","type":"PodScheduled"}],"hostIP":"192.168.11.132","phase":"Running","podIP":"10.32.0.2","qosClass":"BestEffort","startTime":"2018-04-24T21:30:29Z"}}
INFO: 2018/04/24 21:35:01.578531 adding entry 10.32.0.2 to weave-local-pods of 65703604-4806-11e8-9447-b827ebcfd0f3
INFO: 2018/04/24 21:35:01.578682 added entry 10.32.0.2 to weave-local-pods of 65703604-4806-11e8-9447-b827ebcfd0f3
INFO: 2018/04/24 21:35:01.632056 adding entry 10.32.0.2 to weave-iuZcey(5DeXbzgRFs8Szo]+@p of 65703604-4806-11e8-9447-b827ebcfd0f3
INFO: 2018/04/24 21:35:01.632391 added entry 10.32.0.2 to weave-iuZcey(5DeXbzgRFs8Szo]+@p of 65703604-4806-11e8-9447-b827ebcfd0f3
INFO: 2018/04/24 21:35:01.638410 adding entry 10.32.0.2 to weave-?b%zl9GIe0AET1(QI^7NWe*fO of 65703604-4806-11e8-9447-b827ebcfd0f3
INFO: 2018/04/24 21:35:01.638752 added entry 10.32.0.2 to weave-?b%zl9GIe0AET1(QI^7NWe*fO of 65703604-4806-11e8-9447-b827ebcfd0f3
DEBU: 2018/04/24 21:36:04.164586 EVENT UpdatePod {"metadata":{"creationTimestamp":"2018-04-24T21:28:14Z","generateName":"kubernetes-dashboard-74959b9d6c-","labels":{"k8s-app":"kubernetes-dashboard","pod-template-hash":"3051565827"},"name":"kubernetes-dashboard-74959b9d6c-rfdr4","namespace":"kube-system","resourceVersion":"766","selfLink":"/api/v1/namespaces/kube-system/pods/kubernetes-dashboard-74959b9d6c-rfdr4","uid":"65703604-4806-11e8-9447-b827ebcfd0f3"},"spec":{"containers":[{"image":"k8s.gcr.io/kubernetes-dashboard-arm:v1.8.3","imagePullPolicy":"IfNotPresent","name":"kubernetes-dashboard","ports":[{"containerPort":8443,"protocol":"TCP"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","nodeName":"mon","restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"serviceAccount":"kubernetes-dashboard","serviceAccountName":"kubernetes-dashboard","terminationGracePeriodSeconds":30},"status":{"conditions":[{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:30:29Z","status":"True","type":"Initialized"},{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:35:01Z","status":"True","type":"Ready"},{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:30:29Z","status":"True","type":"PodScheduled"}],"hostIP":"192.168.11.132","phase":"Running","podIP":"10.32.0.2","qosClass":"BestEffort","startTime":"2018-04-24T21:30:29Z"}} {"metadata":{"creationTimestamp":"2018-04-24T21:28:14Z","generateName":"kubernetes-dashboard-74959b9d6c-","labels":{"k8s-app":"kubernetes-dashboard","pod-template-hash":"3051565827"},"name":"kubernetes-dashboard-74959b9d6c-rfdr4","namespace":"kube-system","resourceVersion":"803","selfLink":"/api/v1/namespaces/kube-system/pods/kubernetes-dashboard-74959b9d6c-rfdr4","uid":"65703604-4806-11e8-9447-b827ebcfd0f3"},"spec":{"containers":[{"image":"k8s.gcr.io/kubernetes-dashboard-arm:v1.8.3","imagePullPolicy":"IfNotPresent","name":"kubernetes-dashboard","ports":[{"containerPort":8443,"protocol":"TCP"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","nodeName":"mon","restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"serviceAccount":"kubernetes-dashboard","serviceAccountName":"kubernetes-dashboard","terminationGracePeriodSeconds":30},"status":{"conditions":[{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:30:29Z","status":"True","type":"Initialized"},{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:35:01Z","status":"True","type":"Ready"},{"lastProbeTime":null,"lastTransitionTime":"2018-04-24T21:30:29Z","status":"True","type":"PodScheduled"}],"hostIP":"192.168.11.132","phase":"Running","podIP":"10.32.0.2","qosClass":"BestEffort","startTime":"2018-04-24T21:30:29Z"}}

Is this the way to fix this?

Patch K8S Vulnerability

OS running on Ansible host:

NAME="Ubuntu"
VERSION="16.04.5 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.5 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial

Ansible Version (ansible --version):

ansible 2.5.5
  config file = /media/psf/Home/git/personal/raspk8s/ansible.cfg
  configured module search path = [u'/home/jim/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 2.7.12 (default, Nov 12 2018, 14:36:49) [GCC 5.4.0 20160609]

Uploaded logs showing errors(rak8s/.log/ansible.log)

Raspberry Pi Hardware Version:

ANY

Raspberry Pi OS & Version (cat /etc/os-release):

Any

Detailed description of the issue:

K8S has a vulnerability in the version that is pinned in group_vars/all.yml (a sketch of bumping the pin follows the list of patched releases below):
https://github.com/rak8s/rak8s/blob/ecbfe7ad387873f26e9a5d7f0d51c5f4e9e3d7e9/group_vars/all.yml#L15

kubernetes/kubernetes#71411 - K8S Issue

K8S has patched this vulnerability in:

v1.10.11
v1.11.5
v1.12.3
v1.13.0+
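
One way to address this, sketched below, is to bump the pinned version in group_vars/all.yml to a patched release and re-run the playbook. The exact variable names on that line differ between rak8s versions, so treat the names here as assumptions and check your checkout.

```yaml
# group_vars/all.yml (sketch) -- variable names are assumptions, verify against your checkout.
# Pick the patched release matching your minor version, e.g. v1.10.11 for a 1.10.x cluster.
kubernetes_version: "v1.10.11"
kubernetes_package_version: "1.10.11-00"   # apt package revision is an assumption
```

Bumping the pin only affects what gets installed on fresh runs; an already-running cluster additionally needs kubeadm upgrade.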

Unable to Initialize master due to docker version

OS running on Ansible host:

Mac OS 10.14.1 (Mojave)

Ansible Version (ansible --version):

ansible 2.7.2 (homebrew)

Uploaded logs showing errors(rak8s/.log/ansible.log)

Raspberry Pi Hardware Version:

3B, 3B+

Raspberry Pi OS & Version (cat /etc/os-release):

PRETTY_NAME="Raspbian GNU/Linux 9 (stretch)"
NAME="Raspbian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=raspbian
ID_LIKE=debian
HOME_URL="http://www.raspbian.org/"
SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"

Detailed description of the issue:

When installing, roles/kubeadm/tasks/main.yml installs docker-ce (version 18.04.0~ce~3-0~raspbian), which installs /etc/apt/sources.list.d/docker.list. If the process hangs, hits a hiccup, or needs to be restarted for any reason, an apt-get upgrade is called, which bumps Docker to the latest version. The kubeadm init pre-flight check then fails due to an unsupported Docker version.
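
A possible mitigation, sketched below and not part of rak8s as shipped, is to put the Docker package on hold right after the install script runs so a later apt-get upgrade cannot move it past the version kubeadm validates:

```yaml
# Sketch of an extra task for roles/kubeadm/tasks/main.yml: hold docker-ce at the
# version the install script just installed so "apt-get upgrade" leaves it alone.
- name: Hold docker-ce at the installed version
  become: yes
  dpkg_selections:
    name: docker-ce
    selection: hold
```

The hold can be lifted later with selection: install once a newer Docker version is validated by kubeadm.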

Build automated release pipeline

OS running on Ansible host:

NA

Ansible Version (ansible --version):

NA

Uploaded logs showing errors(rak8s/.log/ansible.log)

NA

Raspberry Pi Hardware Version:

NA

Raspberry Pi OS & Version (cat /etc/os-release):

NA

Detailed description of the issue:

It would be amazing if we could use Codefresh, Travis, Drone, Circle, GH Actions, etc. to build new releases on merges to master. This is low effort and high gain for the community, and it would be greatly appreciated.
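
One possible shape for this, sketched with GitHub Actions; the file path, action versions, and the tag-based trigger are assumptions (releasing on tag push is a common approximation of releasing on merges to master):

```yaml
# .github/workflows/release.yml (sketch)
name: release
on:
  push:
    tags:
      - 'v*'          # cut a release whenever a version tag is pushed
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Create GitHub release from the pushed tag
        uses: softprops/action-gh-release@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```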

Cannot ping all nodes

ansible -m ping all fails after adding my nodes to my inventory file:

black | UNREACHABLE! => { "changed": false, "msg": "SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh", "unreachable": true }

I had to put ansible_ssh_pass=PASSWORD in my inventory file and install "sshpass" for this task to succeed.
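
For reference, a minimal sketch of that password-based stopgap as group variables; ansible_ssh_pass is a standard Ansible connection variable and does require sshpass on the control machine. SSH key pairs, as the README recommends, remain the better long-term setup.

```yaml
# group_vars/all.yml (sketch): password auth until SSH keys are copied over.
# "vault_pi_password" is a hypothetical variable you would store with ansible-vault;
# a plain-text password also works here but is discouraged.
ansible_user: pi
ansible_ssh_pass: "{{ vault_pi_password }}"
```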

How to add a node after initial bootstrap of the cluster?

Hi,

I just used rak8s to set up a cluster without any hassle 👍
Glad to have found this project; I really appreciate the effort, thanks.

I was wondering how one would go about adding nodes after the initial bootstrap of the cluster.
Is it as easy as adding the node to the inventory and running ansible-playbook cluster.yml again?

What would the preferred procedure be?
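
Broadly, yes: add the new Pi to the inventory and re-run the playbook, though how cleanly the existing nodes tolerate a re-run depends on the rak8s version, so test on a non-critical cluster first. A sketch, shown in Ansible's YAML inventory form (rak8s ships an INI inventory, so adapt accordingly; host names and IPs are examples):

```yaml
# inventory (sketch, YAML form) -- the new worker is rak8s005
all:
  children:
    prod:
      hosts:
        rak8s000: { ansible_host: 192.168.1.60 }   # existing master
        rak8s001: { ansible_host: 192.168.1.61 }   # existing worker
        rak8s005: { ansible_host: 192.168.1.65 }   # newly added worker
    master:
      hosts:
        rak8s000:
```

After editing the inventory, re-running ansible-playbook cluster.yml should pick up the new host.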

K8S Join Token should be randomized

OS running on Ansible host:

NAME="Ubuntu"
VERSION="16.04.5 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.5 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial

Ansible Version (ansible --version):

ansible 2.5.5
  config file = /media/psf/Home/git/personal/raspk8s/ansible.cfg
  configured module search path = [u'/home/jim/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 2.7.12 (default, Nov 12 2018, 14:36:49) [GCC 5.4.0 20160609]

Uploaded logs showing errors(rak8s/.log/ansible.log)

Raspberry Pi Hardware Version:

B+

Raspberry Pi OS & Version (cat /etc/os-release):

Any

Detailed description of the issue:

group_vars/all.yml has the k8s join token hardcoded; this should be updated to use something like the Ansible password lookup to generate the token and store it on the user's local system.
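
A sketch of what that could look like using Ansible's password lookup; the variable name `token` and the lookup file paths are assumptions, and kubeadm tokens must match the form [a-z0-9]{6}.[a-z0-9]{16}, hence the two parts:

```yaml
# group_vars/all.yml (sketch): generate and persist a join token on the control
# machine instead of shipping a hardcoded one. The files under credentials/ are
# created on first run and reused afterwards.
token: "{{ lookup('password', 'credentials/k8s_token_a chars=ascii_lowercase,digits length=6') }}.{{ lookup('password', 'credentials/k8s_token_b chars=ascii_lowercase,digits length=16') }}"
```

Alternatively, kubeadm token generate on the master produces a token in the correct format, which could be registered and reused by the join tasks.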

pod-network-cidr not correct for flannel

OS running on Ansible host:

Raspbian 9.4

Ansible Version (ansible --version):

2.7.1

Uploaded logs showing errors(rak8s/.log/ansible.log)

Raspberry Pi Hardware Version:

3 Model B

Raspberry Pi OS & Version (cat /etc/os-release):

Raspbian lite 9.4

Detailed description of the issue:

A container running blinkt-k8s-controller was unable to access the kubernetes API server on 10.96.0.1.

This was caused by an incorrect pod-network-cidr in roles/master/tasks/main.yml. The correct pod-network-cidr for flannel is 10.244.0.0/16.
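
For reference, a sketch of the corrected init task; the flags other than --pod-network-cidr, and the variable names, are assumptions about what roles/master/tasks/main.yml already does:

```yaml
# roles/master/tasks/main.yml (sketch): pass the flannel-compatible pod CIDR.
- name: Initialize Master
  command: >
    kubeadm init
    --apiserver-advertise-address={{ ansible_default_ipv4.address }}
    --token={{ token }}
    --pod-network-cidr=10.244.0.0/16
  register: kubeadm_init
```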

Dashboard not installed

Despite the README.md file stating that the dashboard is installed, this is not the case.

I will submit a PR to add the install instruction to the README.

TASK [master : Initialize Master v1.14.1] Fails

OS running on Ansible host:

macOS 10.14.4

Ansible Version (ansible --version):

ansible 2.7.10
  config file = None
  configured module search path = ['/Users/peiman/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/Cellar/ansible/2.7.10/libexec/lib/python3.7/site-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 3.7.3 (default, Mar 29 2019, 15:51:26) [Clang 10.0.1 (clang-1001.0.46.3)]

Uploaded logs showing errors(rak8s/.log/ansible.log)

2019-04-20 10:13:47,555 p=21017 u=peiman | TASK [master : Initialize Master v1.14.1] ****************************************************************************************************************************************************************************************************
2019-04-20 10:21:26,984 p=21017 u=peiman | fatal: [rak8s000]: FAILED! => {"changed": true, "cmd": "kubeadm init --apiserver-advertise-address=192.168.1.60 --token=udy29x.ugyyk3tumg27atmr --kubernetes-version=v1.14.1 --pod-network-cidr=10.244.0.0/16", "delta": "0:07:38.901933", "end": "2019-04-20 08:21:26.902845", "msg": "non-zero return code", "rc": 1, "start": "2019-04-20 08:13:48.000912", "stderr": "\t[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/\nerror execution phase wait-control-plane: couldn't initialize a Kubernetes cluster", "stderr_lines": ["\t[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/", "error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster"], "stdout": "[init] Using Kubernetes version: v1.14.1\n[preflight] Running pre-flight checks\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'\n[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"\n[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"\n[kubelet-start] Activating the kubelet service\n[certs] Using certificateDir folder "/etc/kubernetes/pki"\n[certs] Generating "etcd/ca" certificate and key\n[certs] Generating "etcd/healthcheck-client" certificate and key\n[certs] Generating "apiserver-etcd-client" certificate and key\n[certs] Generating "etcd/server" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [rak8s000 localhost] and IPs [192.168.1.60 127.0.0.1 ::1]\n[certs] Generating "etcd/peer" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [rak8s000 localhost] and IPs [192.168.1.60 127.0.0.1 ::1]\n[certs] Generating "ca" certificate and key\n[certs] Generating "apiserver" certificate and key\n[certs] apiserver serving cert is signed for DNS names [rak8s000 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.60]\n[certs] Generating "apiserver-kubelet-client" certificate and key\n[certs] Generating "front-proxy-ca" certificate and key\n[certs] Generating "front-proxy-client" certificate and key\n[certs] Generating "sa" key and public key\n[kubeconfig] Using kubeconfig folder "/etc/kubernetes"\n[kubeconfig] Writing "admin.conf" kubeconfig file\n[kubeconfig] Writing "kubelet.conf" kubeconfig file\n[kubeconfig] Writing "controller-manager.conf" kubeconfig file\n[kubeconfig] Writing "scheduler.conf" kubeconfig file\n[control-plane] Using manifest folder "/etc/kubernetes/manifests"\n[control-plane] Creating static Pod manifest for "kube-apiserver"\n[control-plane] Creating static Pod manifest for "kube-controller-manager"\n[control-plane] Creating static Pod manifest for "kube-scheduler"\n[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". 
This can take up to 4m0s\n[kubelet-check] Initial timeout of 40s passed.\n[kubelet-check] It seems like the kubelet isn't running or healthy.\n[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.\n[kubelet-check] It seems like the kubelet isn't running or healthy.\n[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.\n[kubelet-check] It seems like the kubelet isn't running or healthy.\n[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.\n[kubelet-check] It seems like the kubelet isn't running or healthy.\n[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.\n[kubelet-check] It seems like the kubelet isn't running or healthy.\n[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.\n\nUnfortunately, an error has occurred:\n\ttimed out waiting for the condition\n\nThis error is likely caused by:\n\t- The kubelet is not running\n\t- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)\n\nIf you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:\n\t- 'systemctl status kubelet'\n\t- 'journalctl -xeu kubelet'\n\nAdditionally, a control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI, e.g. 
docker.\nHere is one example how you may list all Kubernetes containers running in docker:\n\t- 'docker ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'docker logs CONTAINERID'", "stdout_lines": ["[init] Using Kubernetes version: v1.14.1", "[preflight] Running pre-flight checks", "[preflight] Pulling images required for setting up a Kubernetes cluster", "[preflight] This might take a minute or two, depending on the speed of your internet connection", "[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'", "[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"", "[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"", "[kubelet-start] Activating the kubelet service", "[certs] Using certificateDir folder "/etc/kubernetes/pki"", "[certs] Generating "etcd/ca" certificate and key", "[certs] Generating "etcd/healthcheck-client" certificate and key", "[certs] Generating "apiserver-etcd-client" certificate and key", "[certs] Generating "etcd/server" certificate and key", "[certs] etcd/server serving cert is signed for DNS names [rak8s000 localhost] and IPs [192.168.1.60 127.0.0.1 ::1]", "[certs] Generating "etcd/peer" certificate and key", "[certs] etcd/peer serving cert is signed for DNS names [rak8s000 localhost] and IPs [192.168.1.60 127.0.0.1 ::1]", "[certs] Generating "ca" certificate and key", "[certs] Generating "apiserver" certificate and key", "[certs] apiserver serving cert is signed for DNS names [rak8s000 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.60]", "[certs] Generating "apiserver-kubelet-client" certificate and key", "[certs] Generating "front-proxy-ca" certificate and key", "[certs] Generating "front-proxy-client" certificate and key", "[certs] Generating "sa" key and public key", "[kubeconfig] Using kubeconfig folder "/etc/kubernetes"", "[kubeconfig] Writing "admin.conf" kubeconfig file", "[kubeconfig] Writing "kubelet.conf" kubeconfig file", "[kubeconfig] Writing "controller-manager.conf" kubeconfig file", "[kubeconfig] Writing "scheduler.conf" kubeconfig file", "[control-plane] Using manifest folder "/etc/kubernetes/manifests"", "[control-plane] Creating static Pod manifest for "kube-apiserver"", "[control-plane] Creating static Pod manifest for "kube-controller-manager"", "[control-plane] Creating static Pod manifest for "kube-scheduler"", "[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"", "[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". 
This can take up to 4m0s", "[kubelet-check] Initial timeout of 40s passed.", "[kubelet-check] It seems like the kubelet isn't running or healthy.", "[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.", "[kubelet-check] It seems like the kubelet isn't running or healthy.", "[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.", "[kubelet-check] It seems like the kubelet isn't running or healthy.", "[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.", "[kubelet-check] It seems like the kubelet isn't running or healthy.", "[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.", "[kubelet-check] It seems like the kubelet isn't running or healthy.", "[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.", "", "Unfortunately, an error has occurred:", "\ttimed out waiting for the condition", "", "This error is likely caused by:", "\t- The kubelet is not running", "\t- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)", "", "If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:", "\t- 'systemctl status kubelet'", "\t- 'journalctl -xeu kubelet'", "", "Additionally, a control plane component may have crashed or exited when started by the container runtime.", "To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.", "Here is one example how you may list all Kubernetes containers running in docker:", "\t- 'docker ps -a | grep kube | grep -v pause'", "\tOnce you have found the failing container, you can inspect its logs with:", "\t- 'docker logs CONTAINERID'"]}

Raspberry Pi Hardware Version:

5 x Raspberry Pi 3 Model B Rev 1.2

Raspberry Pi OS & Version (cat /etc/os-release):

PRETTY_NAME="Raspbian GNU/Linux 9 (stretch)"
NAME="Raspbian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=raspbian
ID_LIKE=debian
HOME_URL="http://www.raspbian.org/"
SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"

Detailed description of the issue:

Here is my inventory file:
[dev]

[prod]
rak8s000 ansible_host=192.168.1.60
rak8s001 ansible_host=192.168.1.61
rak8s002 ansible_host=192.168.1.62
rak8s003 ansible_host=192.168.1.63
rak8s004 ansible_host=192.168.1.64

[master]
rak8s000


I ran cleanup.yml and then cluster.yml, and received the error that you can see above in the Ansible log.

rak8s git commit I used: d1b14ec
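
The pre-flight warning in the log points at the Docker cgroup driver, and the guide it links (https://kubernetes.io/docs/setup/cri/) recommends switching Docker to the systemd driver. A sketch of doing that with Ansible before kubeadm init runs; the path is the standard Docker default, and whether this alone resolves the kubelet health-check timeout is not guaranteed:

```yaml
# Sketch: align Docker's cgroup driver with the recommended "systemd" driver.
- name: Configure Docker to use the systemd cgroup driver
  become: yes
  copy:
    dest: /etc/docker/daemon.json
    content: |
      {
        "exec-opts": ["native.cgroupdriver=systemd"]
      }

- name: Restart Docker to pick up the new cgroup driver
  become: yes
  service:
    name: docker
    state: restarted
```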

Cannot ping to pi's with ansible.

I just got my Pis up and ready to start running Kubernetes. The only thing is I've never used Ansible, so when I install it on my machine, configure the inventory, and then ping, I get this:

pik8s000 | UNREACHABLE! => {
    "changed": false, 
    "msg": "SSH Error: data could not be sent to remote host \"192.168.1.164\". Make sure this host can be reached over ssh", 
    "unreachable": true
}
pik8s003 | UNREACHABLE! => {
    "changed": false, 
    "msg": "SSH Error: data could not be sent to remote host \"192.168.1.167\". Make sure this host can be reached over ssh", 
    "unreachable": true
}
pik8s002 | UNREACHABLE! => {
    "changed": false, 
    "msg": "SSH Error: data could not be sent to remote host \"192.168.1.166\". Make sure this host can be reached over ssh", 
    "unreachable": true
}
pik8s001 | UNREACHABLE! => {
    "changed": false, 
    "msg": "SSH Error: data could not be sent to remote host \"192.168.1.165\". Make sure this host can be reached over ssh", 
    "unreachable": true
}

Any idea why?

Master node cannot join cluster

Thanks to @chris-short's PR, our problem initializing the master is fixed, but right after that, when the master tries to join the Kubernetes cluster, it just hangs.

TASK [master : Join Kubernetes Cluster] ***************************************************************
[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using `result|succeeded` 
instead use `result is succeeded`. This feature will be removed in version 2.9. Deprecation warnings 
can be disabled by setting deprecation_warnings=False in ansible.cfg.

@asachs01 Did you ever get past this problem as well?

The docker install script fails

The Docker install script fails with the exception below:

TASK [kubeadm : Run Docker Install Script]

"stdout_lines": [
        "", 
        "# Executing docker install script, commit: 1d31602", 
        "+ sh -c apt-get update -qq >/dev/null", 
        "+ sh -c apt-get install -y -qq apt-transport-https ca-certificates curl >/dev/null", 
        "+ sh -c curl -fsSL \"https://download.docker.com/linux/raspbian/gpg\" | apt-key add -qq - >/dev/null", 
        "Warning: apt-key output should not be parsed (stdout is not a terminal)", 
        "+ sh -c echo \"deb [arch=armhf] https://download.docker.com/linux/raspbian stretch edge\" > /etc/apt/sources.list.d/docker.list", 
        "+ [ raspbian = debian ]", 
        "+ sh -c apt-get update -qq >/dev/null", 
        "+ sh -c apt-get install -y -qq --no-install-recommends docker-ce >/dev/null", 
        "E: Sub-process /usr/bin/dpkg returned an error code (1)"
    ]

Initialize Master failing

OS running on Ansible host: Ubuntu 16.04

Ansible Version (ansible --version): 2.5.3

Uploaded logs showing errors(rak8s/.log/ansible.log)

TASK [master : Reset Kubernetes Master] ******************************************************************
changed: [kube-master]

TASK [master : Initialize Master] ************************************************************************
[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using `result|succeeded` instead
use `result is succeeded`. This feature will be removed in version 2.9. Deprecation warnings can be
disabled by setting deprecation_warnings=False in ansible.cfg.
fatal: [kube-master]: FAILED! => {"changed": true, "cmd": "kubeadm init --apiserver-advertise-address=192.168.1.5 --token=udy29x.ugyyk3tumg27atmr", "delta": "0:00:02.819967", "end": "2018-05-28 17:28:53.537368", "msg": "non-zero return code", "rc": 2, "start": "2018-05-28 17:28:50.717401", "stderr": "\t[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.05.0-ce. Max validated version: 17.03\n\t[WARNING FileExisting-crictl]: crictl not found in system path\nSuggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl\n[preflight] Some fatal errors occurred:\n\t[ERROR KubeletVersion]: couldn't get kubelet version: exit status 2\n[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`", "stderr_lines": ["\t[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.05.0-ce. Max validated version: 17.03", "\t[WARNING FileExisting-crictl]: crictl not found in system path", "Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl", "[preflight] Some fatal errors occurred:", "\t[ERROR KubeletVersion]: couldn't get kubelet version: exit status 2", "[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`"], "stdout": "[init] Using Kubernetes version: v1.10.3\n[init] Using Authorization modes: [Node RBAC]\n[preflight] Running pre-flight checks.", "stdout_lines": ["[init] Using Kubernetes version: v1.10.3", "[init] Using Authorization modes: [Node RBAC]", "[preflight] Running pre-flight checks."]}

PLAY RECAP ***********************************************************************************************
kube-master                : ok=15   changed=11   unreachable=0    failed=1

Raspberry Pi Hardware Version: RPi 3B+

Raspberry Pi OS & Version (cat /etc/os-release): Raspbian GNU/Linux 9 (stretch)

Detailed description of the issue:

I receive the above logs on a fresh install on the master. I haven't played around with it yet but figured I would let you know.

Suggestion: Create Slack Channel for Support-y Issues

@chris-short Love the work you've been doing on rak8s. One suggestion I have is to create a Slack channel so the repo doesn't keep getting flooded by support requests that don't exactly have to do with the playbook itself. It might be better to use Slack to answer questions about k8s/Docker/Ansible than the repo. ¯\_(ツ)_/¯

coredns gets killed by oom

I deployed a cluster with 3 nodes (master + 2 worker nodes).

After the deployment is finished, the cluster is fine, but when I schedule services, the two coredns pods get killed by out-of-memory (OOM). That puts coredns into a crash loop.

I'm not sure whether this is a config problem on my side, but I fixed it by using kube-dns [1] instead of coredns, passing --feature-gates=CoreDNS=false.

[1] https://kubernetes.io/docs/tasks/administer-cluster/coredns/#installing-kube-dns-instead-of-coredns-with-kubeadm
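
A sketch of that workaround expressed as the init task; the other flags and variable names are assumptions about what the existing master role already passes:

```yaml
# Sketch: initialize with kube-dns instead of CoreDNS, per the workaround above.
- name: Initialize Master with kube-dns
  command: >
    kubeadm init
    --apiserver-advertise-address={{ ansible_default_ipv4.address }}
    --token={{ token }}
    --feature-gates=CoreDNS=false
  register: kubeadm_init
```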

Docker Version: 18.04.0-ce
K8s: v1.11.4

Nodes not ready on raspberrypi

OS running on Ansible host:

Linux Mint 18

Ansible Version (ansible --version):

2.5.1

Uploaded logs showing errors(rak8s/.log/ansible.log)

n/a

Raspberry Pi Hardware Version:

Raspi 3 B

Raspberry Pi OS & Version (cat /etc/os-release):

PRETTY_NAME="Raspbian GNU/Linux 9 (stretch)"
NAME="Raspbian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=raspbian
ID_LIKE=debian

Detailed description of the issue:

I've set up 3 Raspberry Pis with the 2018-03-13-raspbian-stretch-lite.img and the Ansible scripts from tag 0.1.5 of this repo.
After a few reboots kubectl works, but "sudo kubectl get nodes" reports the master node and a worker node as NotReady.
On the master node "kubectl describe node ..." reports the following:

runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized.

After that I tried "sudo kubeadm init" directly on the master node (just to check what happens) and I get:

WARNING: [init] Using Kubernetes version: v1.10.1
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.04.0-ce. Max validated version: 17.03
[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Some fatal errors occurred:
[ERROR Port-6443]: Port 6443 is in use
[ERROR Port-10250]: Port 10250 is in use
[ERROR Port-10251]: Port 10251 is in use
[ERROR Port-10252]: Port 10252 is in use
[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
[ERROR Port-2379]: Port 2379 is in use
[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
CPU hardcapping unsupported

By the way: during the setup with the Ansible scripts I also hit bug #26.
I then executed the corresponding command directly on my master node and that succeeded.
After that I was able to rerun the playbook.
I am running the playbook from a laptop "outside" the Raspberry Pis.
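
When nodes sit in NotReady with "cni config uninitialized", the usual cause is that the pod network add-on never got applied; the playbook deploys Weave Net (the weave-net pods appear in logs earlier in this document). A hedged sketch of re-applying it from the master; the URL is Weave's documented install endpoint, and you should verify it matches the Weave version your rak8s tag expects:

```yaml
# Sketch: re-apply the Weave Net manifest on the master if the CNI config is missing.
# Run this where kubectl is configured (the master by default).
- name: (Re)apply Weave Net
  shell: >
    kubectl apply -f
    "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
```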

Dashboard gets 404 when attempting to get the yaml from github

OS running on Ansible host:

MacOS High Sierra

Ansible Version (ansible --version):

ansible 2.9.3

Uploaded logs showing errors(rak8s/.log/ansible.log)

{"changed": true, "cmd": "kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard-arm.yaml", "delta": "0:00:07.697219", "end": "2020-08-20 02:55:48.938354", "msg": "non-zero return code", "rc": 1, "start": "2020-08-20 02:55:41.241135", "stderr": "error: unable to read URL \"https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard-arm.yaml\", server reported 404 Not Found, status code=404", "stderr_lines": ["error: unable to read URL \"https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard-arm.yaml\", server reported 404 Not Found, status code=404"], "stdout": "", "stdout_lines": []}
[dashboard.log](https://github.com/rak8s/rak8s/files/5103269/dashboard.log)

Raspberry Pi Hardware Version:

Raspberry Pi 4 (2GB)

Raspberry Pi OS & Version (cat /etc/os-release):

Ubuntu 20.04

Detailed description of the issue:

The dashboard task gets a 404 when it attempts to fetch the YAML from GitHub:
https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard-arm.yaml
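
The ARM-specific manifest appears to have been removed from the dashboard repository's master branch, which is why the task now gets a 404. A minimal workaround sketch, assuming a pinned dashboard release with multi-arch images (v2.x) is acceptable for your cluster; the exact version below is an example, not something this repo prescribes:

    # Apply a pinned, multi-arch dashboard release instead of the removed -arm file on master
    kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml

Note that v2.x deploys into its own kubernetes-dashboard namespace rather than kube-system, so the kubectl proxy URL changes accordingly. If you are fixing this in the playbook itself, point the dashboard task at the pinned URL instead of master.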

Initialize master fails with systemctl status docker.service

I tried to install a cluster on freshly imaged Pis running Raspbian Lite (2018-03-13 release).
After adding "become: yes" to cluster.yml it works until the task [master: Initialize master], which fails.
My assumption was that Docker 18.04 is installed while kubeadm only validates up to 17.03,
but in the output below the Docker version is only a warning; the fatal preflight error is the missing memory cgroup.

It fails with the following error:
fatal: [raspic0]: FAILED! => {"changed": true, "cmd": "kubeadm init --apiserver-advertise-address=192.168.1.104 --token=udy29x.ugyyk3tumg27atmr", "delta": "0:00:02.779513", "end": "2018-04-11 20:46:00.398347", "failed": true, "rc": 2, "start": "2018-04-11 20:45:57.618834", "stderr": "\t[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.04.0-ce. Max validated version: 17.03\n\t[WARNING FileExisting-crictl]: crictl not found in system path\nSuggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl\n[preflight] Some fatal errors occurred:\n\t[ERROR SystemVerification]: missing cgroups: memory\n[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...", "stderr_lines": ["\t[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.04.0-ce. Max validated version: 17.03", "\t[WARNING FileExisting-crictl]: crictl not found in system path", "Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl", "[preflight] Some fatal errors occurred:", "\t[ERROR SystemVerification]: missing cgroups: memory", "[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=..."], "stdout": "[init] Using Kubernetes version: v1.10.0\n[init] Using Authorization modes: [Node RBAC]\n[preflight] Running pre-flight checks.\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m4.14.30-v7+\u001b[0m\n\u001b[0;37mCONFIG_NAMESPACES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCONFIG_NET_NS\u001b[0m: ...
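
The fatal error here is "[ERROR SystemVerification]: missing cgroups: memory", not the Docker version warning. On Raspbian the memory cgroup is disabled by default and has to be enabled via the kernel command line. A minimal sketch of checking and enabling it by hand on each Pi, assuming a stock Raspbian /boot/cmdline.txt (the playbook may also handle this once it can complete):

    # The "memory" row of /proc/cgroups should show a 1 in the "enabled" column
    grep memory /proc/cgroups

    # If it shows 0, append the cgroup flags to the single line in /boot/cmdline.txt and reboot
    sudo sed -i '$ s/$/ cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1/' /boot/cmdline.txt
    sudo reboot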
