kylix3511.mylabserver.com >>>> BOX_OS=centos KUBERNETES_VERSION=1.13.3 CLUSTER_NAME=k8s-centos NODE_MEMORY_SIZE_GB=2 MASTER_MEMORY_SIZE_GB=3 NODE_CPUS=2 MASTER_CPUS=2 NODE_COUNT=2 make -j8 up
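For readers following along, the variables passed on the `make` command line above are the tunables of this run. The names come from the log itself; the one-line descriptions are my informal reading of them:

```shell
# Variables from the invocation above (descriptions are informal):
BOX_OS=centos               # base box flavour (generic/centos7)
KUBERNETES_VERSION=1.13.3   # version pin for the kubelet/kubeadm packages
CLUSTER_NAME=k8s-centos     # name used for the cluster
NODE_MEMORY_SIZE_GB=2       # RAM per worker node VM
MASTER_MEMORY_SIZE_GB=3     # RAM for the master VM
NODE_CPUS=2                 # vCPUs per worker node
MASTER_CPUS=2               # vCPUs for the master
NODE_COUNT=2                # number of worker nodes
```

They are passed as environment assignments in front of `make -j8 up`, which brings the master and both nodes up in parallel.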
if !(vagrant box list | grep -q generic/centos7); then \
    vagrant box add --provider=virtualbox generic/centos7; \
else \
    vagrant box update --box=generic/centos7; \
fi
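The recipe echoed above is a simple idempotent pattern: add the box only if it is not installed yet, otherwise just check for updates. As a standalone sketch (the `ensure_box` helper name is mine, not the project's):

```shell
# Idempotent "ensure box" pattern, as used by the Makefile recipe above:
# add the box if it is missing, otherwise check it for updates.
ensure_box() {
  box="$1"
  if ! vagrant box list | grep -q "$box"; then
    vagrant box add --provider=virtualbox "$box"
  else
    vagrant box update --box="$box"
  fi
}

# ensure_box generic/centos7
```

In this run the box was already installed at v1.9.2, so only the update check ran.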
Checking for updates to 'generic/centos7'
Latest installed version: 1.9.2
Version constraints: > 1.9.2
Provider: virtualbox
Box 'generic/centos7' (v1.9.2) is running the latest version.
vagrant up
NODE=1 vagrant up
NODE=2 vagrant up
Bringing machine 'node1' up with 'virtualbox' provider...
Bringing machine 'node2' up with 'virtualbox' provider...
Bringing machine 'master' up with 'virtualbox' provider...
==> node2: Importing base box 'generic/centos7'...
==> node1: Importing base box 'generic/centos7'...
==> master: Importing base box 'generic/centos7'...
==> node2: Matching MAC address for NAT networking...
==> node1: Matching MAC address for NAT networking...
==> node2: Checking if box 'generic/centos7' version '1.9.2' is up to date...
==> node1: Checking if box 'generic/centos7' version '1.9.2' is up to date...
==> master: Matching MAC address for NAT networking...
==> master: Checking if box 'generic/centos7' version '1.9.2' is up to date...
==> node2: Setting the name of the VM: k8s-vagrant-multi-node_node2_1550706372510_1712
==> node1: Setting the name of the VM: k8s-vagrant-multi-node_node1_1550706372602_27448
==> master: Setting the name of the VM: k8s-vagrant-multi-node_master_1550706372913_72605
==> node1: Clearing any previously set network interfaces...
==> master: Fixed port collision for 22 => 2222. Now on port 2200.
==> master: Clearing any previously set network interfaces...
==> master: Preparing network interfaces based on configuration...
master: Adapter 1: nat
master: Adapter 2: hostonly
==> master: Forwarding ports...
master: 22 (guest) => 2200 (host) (adapter 1)
==> master: Running 'pre-boot' VM customizations...
==> master: Booting VM...
==> node1: Preparing network interfaces based on configuration...
node1: Adapter 1: nat
node1: Adapter 2: hostonly
==> node1: Forwarding ports...
node1: 22 (guest) => 2222 (host) (adapter 1)
==> node1: Running 'pre-boot' VM customizations...
==> master: Waiting for machine to boot. This may take a few minutes...
==> node2: Fixed port collision for 22 => 2222. Now on port 2201.
==> node2: Clearing any previously set network interfaces...
==> node1: Booting VM...
master: SSH address: 127.0.0.1:2200
master: SSH username: vagrant
master: SSH auth method: private key
==> node2: Preparing network interfaces based on configuration...
node2: Adapter 1: nat
node2: Adapter 2: hostonly
==> node2: Forwarding ports...
node2: 22 (guest) => 2201 (host) (adapter 1)
==> node1: Waiting for machine to boot. This may take a few minutes...
==> node2: Running 'pre-boot' VM customizations...
node1: SSH address: 127.0.0.1:2222
node1: SSH username: vagrant
node1: SSH auth method: private key
==> node2: Booting VM...
==> node2: Waiting for machine to boot. This may take a few minutes...
node2: SSH address: 127.0.0.1:2201
node2: SSH username: vagrant
node2: SSH auth method: private key
master: Warning: Remote connection disconnect. Retrying...
master: Warning: Connection reset. Retrying...
node1: Warning: Connection reset. Retrying...
node2: Warning: Connection reset. Retrying...
node1: Warning: Remote connection disconnect. Retrying...
master: Warning: Connection reset. Retrying...
node1: Warning: Connection reset. Retrying...
node2: Warning: Connection reset. Retrying...
node2: Warning: Remote connection disconnect. Retrying...
master: Warning: Connection reset. Retrying...
node1: Warning: Connection reset. Retrying...
node2: Warning: Connection reset. Retrying...
master:
master: Vagrant insecure key detected. Vagrant will automatically replace
master: this with a newly generated keypair for better security.
node1:
node1: Vagrant insecure key detected. Vagrant will automatically replace
node1: this with a newly generated keypair for better security.
node2:
node2: Vagrant insecure key detected. Vagrant will automatically replace
node2: this with a newly generated keypair for better security.
master:
master: Inserting generated public key within guest...
master: Removing insecure key from the guest if it's present...
node1:
node1: Inserting generated public key within guest...
master: Key inserted! Disconnecting and reconnecting using new SSH key...
node1: Removing insecure key from the guest if it's present...
node1: Key inserted! Disconnecting and reconnecting using new SSH key...
==> node1: Machine booted and ready!
==> node1: Checking for guest additions in VM...
node1: The guest additions on this VM do not match the installed version of
node1: VirtualBox! In most cases this is fine, but in rare cases it can
node1: prevent things such as shared folders from working properly. If you see
node1: shared folder errors, please make sure the guest additions within the
node1: virtual machine match the version of VirtualBox you have installed on
node1: your host and reload your VM.
node1:
node1: Guest Additions Version: 5.1.38
node1: VirtualBox Version: 6.0
==> node1: Setting hostname...
==> master: Machine booted and ready!
==> master: Checking for guest additions in VM...
master: The guest additions on this VM do not match the installed version of
master: VirtualBox! In most cases this is fine, but in rare cases it can
master: prevent things such as shared folders from working properly. If you see
master: shared folder errors, please make sure the guest additions within the
master: virtual machine match the version of VirtualBox you have installed on
master: your host and reload your VM.
master:
master: Guest Additions Version: 5.1.38
master: VirtualBox Version: 6.0
==> master: Setting hostname...
node2:
node2: Inserting generated public key within guest...
node2: Removing insecure key from the guest if it's present...
node2: Key inserted! Disconnecting and reconnecting using new SSH key...
==> node1: Configuring and enabling network interfaces...
==> master: Configuring and enabling network interfaces...
==> node2: Machine booted and ready!
==> node2: Checking for guest additions in VM...
node2: The guest additions on this VM do not match the installed version of
node2: VirtualBox! In most cases this is fine, but in rare cases it can
node2: prevent things such as shared folders from working properly. If you see
node2: shared folder errors, please make sure the guest additions within the
node2: virtual machine match the version of VirtualBox you have installed on
node2: your host and reload your VM.
node2:
node2: Guest Additions Version: 5.1.38
node2: VirtualBox Version: 6.0
==> node2: Setting hostname...
==> node2: Configuring and enabling network interfaces...
==> node1: Installing rsync to the VM...
==> master: Installing rsync to the VM...
==> node2: Installing rsync to the VM...
==> node2: Rsyncing folder: /Users/rameshkumar/k8s/k8s-vagrant-multi-node/data/centos-node2/ => /data
==> node2: Running provisioner: shell...
node2: Running: inline script
node2: net.ipv6.conf.all.disable_ipv6 = 0
node2: net.ipv6.conf.default.disable_ipv6 = 0
node2: net.ipv6.conf.lo.disable_ipv6 = 0
node2: net.ipv6.conf.all.accept_dad = 0
node2: net.ipv6.conf.default.accept_dad = 0
node2: net.bridge.bridge-nf-call-iptables = 1
node2: Created symlink from /etc/systemd/system/default.target.wants/ip-set-mtu.service to /etc/systemd/system/ip-set-mtu.service.
==> node1: Rsyncing folder: /Users/rameshkumar/k8s/k8s-vagrant-multi-node/data/centos-node1/ => /data
==> master: Rsyncing folder: /Users/rameshkumar/k8s/k8s-vagrant-multi-node/data/centos-master/ => /data
==> node1: Running provisioner: shell...
==> master: Running provisioner: shell...
node1: Running: inline script
master: Running: inline script
node1: net.ipv6.conf.all.disable_ipv6 = 0
node1: net.ipv6.conf.default.disable_ipv6 = 0
node1: net.ipv6.conf.lo.disable_ipv6 = 0
node1: net.ipv6.conf.all.accept_dad = 0
node1: net.ipv6.conf.default.accept_dad = 0
node1: net.bridge.bridge-nf-call-iptables = 1
node1: Created symlink from /etc/systemd/system/default.target.wants/ip-set-mtu.service to /etc/systemd/system/ip-set-mtu.service.
master: net.ipv6.conf.all.disable_ipv6 = 0
master: net.ipv6.conf.default.disable_ipv6 = 0
master: net.ipv6.conf.lo.disable_ipv6 = 0
master: net.ipv6.conf.all.accept_dad = 0
master: net.ipv6.conf.default.accept_dad = 0
master: net.bridge.bridge-nf-call-iptables = 1
master: Created symlink from /etc/systemd/system/default.target.wants/ip-set-mtu.service to /etc/systemd/system/ip-set-mtu.service.
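The inline shell provisioner applied the same kernel settings on all three machines. Reconstructed from the printed key/value pairs (this is my reconstruction, not the project's actual script), the settings amount to a sysctl fragment along these lines:

```ini
; Reconstructed from the provisioner output above
; IPv6 stays enabled, with duplicate-address detection turned off
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.all.accept_dad = 0
net.ipv6.conf.default.accept_dad = 0
; required by Kubernetes so bridged pod traffic is visible to iptables
net.bridge.bridge-nf-call-iptables = 1
```

The `ip-set-mtu.service` symlink at the end enables a small systemd unit so the setting survives reboots.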
==> node2: Running provisioner: shell...
node2: Running: inline script
node2: ++ cat
node2: ++ '[' -n 1.13.3 ']'
node2: ++ KUBERNETES_PACKAGES='kubelet-1.13.3 kubeadm-1.13.3'
node2: ++ setenforce 0
node2: ++ sed -i s/SELINUX=enforcing/SELINUX=permissive/g /etc/selinux/config
node2: ++ yum clean expire-cache
node2: Loaded plugins: fastestmirror
node2: Cleaning repos: base epel extras kubernetes updates
node2: 7 metadata files removed
node2: ++ yum install --nogpgcheck -y net-tools screen tree telnet conntrack socat docker rsync kubelet-1.13.3 kubeadm-1.13.3
node2: Loaded plugins: fastestmirror
node2: Loading mirror speeds from cached hostfile
node2: * base: mirror.fileplanet.com
node2: * epel: sjc.edge.kernel.org
node2: * extras: sjc.edge.kernel.org
node2: * updates: mirror.fileplanet.com
==> node1: Running provisioner: shell...
==> master: Running provisioner: shell...
node1: Running: inline script
node1: ++ cat
node1: ++ '[' -n 1.13.3 ']'
node1: ++ KUBERNETES_PACKAGES='kubelet-1.13.3 kubeadm-1.13.3'
node1: ++ setenforce 0
node1: ++ sed -i s/SELINUX=enforcing/SELINUX=permissive/g /etc/selinux/config
node1: ++ yum clean expire-cache
master: Running: inline script
master: ++ cat
master: ++ '[' -n 1.13.3 ']'
master: ++ KUBERNETES_PACKAGES='kubelet-1.13.3 kubeadm-1.13.3'
master: ++ setenforce 0
master: ++ sed -i s/SELINUX=enforcing/SELINUX=permissive/g /etc/selinux/config
master: ++ yum clean expire-cache
node1: Loaded plugins: fastestmirror
node1: Cleaning repos: base epel extras kubernetes updates
node1: 7 metadata files removed
node1: ++ yum install --nogpgcheck -y net-tools screen tree telnet conntrack socat docker rsync kubelet-1.13.3 kubeadm-1.13.3
master: Loaded plugins: fastestmirror
master: Cleaning repos: base epel extras kubernetes updates
master: 7 metadata files removed
master: ++ yum install --nogpgcheck -y net-tools screen tree telnet conntrack socat docker rsync kubelet-1.13.3 kubeadm-1.13.3
node1: Loaded plugins: fastestmirror
node2: Package net-tools-2.0-0.24.20131004git.el7.x86_64 already installed and latest version
node2: Package 1:telnet-0.17-64.el7.x86_64 already installed and latest version
node2: Package rsync-3.1.2-4.el7.x86_64 already installed and latest version
node1: Loading mirror speeds from cached hostfile
master: Loaded plugins: fastestmirror
master: Loading mirror speeds from cached hostfile
node2: Resolving Dependencies
node2: --> Running transaction check
node2: ---> Package conntrack-tools.x86_64 0:1.4.4-4.el7 will be installed
node2: --> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.1)(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
node2: --> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.0)(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
node2: --> Processing Dependency: libnetfilter_cthelper.so.0(LIBNETFILTER_CTHELPER_1.0)(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
node2: --> Processing Dependency: libnetfilter_queue.so.1()(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
node2: --> Processing Dependency: libnetfilter_cttimeout.so.1()(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
node2: --> Processing Dependency: libnetfilter_cthelper.so.0()(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
node2: ---> Package docker.x86_64 2:1.13.1-91.git07f3374.el7.centos will be installed
node2: --> Processing Dependency: docker-common = 2:1.13.1-91.git07f3374.el7.centos for package: 2:docker-1.13.1-91.git07f3374.el7.centos.x86_64
node2: --> Processing Dependency: docker-client = 2:1.13.1-91.git07f3374.el7.centos for package: 2:docker-1.13.1-91.git07f3374.el7.centos.x86_64
node2: --> Processing Dependency: subscription-manager-rhsm-certificates for package: 2:docker-1.13.1-91.git07f3374.el7.centos.x86_64
node2: ---> Package kubeadm.x86_64 0:1.13.3-0 will be installed
node2: --> Processing Dependency: kubernetes-cni >= 0.6.0 for package: kubeadm-1.13.3-0.x86_64
node2: --> Processing Dependency: kubectl >= 1.6.0 for package: kubeadm-1.13.3-0.x86_64
node2: --> Processing Dependency: cri-tools >= 1.11.0 for package: kubeadm-1.13.3-0.x86_64
node2: ---> Package kubelet.x86_64 0:1.13.3-0 will be installed
node2: ---> Package screen.x86_64 0:4.1.0-0.25.20120314git3c2946.el7 will be installed
node2: ---> Package socat.x86_64 0:1.7.3.2-2.el7 will be installed
node2: ---> Package tree.x86_64 0:1.6.0-10.el7 will be installed
node2: --> Running transaction check
node2: ---> Package cri-tools.x86_64 0:1.12.0-0 will be installed
node2: ---> Package docker-client.x86_64 2:1.13.1-91.git07f3374.el7.centos will be installed
node2: ---> Package docker-common.x86_64 2:1.13.1-91.git07f3374.el7.centos will be installed
node2: --> Processing Dependency: skopeo-containers >= 1:0.1.26-2 for package: 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64
node2: --> Processing Dependency: oci-umount >= 2:2.3.3-3 for package: 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64
node2: --> Processing Dependency: oci-systemd-hook >= 1:0.1.4-9 for package: 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64
node2: --> Processing Dependency: oci-register-machine >= 1:0-5.13 for package: 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64
node2: --> Processing Dependency: container-storage-setup >= 0.9.0-1 for package: 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64
node2: --> Processing Dependency: container-selinux >= 2:2.51-1 for package: 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64
node2: --> Processing Dependency: atomic-registries for package: 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64
node2: ---> Package kubectl.x86_64 0:1.13.3-0 will be installed
node2: ---> Package kubernetes-cni.x86_64 0:0.6.0-0 will be installed
node2: ---> Package libnetfilter_cthelper.x86_64 0:1.0.0-9.el7 will be installed
node2: ---> Package libnetfilter_cttimeout.x86_64 0:1.0.0-6.el7 will be installed
node2: ---> Package libnetfilter_queue.x86_64 0:1.0.2-2.el7_2 will be installed
node2: ---> Package subscription-manager-rhsm-certificates.x86_64 0:1.21.10-3.el7.centos will be installed
node2: --> Running transaction check
node2: ---> Package atomic-registries.x86_64 1:1.22.1-26.gitb507039.el7.centos will be installed
node2: --> Processing Dependency: python-yaml for package: 1:atomic-registries-1.22.1-26.gitb507039.el7.centos.x86_64
node2: --> Processing Dependency: python-setuptools for package: 1:atomic-registries-1.22.1-26.gitb507039.el7.centos.x86_64
node2: --> Processing Dependency: python-pytoml for package: 1:atomic-registries-1.22.1-26.gitb507039.el7.centos.x86_64
node2: ---> Package container-selinux.noarch 2:2.74-1.el7 will be installed
node2: --> Processing Dependency: policycoreutils-python for package: 2:container-selinux-2.74-1.el7.noarch
node2: ---> Package container-storage-setup.noarch 0:0.11.0-2.git5eaf76c.el7 will be installed
node2: ---> Package containers-common.x86_64 1:0.1.31-8.gitb0b750d.el7.centos will be installed
node2: ---> Package oci-register-machine.x86_64 1:0-6.git2b44233.el7 will be installed
node2: ---> Package oci-systemd-hook.x86_64 1:0.1.18-3.git8787307.el7_6 will be installed
node2: --> Processing Dependency: libyajl.so.2()(64bit) for package: 1:oci-systemd-hook-0.1.18-3.git8787307.el7_6.x86_64
node2: ---> Package oci-umount.x86_64 2:2.3.4-2.git87f9237.el7 will be installed
node2: --> Running transaction check
node2: ---> Package PyYAML.x86_64 0:3.10-11.el7 will be installed
node2: --> Processing Dependency: libyaml-0.so.2()(64bit) for package: PyYAML-3.10-11.el7.x86_64
node2: ---> Package policycoreutils-python.x86_64 0:2.5-29.el7_6.1 will be installed
node2: --> Processing Dependency: setools-libs >= 3.3.8-4 for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
node2: --> Processing Dependency: libsemanage-python >= 2.5-14 for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
node2: --> Processing Dependency: audit-libs-python >= 2.1.3-4 for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
node2: --> Processing Dependency: python-IPy for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
node2: --> Processing Dependency: libqpol.so.1(VERS_1.4)(64bit) for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
node2: --> Processing Dependency: libqpol.so.1(VERS_1.2)(64bit) for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
node2: --> Processing Dependency: libcgroup for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
node2: --> Processing Dependency: libapol.so.4(VERS_4.0)(64bit) for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
node2: --> Processing Dependency: checkpolicy for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
node2: --> Processing Dependency: libqpol.so.1()(64bit) for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
node2: --> Processing Dependency: libapol.so.4()(64bit) for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
node2: ---> Package python-pytoml.noarch 0:0.1.14-1.git7dea353.el7 will be installed
node2: ---> Package python-setuptools.noarch 0:0.9.8-7.el7 will be installed
node2: --> Processing Dependency: python-backports-ssl_match_hostname for package: python-setuptools-0.9.8-7.el7.noarch
node2: ---> Package yajl.x86_64 0:2.0.4-4.el7 will be installed
node2: --> Running transaction check
node2: ---> Package audit-libs-python.x86_64 0:2.8.4-4.el7 will be installed
node2: ---> Package checkpolicy.x86_64 0:2.5-8.el7 will be installed
node2: ---> Package libcgroup.x86_64 0:0.41-20.el7 will be installed
node2: ---> Package libsemanage-python.x86_64 0:2.5-14.el7 will be installed
node2: ---> Package libyaml.x86_64 0:0.1.4-11.el7_0 will be installed
node2: ---> Package python-IPy.noarch 0:0.75-6.el7 will be installed
node2: ---> Package python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 will be installed
node2: --> Processing Dependency: python-ipaddress for package: python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch
node2: --> Processing Dependency: python-backports for package: python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch
node2: ---> Package setools-libs.x86_64 0:3.3.8-4.el7 will be installed
node2: --> Running transaction check
node2: ---> Package python-backports.x86_64 0:1.0-8.el7 will be installed
node2: ---> Package python-ipaddress.noarch 0:1.0.16-2.el7 will be installed
node2: --> Finished Dependency Resolution
node2:
node2: Dependencies Resolved
node2:
node2: ================================================================================
node2: Package Arch Version Repository Size
node2: ================================================================================
node2: Installing:
node2: conntrack-tools x86_64 1.4.4-4.el7 base 186 k
node2: docker x86_64 2:1.13.1-91.git07f3374.el7.centos extras 18 M
node2: kubeadm x86_64 1.13.3-0 kubernetes 7.9 M
node2: kubelet x86_64 1.13.3-0 kubernetes 21 M
node2: screen x86_64 4.1.0-0.25.20120314git3c2946.el7 base 552 k
node2: socat x86_64 1.7.3.2-2.el7 base 290 k
node2: tree x86_64 1.6.0-10.el7 base 46 k
node2: Installing for dependencies:
node2: PyYAML x86_64 3.10-11.el7 base 153 k
node2: atomic-registries x86_64 1:1.22.1-26.gitb507039.el7.centos extras 35 k
node2: audit-libs-python x86_64 2.8.4-4.el7 base 76 k
node2: checkpolicy x86_64 2.5-8.el7 base 295 k
node2: container-selinux noarch 2:2.74-1.el7 extras 38 k
node2: container-storage-setup
node2: noarch 0.11.0-2.git5eaf76c.el7 extras 35 k
node2: containers-common x86_64 1:0.1.31-8.gitb0b750d.el7.centos extras 21 k
node2: cri-tools x86_64 1.12.0-0 kubernetes 4.2 M
node2: docker-client x86_64 2:1.13.1-91.git07f3374.el7.centos extras 3.9 M
node2: docker-common x86_64 2:1.13.1-91.git07f3374.el7.centos extras 95 k
node2: kubectl x86_64 1.13.3-0 kubernetes 8.5 M
node2: kubernetes-cni x86_64 0.6.0-0 kubernetes 8.6 M
node2: libcgroup x86_64 0.41-20.el7 base 66 k
node2: libnetfilter_cthelper
node2: x86_64 1.0.0-9.el7 base 18 k
node2: libnetfilter_cttimeout
node2: x86_64 1.0.0-6.el7 base 18 k
node2: libnetfilter_queue x86_64 1.0.2-2.el7_2 base 23 k
node2: libsemanage-python x86_64 2.5-14.el7 base 113 k
node2: libyaml x86_64 0.1.4-11.el7_0 base 55 k
node2: oci-register-machine x86_64 1:0-6.git2b44233.el7 extras 1.1 M
node2: oci-systemd-hook x86_64 1:0.1.18-3.git8787307.el7_6 extras 34 k
node2: oci-umount x86_64 2:2.3.4-2.git87f9237.el7 extras 32 k
node2: policycoreutils-python
node2: x86_64 2.5-29.el7_6.1 updates 456 k
node2: python-IPy noarch 0.75-6.el7 base 32 k
node2: python-backports x86_64 1.0-8.el7 base 5.8 k
node2: python-backports-ssl_match_hostname
node2: noarch 3.5.0.1-1.el7 base 13 k
node2: python-ipaddress noarch 1.0.16-2.el7 base 34 k
node2: python-pytoml noarch 0.1.14-1.git7dea353.el7 extras 18 k
node2: python-setuptools noarch 0.9.8-7.el7 base 397 k
node2: setools-libs x86_64 3.3.8-4.el7 base 620 k
node2: subscription-manager-rhsm-certificates
node2: x86_64 1.21.10-3.el7.centos updates 207 k
node2: yajl x86_64 2.0.4-4.el7 base 39 k
node2:
node2: Transaction Summary
node2: ================================================================================
node2: Install 7 Packages (+31 Dependent packages)
node2:
node2: Total download size: 76 M
node2: Installed size: 321 M
node2: Downloading packages:
node1: * base: mirror.fileplanet.com
node1: * epel: d2lzkl7pfhq30w.cloudfront.net
node1: * extras: mirrors.xtom.com
node1: * updates: mirror.fileplanet.com
master: * base: sjc.edge.kernel.org
master: * epel: mirror.prgmr.com
master: * extras: sjc.edge.kernel.org
master: * updates: mirror.keystealth.org
node1: Package net-tools-2.0-0.24.20131004git.el7.x86_64 already installed and latest version
node1: Package 1:telnet-0.17-64.el7.x86_64 already installed and latest version
node1: Package rsync-3.1.2-4.el7.x86_64 already installed and latest version
node1: Resolving Dependencies
node1: --> Running transaction check
node1: ---> Package conntrack-tools.x86_64 0:1.4.4-4.el7 will be installed
node1: --> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.1)(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
master: Package net-tools-2.0-0.24.20131004git.el7.x86_64 already installed and latest version
master: Package 1:telnet-0.17-64.el7.x86_64 already installed and latest version
master: Package rsync-3.1.2-4.el7.x86_64 already installed and latest version
node1: --> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.0)(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
node1: --> Processing Dependency: libnetfilter_cthelper.so.0(LIBNETFILTER_CTHELPER_1.0)(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
node1: --> Processing Dependency: libnetfilter_queue.so.1()(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
node1: --> Processing Dependency: libnetfilter_cttimeout.so.1()(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
node1: --> Processing Dependency: libnetfilter_cthelper.so.0()(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
node1: ---> Package docker.x86_64 2:1.13.1-91.git07f3374.el7.centos will be installed
node1: --> Processing Dependency: docker-common = 2:1.13.1-91.git07f3374.el7.centos for package: 2:docker-1.13.1-91.git07f3374.el7.centos.x86_64
node1: --> Processing Dependency: docker-client = 2:1.13.1-91.git07f3374.el7.centos for package: 2:docker-1.13.1-91.git07f3374.el7.centos.x86_64
node1: --> Processing Dependency: subscription-manager-rhsm-certificates for package: 2:docker-1.13.1-91.git07f3374.el7.centos.x86_64
node1: ---> Package kubeadm.x86_64 0:1.13.3-0 will be installed
node1: --> Processing Dependency: kubernetes-cni >= 0.6.0 for package: kubeadm-1.13.3-0.x86_64
node1: --> Processing Dependency: kubectl >= 1.6.0 for package: kubeadm-1.13.3-0.x86_64
master: Resolving Dependencies
master: --> Running transaction check
master: ---> Package conntrack-tools.x86_64 0:1.4.4-4.el7 will be installed
master: --> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.1)(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
node1: --> Processing Dependency: cri-tools >= 1.11.0 for package: kubeadm-1.13.3-0.x86_64
node1: ---> Package kubelet.x86_64 0:1.13.3-0 will be installed
node1: ---> Package screen.x86_64 0:4.1.0-0.25.20120314git3c2946.el7 will be installed
node1: ---> Package socat.x86_64 0:1.7.3.2-2.el7 will be installed
node1: ---> Package tree.x86_64 0:1.6.0-10.el7 will be installed
node1: --> Running transaction check
node1: ---> Package cri-tools.x86_64 0:1.12.0-0 will be installed
node1: ---> Package docker-client.x86_64 2:1.13.1-91.git07f3374.el7.centos will be installed
node1: ---> Package docker-common.x86_64 2:1.13.1-91.git07f3374.el7.centos will be installed
node1: --> Processing Dependency: skopeo-containers >= 1:0.1.26-2 for package: 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64
node1: --> Processing Dependency: oci-umount >= 2:2.3.3-3 for package: 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64
node1: --> Processing Dependency: oci-systemd-hook >= 1:0.1.4-9 for package: 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64
node1: --> Processing Dependency: oci-register-machine >= 1:0-5.13 for package: 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64
node1: --> Processing Dependency: container-storage-setup >= 0.9.0-1 for package: 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64
node1: --> Processing Dependency: container-selinux >= 2:2.51-1 for package: 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64
node1: --> Processing Dependency: atomic-registries for package: 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64
node1: ---> Package kubectl.x86_64 0:1.13.3-0 will be installed
node1: ---> Package kubernetes-cni.x86_64 0:0.6.0-0 will be installed
node1: ---> Package libnetfilter_cthelper.x86_64 0:1.0.0-9.el7 will be installed
node1: ---> Package libnetfilter_cttimeout.x86_64 0:1.0.0-6.el7 will be installed
node1: ---> Package libnetfilter_queue.x86_64 0:1.0.2-2.el7_2 will be installed
node1: ---> Package subscription-manager-rhsm-certificates.x86_64 0:1.21.10-3.el7.centos will be installed
node1: --> Running transaction check
node1: ---> Package atomic-registries.x86_64 1:1.22.1-26.gitb507039.el7.centos will be installed
node1: --> Processing Dependency: python-yaml for package: 1:atomic-registries-1.22.1-26.gitb507039.el7.centos.x86_64
node1: --> Processing Dependency: python-setuptools for package: 1:atomic-registries-1.22.1-26.gitb507039.el7.centos.x86_64
node1: --> Processing Dependency: python-pytoml for package: 1:atomic-registries-1.22.1-26.gitb507039.el7.centos.x86_64
node1: ---> Package container-selinux.noarch 2:2.74-1.el7 will be installed
node1: --> Processing Dependency: policycoreutils-python for package: 2:container-selinux-2.74-1.el7.noarch
node1: ---> Package container-storage-setup.noarch 0:0.11.0-2.git5eaf76c.el7 will be installed
node1: ---> Package containers-common.x86_64 1:0.1.31-8.gitb0b750d.el7.centos will be installed
node1: ---> Package oci-register-machine.x86_64 1:0-6.git2b44233.el7 will be installed
node1: ---> Package oci-systemd-hook.x86_64 1:0.1.18-3.git8787307.el7_6 will be installed
node1: --> Processing Dependency: libyajl.so.2()(64bit) for package: 1:oci-systemd-hook-0.1.18-3.git8787307.el7_6.x86_64
node1: ---> Package oci-umount.x86_64 2:2.3.4-2.git87f9237.el7 will be installed
node1: --> Running transaction check
node1: ---> Package PyYAML.x86_64 0:3.10-11.el7 will be installed
node1: --> Processing Dependency: libyaml-0.so.2()(64bit) for package: PyYAML-3.10-11.el7.x86_64
node1: ---> Package policycoreutils-python.x86_64 0:2.5-29.el7_6.1 will be installed
node1: --> Processing Dependency: setools-libs >= 3.3.8-4 for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
node1: --> Processing Dependency: libsemanage-python >= 2.5-14 for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
node1: --> Processing Dependency: audit-libs-python >= 2.1.3-4 for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
node1: --> Processing Dependency: python-IPy for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
node1: --> Processing Dependency: libqpol.so.1(VERS_1.4)(64bit) for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
node1: --> Processing Dependency: libqpol.so.1(VERS_1.2)(64bit) for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
node1: --> Processing Dependency: libcgroup for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
node1: --> Processing Dependency: libapol.so.4(VERS_4.0)(64bit) for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
node1: --> Processing Dependency: checkpolicy for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
node1: --> Processing Dependency: libqpol.so.1()(64bit) for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
node1: --> Processing Dependency: libapol.so.4()(64bit) for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
node1: ---> Package python-pytoml.noarch 0:0.1.14-1.git7dea353.el7 will be installed
node1: ---> Package python-setuptools.noarch 0:0.9.8-7.el7 will be installed
node1: --> Processing Dependency: python-backports-ssl_match_hostname for package: python-setuptools-0.9.8-7.el7.noarch
node1: ---> Package yajl.x86_64 0:2.0.4-4.el7 will be installed
node1: --> Running transaction check
node1: ---> Package audit-libs-python.x86_64 0:2.8.4-4.el7 will be installed
node1: ---> Package checkpolicy.x86_64 0:2.5-8.el7 will be installed
node1: ---> Package libcgroup.x86_64 0:0.41-20.el7 will be installed
node1: ---> Package libsemanage-python.x86_64 0:2.5-14.el7 will be installed
node1: ---> Package libyaml.x86_64 0:0.1.4-11.el7_0 will be installed
node1: ---> Package python-IPy.noarch 0:0.75-6.el7 will be installed
node1: ---> Package python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 will be installed
node1: --> Processing Dependency: python-ipaddress for package: python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch
node1: --> Processing Dependency: python-backports for package: python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch
node1: ---> Package setools-libs.x86_64 0:3.3.8-4.el7 will be installed
node1: --> Running transaction check
node1: ---> Package python-backports.x86_64 0:1.0-8.el7 will be installed
node1: ---> Package python-ipaddress.noarch 0:1.0.16-2.el7 will be installed
master: --> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.0)(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
master: --> Processing Dependency: libnetfilter_cthelper.so.0(LIBNETFILTER_CTHELPER_1.0)(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
master: --> Processing Dependency: libnetfilter_queue.so.1()(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
master: --> Processing Dependency: libnetfilter_cttimeout.so.1()(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
master: --> Processing Dependency: libnetfilter_cthelper.so.0()(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
master: ---> Package docker.x86_64 2:1.13.1-91.git07f3374.el7.centos will be installed
master: --> Processing Dependency: docker-common = 2:1.13.1-91.git07f3374.el7.centos for package: 2:docker-1.13.1-91.git07f3374.el7.centos.x86_64
master: --> Processing Dependency: docker-client = 2:1.13.1-91.git07f3374.el7.centos for package: 2:docker-1.13.1-91.git07f3374.el7.centos.x86_64
master: --> Processing Dependency: subscription-manager-rhsm-certificates for package: 2:docker-1.13.1-91.git07f3374.el7.centos.x86_64
master: ---> Package kubeadm.x86_64 0:1.13.3-0 will be installed
master: --> Processing Dependency: kubernetes-cni >= 0.6.0 for package: kubeadm-1.13.3-0.x86_64
master: --> Processing Dependency: kubectl >= 1.6.0 for package: kubeadm-1.13.3-0.x86_64
node1: --> Finished Dependency Resolution
master: --> Processing Dependency: cri-tools >= 1.11.0 for package: kubeadm-1.13.3-0.x86_64
master: ---> Package kubelet.x86_64 0:1.13.3-0 will be installed
master: ---> Package screen.x86_64 0:4.1.0-0.25.20120314git3c2946.el7 will be installed
master: ---> Package socat.x86_64 0:1.7.3.2-2.el7 will be installed
master: ---> Package tree.x86_64 0:1.6.0-10.el7 will be installed
master: --> Running transaction check
master: ---> Package cri-tools.x86_64 0:1.12.0-0 will be installed
master: ---> Package docker-client.x86_64 2:1.13.1-91.git07f3374.el7.centos will be installed
master: ---> Package docker-common.x86_64 2:1.13.1-91.git07f3374.el7.centos will be installed
master: --> Processing Dependency: skopeo-containers >= 1:0.1.26-2 for package: 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64
master: --> Processing Dependency: oci-umount >= 2:2.3.3-3 for package: 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64
master: --> Processing Dependency: oci-systemd-hook >= 1:0.1.4-9 for package: 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64
master: --> Processing Dependency: oci-register-machine >= 1:0-5.13 for package: 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64
master: --> Processing Dependency: container-storage-setup >= 0.9.0-1 for package: 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64
master: --> Processing Dependency: container-selinux >= 2:2.51-1 for package: 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64
master: --> Processing Dependency: atomic-registries for package: 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64
master: ---> Package kubectl.x86_64 0:1.13.3-0 will be installed
master: ---> Package kubernetes-cni.x86_64 0:0.6.0-0 will be installed
master: ---> Package libnetfilter_cthelper.x86_64 0:1.0.0-9.el7 will be installed
master: ---> Package libnetfilter_cttimeout.x86_64 0:1.0.0-6.el7 will be installed
master: ---> Package libnetfilter_queue.x86_64 0:1.0.2-2.el7_2 will be installed
master: ---> Package subscription-manager-rhsm-certificates.x86_64 0:1.21.10-3.el7.centos will be installed
master: --> Running transaction check
master: ---> Package atomic-registries.x86_64 1:1.22.1-26.gitb507039.el7.centos will be installed
master: --> Processing Dependency: python-yaml for package: 1:atomic-registries-1.22.1-26.gitb507039.el7.centos.x86_64
master: --> Processing Dependency: python-setuptools for package: 1:atomic-registries-1.22.1-26.gitb507039.el7.centos.x86_64
master: --> Processing Dependency: python-pytoml for package: 1:atomic-registries-1.22.1-26.gitb507039.el7.centos.x86_64
master: ---> Package container-selinux.noarch 2:2.74-1.el7 will be installed
master: --> Processing Dependency: policycoreutils-python for package: 2:container-selinux-2.74-1.el7.noarch
master: ---> Package container-storage-setup.noarch 0:0.11.0-2.git5eaf76c.el7 will be installed
master: ---> Package containers-common.x86_64 1:0.1.31-8.gitb0b750d.el7.centos will be installed
master: ---> Package oci-register-machine.x86_64 1:0-6.git2b44233.el7 will be installed
master: ---> Package oci-systemd-hook.x86_64 1:0.1.18-3.git8787307.el7_6 will be installed
master: --> Processing Dependency: libyajl.so.2()(64bit) for package: 1:oci-systemd-hook-0.1.18-3.git8787307.el7_6.x86_64
master: ---> Package oci-umount.x86_64 2:2.3.4-2.git87f9237.el7 will be installed
master: --> Running transaction check
master: ---> Package PyYAML.x86_64 0:3.10-11.el7 will be installed
master: --> Processing Dependency: libyaml-0.so.2()(64bit) for package: PyYAML-3.10-11.el7.x86_64
master: ---> Package policycoreutils-python.x86_64 0:2.5-29.el7_6.1 will be installed
master: --> Processing Dependency: setools-libs >= 3.3.8-4 for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
master: --> Processing Dependency: libsemanage-python >= 2.5-14 for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
master: --> Processing Dependency: audit-libs-python >= 2.1.3-4 for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
master: --> Processing Dependency: python-IPy for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
master: --> Processing Dependency: libqpol.so.1(VERS_1.4)(64bit) for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
master: --> Processing Dependency: libqpol.so.1(VERS_1.2)(64bit) for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
master: --> Processing Dependency: libcgroup for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
master: --> Processing Dependency: libapol.so.4(VERS_4.0)(64bit) for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
master: --> Processing Dependency: checkpolicy for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
master: --> Processing Dependency: libqpol.so.1()(64bit) for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
master: --> Processing Dependency: libapol.so.4()(64bit) for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
master: ---> Package python-pytoml.noarch 0:0.1.14-1.git7dea353.el7 will be installed
master: ---> Package python-setuptools.noarch 0:0.9.8-7.el7 will be installed
master: --> Processing Dependency: python-backports-ssl_match_hostname for package: python-setuptools-0.9.8-7.el7.noarch
master: ---> Package yajl.x86_64 0:2.0.4-4.el7 will be installed
master: --> Running transaction check
master: ---> Package audit-libs-python.x86_64 0:2.8.4-4.el7 will be installed
master: ---> Package checkpolicy.x86_64 0:2.5-8.el7 will be installed
master: ---> Package libcgroup.x86_64 0:0.41-20.el7 will be installed
master: ---> Package libsemanage-python.x86_64 0:2.5-14.el7 will be installed
master: ---> Package libyaml.x86_64 0:0.1.4-11.el7_0 will be installed
master: ---> Package python-IPy.noarch 0:0.75-6.el7 will be installed
master: ---> Package python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 will be installed
master: --> Processing Dependency: python-ipaddress for package: python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch
master: --> Processing Dependency: python-backports for package: python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch
node1:
node1: Dependencies Resolved
master: ---> Package setools-libs.x86_64 0:3.3.8-4.el7 will be installed
master: --> Running transaction check
master: ---> Package python-backports.x86_64 0:1.0-8.el7 will be installed
master: ---> Package python-ipaddress.noarch 0:1.0.16-2.el7 will be installed
node1:
node1: ================================================================================
node1:  Package                Arch    Version                            Repository  Size
node1: ================================================================================
node1: Installing:
node1:  conntrack-tools        x86_64  1.4.4-4.el7                        base       186 k
node1:  docker                 x86_64  2:1.13.1-91.git07f3374.el7.centos  extras      18 M
node1:  kubeadm                x86_64  1.13.3-0                           kubernetes 7.9 M
node1:  kubelet                x86_64  1.13.3-0                           kubernetes  21 M
node1:  screen                 x86_64  4.1.0-0.25.20120314git3c2946.el7   base       552 k
node1:  socat                  x86_64  1.7.3.2-2.el7                      base       290 k
node1:  tree                   x86_64  1.6.0-10.el7                       base        46 k
node1: Installing for dependencies:
node1:  PyYAML                 x86_64  3.10-11.el7                        base       153 k
node1:  atomic-registries      x86_64  1:1.22.1-26.gitb507039.el7.centos  extras      35 k
node1:  audit-libs-python      x86_64  2.8.4-4.el7                        base        76 k
node1:  checkpolicy            x86_64  2.5-8.el7                          base       295 k
node1:  container-selinux      noarch  2:2.74-1.el7                       extras      38 k
node1:  container-storage-setup
node1:                         noarch  0.11.0-2.git5eaf76c.el7            extras      35 k
node1:  containers-common      x86_64  1:0.1.31-8.gitb0b750d.el7.centos   extras      21 k
node1:  cri-tools              x86_64  1.12.0-0                           kubernetes 4.2 M
node1:  docker-client          x86_64  2:1.13.1-91.git07f3374.el7.centos  extras     3.9 M
node1:  docker-common          x86_64  2:1.13.1-91.git07f3374.el7.centos  extras      95 k
node1:  kubectl                x86_64  1.13.3-0                           kubernetes 8.5 M
node1:  kubernetes-cni         x86_64  0.6.0-0                            kubernetes 8.6 M
node1:  libcgroup              x86_64  0.41-20.el7                        base        66 k
node1:  libnetfilter_cthelper
node1:                         x86_64  1.0.0-9.el7                        base        18 k
node1:  libnetfilter_cttimeout
node1:                         x86_64  1.0.0-6.el7                        base        18 k
node1:  libnetfilter_queue     x86_64  1.0.2-2.el7_2                      base        23 k
node1:  libsemanage-python     x86_64  2.5-14.el7                         base       113 k
node1:  libyaml                x86_64  0.1.4-11.el7_0                     base        55 k
node1:  oci-register-machine   x86_64  1:0-6.git2b44233.el7               extras     1.1 M
node1:  oci-systemd-hook       x86_64  1:0.1.18-3.git8787307.el7_6        extras      34 k
node1:  oci-umount             x86_64  2:2.3.4-2.git87f9237.el7           extras      32 k
node1:  policycoreutils-python
node1:                         x86_64  2.5-29.el7_6.1                     updates    456 k
node1:  python-IPy             noarch  0.75-6.el7                         base        32 k
node1:  python-backports       x86_64  1.0-8.el7                          base       5.8 k
node1:  python-backports-ssl_match_hostname
node1:                         noarch  3.5.0.1-1.el7                      base        13 k
node1:  python-ipaddress       noarch  1.0.16-2.el7                       base        34 k
node1:  python-pytoml          noarch  0.1.14-1.git7dea353.el7            extras      18 k
node1:  python-setuptools      noarch  0.9.8-7.el7                        base       397 k
node1:  setools-libs           x86_64  3.3.8-4.el7                        base       620 k
node1:  subscription-manager-rhsm-certificates
node1:                         x86_64  1.21.10-3.el7.centos               updates    207 k
node1:  yajl                   x86_64  2.0.4-4.el7                        base        39 k
node1:
node1: Transaction Summary
node1: ================================================================================
node1: Install 7 Packages (+31 Dependent packages)
node1: Total download size: 76 M
node1: Installed size: 321 M
node1: Downloading packages:
master: --> Finished Dependency Resolution
master:
master: Dependencies Resolved
master:
master: ================================================================================
master:  Package                Arch    Version                            Repository  Size
master: ================================================================================
master: Installing:
master:  conntrack-tools        x86_64  1.4.4-4.el7                        base       186 k
master:  docker                 x86_64  2:1.13.1-91.git07f3374.el7.centos  extras      18 M
master:  kubeadm                x86_64  1.13.3-0                           kubernetes 7.9 M
master:  kubelet                x86_64  1.13.3-0                           kubernetes  21 M
master:  screen                 x86_64  4.1.0-0.25.20120314git3c2946.el7   base       552 k
master:  socat                  x86_64  1.7.3.2-2.el7                      base       290 k
master:  tree                   x86_64  1.6.0-10.el7                       base        46 k
master: Installing for dependencies:
master:  PyYAML                 x86_64  3.10-11.el7                        base       153 k
master:  atomic-registries      x86_64  1:1.22.1-26.gitb507039.el7.centos  extras      35 k
master:  audit-libs-python      x86_64  2.8.4-4.el7                        base        76 k
master:  checkpolicy            x86_64  2.5-8.el7                          base       295 k
master:  container-selinux      noarch  2:2.74-1.el7                       extras      38 k
master:  container-storage-setup
master:                         noarch  0.11.0-2.git5eaf76c.el7            extras      35 k
master:  containers-common      x86_64  1:0.1.31-8.gitb0b750d.el7.centos   extras      21 k
master:  cri-tools              x86_64  1.12.0-0                           kubernetes 4.2 M
master:  docker-client          x86_64  2:1.13.1-91.git07f3374.el7.centos  extras     3.9 M
master:  docker-common          x86_64  2:1.13.1-91.git07f3374.el7.centos  extras      95 k
master:  kubectl                x86_64  1.13.3-0                           kubernetes 8.5 M
master:  kubernetes-cni         x86_64  0.6.0-0                            kubernetes 8.6 M
master:  libcgroup              x86_64  0.41-20.el7                        base        66 k
master:  libnetfilter_cthelper
master:                         x86_64  1.0.0-9.el7                        base        18 k
master:  libnetfilter_cttimeout
master:                         x86_64  1.0.0-6.el7                        base        18 k
master:  libnetfilter_queue     x86_64  1.0.2-2.el7_2                      base        23 k
master:  libsemanage-python     x86_64  2.5-14.el7                         base       113 k
master:  libyaml                x86_64  0.1.4-11.el7_0                     base        55 k
master:  oci-register-machine   x86_64  1:0-6.git2b44233.el7               extras     1.1 M
master:  oci-systemd-hook       x86_64  1:0.1.18-3.git8787307.el7_6        extras      34 k
master:  oci-umount             x86_64  2:2.3.4-2.git87f9237.el7           extras      32 k
master:  policycoreutils-python
master:                         x86_64  2.5-29.el7_6.1                     updates    456 k
master:  python-IPy             noarch  0.75-6.el7                         base        32 k
master:  python-backports       x86_64  1.0-8.el7                          base       5.8 k
master:  python-backports-ssl_match_hostname
master:                         noarch  3.5.0.1-1.el7                      base        13 k
master:  python-ipaddress       noarch  1.0.16-2.el7                       base        34 k
master:  python-pytoml          noarch  0.1.14-1.git7dea353.el7            extras      18 k
master:  python-setuptools      noarch  0.9.8-7.el7                        base       397 k
master:  setools-libs           x86_64  3.3.8-4.el7                        base       620 k
master:  subscription-manager-rhsm-certificates
master:                         x86_64  1.21.10-3.el7.centos               updates    207 k
master:  yajl                   x86_64  2.0.4-4.el7                        base        39 k
master:
master: Transaction Summary
master: ================================================================================
master: Install 7 Packages (+31 Dependent packages)
master: Total download size: 76 M
master: Installed size: 321 M
master: Downloading packages:
node2: --------------------------------------------------------------------------------
node2: Total 4.3 MB/s | 76 MB 00:17
node2: Running transaction check
node2: Running transaction test
node2: Transaction test succeeded
node2: Running transaction
node2: Installing : yajl-2.0.4-4.el7.x86_64 1/38
node2:
node2: Installing : 1:oci-systemd-hook-0.1.18-3.git8787307.el7_6.x86_64 2/38
node2:
node2: Installing : 2:oci-umount-2.3.4-2.git87f9237.el7.x86_64 3/38
node1: --------------------------------------------------------------------------------
node1: Total 4.9 MB/s | 76 MB 00:15
node1: Running transaction check
node2:
node2: Installing : socat-1.7.3.2-2.el7.x86_64 4/38
node2:
node2: Installing : python-ipaddress-1.0.16-2.el7.noarch 5/38
node1: Running transaction test
node2:
node2: Installing : 1:containers-common-0.1.31-8.gitb0b750d.el7.centos.x86_6 6/38
node2:
node2: Installing : libyaml-0.1.4-11.el7_0.x86_64 7/38
node1: Transaction test succeeded
node1: Running transaction
node2:
node2: Installing : PyYAML-3.10-11.el7.x86_64 8/38
node2:
node2: Installing : audit-libs-python-2.8.4-4.el7.x86_64 9/38
node1: Installing : yajl-2.0.4-4.el7.x86_64 1/38
node2:
node2: Installing : python-backports-1.0-8.el7.x86_64 10/38
node2:
node2: Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 11/38
node1:
node1: Installing : 1:oci-systemd-hook-0.1.18-3.git8787307.el7_6.x86_64 2/38
node1:
node1: Installing : 2:oci-umount-2.3.4-2.git87f9237.el7.x86_64 3/38
node1:
node1: Installing : socat-1.7.3.2-2.el7.x86_64 4/38
node1:
node1: Installing : python-ipaddress-1.0.16-2.el7.noarch 5/38
node1:
node1: Installing : 1:containers-common-0.1.31-8.gitb0b750d.el7.centos.x86_6 6/38
node2:
node2: Installing : python-setuptools-0.9.8-7.el7.noarch 12/38
node1:
node1: Installing : libyaml-0.1.4-11.el7_0.x86_64 7/38
node1:
node1: Installing : PyYAML-3.10-11.el7.x86_64 8/38
node1:
node1: Installing : audit-libs-python-2.8.4-4.el7.x86_64 9/38
node1:
node1: Installing : python-backports-1.0-8.el7.x86_64 10/38
node2:
node2: Installing : 1:oci-register-machine-0-6.git2b44233.el7.x86_64 13/38
node1:
node1: Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 11/38
node2:
node2: Installing : libsemanage-python-2.5-14.el7.x86_64 14/38
node1:
node1: Installing : python-setuptools-0.9.8-7.el7.noarch 12/38
node1:
node1: Installing : 1:oci-register-machine-0-6.git2b44233.el7.x86_64 13/38
node1:
node1: Installing : libsemanage-python-2.5-14.el7.x86_64 14/38
node2:
node2: Installing : kubectl-1.13.3-0.x86_64 15/38
node2:
node2: Installing : setools-libs-3.3.8-4.el7.x86_64 16/38
node2:
node2: Installing : python-pytoml-0.1.14-1.git7dea353.el7.noarch 17/38
node2:
node2: Installing : 1:atomic-registries-1.22.1-26.gitb507039.el7.centos.x86_ 18/38
node2:
node2: Installing : python-IPy-0.75-6.el7.noarch 19/38
node2:
node2: Installing : libnetfilter_queue-1.0.2-2.el7_2.x86_64 20/38
node1:
node1: Installing : kubectl-1.13.3-0.x86_64 15/38
node2:
node2: Installing : checkpolicy-2.5-8.el7.x86_64 21/38
node2:
node2: Installing : subscription-manager-rhsm-certificates-1.21.10-3.el7.cen 22/38
node2:
node2: Installing : libnetfilter_cthelper-1.0.0-9.el7.x86_64 23/38
node1:
node1: Installing : setools-libs-3.3.8-4.el7.x86_64 16/38
node1:
node1: Installing : python-pytoml-0.1.14-1.git7dea353.el7.noarch 17/38
node1:
node1: Installing : 1:atomic-registries-1.22.1-26.gitb507039.el7.centos.x86_ 18/38
node1:
node1: Installing : python-IPy-0.75-6.el7.noarch 19/38
node1:
node1: Installing : libnetfilter_queue-1.0.2-2.el7_2.x86_64 20/38
node1:
node1: Installing : checkpolicy-2.5-8.el7.x86_64 21/38
node1:
node1: Installing : subscription-manager-rhsm-certificates-1.21.10-3.el7.cen 22/38
node1:
node1: Installing : libnetfilter_cthelper-1.0.0-9.el7.x86_64 23/38
node2:
node2: Installing : cri-tools-1.12.0-0.x86_64 24/38
node2:
node2: Installing : container-storage-setup-0.11.0-2.git5eaf76c.el7.noarch 25/38
node2:
node2: Installing : libnetfilter_cttimeout-1.0.0-6.el7.x86_64 26/38
node2:
node2: Installing : conntrack-tools-1.4.4-4.el7.x86_64 27/38
master: --------------------------------------------------------------------------------
master: Total 4.0 MB/s | 76 MB 00:18
master: Running transaction check
master: Running transaction test
node1:
node1: Installing : cri-tools-1.12.0-0.x86_64 24/38
master: Transaction test succeeded
master: Running transaction
node1:
node1: Installing : container-storage-setup-0.11.0-2.git5eaf76c.el7.noarch 25/38
node1:
node1: Installing : libnetfilter_cttimeout-1.0.0-6.el7.x86_64 26/38
master: Installing : yajl-2.0.4-4.el7.x86_64 1/38
node1:
node1: Installing : conntrack-tools-1.4.4-4.el7.x86_64 27/38
master:
master: Installing : 1:oci-systemd-hook-0.1.18-3.git8787307.el7_6.x86_64 2/38
master:
master: Installing : 2:oci-umount-2.3.4-2.git87f9237.el7.x86_64 3/38
master:
master: Installing : socat-1.7.3.2-2.el7.x86_64 4/38
master:
master: Installing : python-ipaddress-1.0.16-2.el7.noarch 5/38
master:
master: Installing : 1:containers-common-0.1.31-8.gitb0b750d.el7.centos.x86_6 6/38
master:
master: Installing : libyaml-0.1.4-11.el7_0.x86_64 7/38
master:
master: Installing : PyYAML-3.10-11.el7.x86_64 8/38
master:
master: Installing : audit-libs-python-2.8.4-4.el7.x86_64 9/38
master:
master: Installing : python-backports-1.0-8.el7.x86_64 10/38
master:
master: Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 11/38
master:
master: Installing : python-setuptools-0.9.8-7.el7.noarch 12/38
master:
master: Installing : 1:oci-register-machine-0-6.git2b44233.el7.x86_64 13/38
master:
master: Installing : libsemanage-python-2.5-14.el7.x86_64 14/38
node2:
node2: Installing : kubernetes-cni-0.6.0-0.x86_64 28/38
node1:
node1: Installing : kubernetes-cni-0.6.0-0.x86_64 28/38
master:
master: Installing : kubectl-1.13.3-0.x86_64 15/38
master:
master: Installing : setools-libs-3.3.8-4.el7.x86_64 16/38
master:
master: Installing : python-pytoml-0.1.14-1.git7dea353.el7.noarch 17/38
master:
master: Installing : 1:atomic-registries-1.22.1-26.gitb507039.el7.centos.x86_ 18/38
master:
master: Installing : python-IPy-0.75-6.el7.noarch 19/38
master:
master: Installing : libnetfilter_queue-1.0.2-2.el7_2.x86_64 20/38
master:
master: Installing : checkpolicy-2.5-8.el7.x86_64 21/38
master:
master: Installing : subscription-manager-rhsm-certificates-1.21.10-3.el7.cen 22/38
master:
master: Installing : libnetfilter_cthelper-1.0.0-9.el7.x86_64 23/38
master:
master: Installing : cri-tools-1.12.0-0.x86_64 24/38
master:
master: Installing : container-storage-setup-0.11.0-2.git5eaf76c.el7.noarch 25/38
master:
master: Installing : libnetfilter_cttimeout-1.0.0-6.el7.x86_64 26/38
master:
master: Installing : conntrack-tools-1.4.4-4.el7.x86_64 27/38
node2:
node2: Installing : kubelet-1.13.3-0.x86_64 29/38
master:
master: Installing : kubernetes-cni-0.6.0-0.x86_64 28/38
node2:
node2: Installing : libcgroup-0.41-20.el7.x86_64 30/38
node1:
node1: Installing : kubelet-1.13.3-0.x86_64 29/38
node2:
node2: Installing : policycoreutils-python-2.5-29.el7_6.1.x86_64 31/38
node1:
node1: Installing : libcgroup-0.41-20.el7.x86_64 30/38
node2:
node2: Installing : 2:container-selinux-2.74-1.el7.noarch 32/38
node1:
node1: Installing : policycoreutils-python-2.5-29.el7_6.1.x86_64 31/38
node1:
node1: Installing : 2:container-selinux-2.74-1.el7.noarch 32/38
master:
master: Installing : kubelet-1.13.3-0.x86_64 29/38
master:
master: Installing : libcgroup-0.41-20.el7.x86_64 30/38
master:
master: Installing : policycoreutils-python-2.5-29.el7_6.1.x86_64 31/38
master:
master: Installing : 2:container-selinux-2.74-1.el7.noarch 32/38
node2:
node2: Installing : 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64 33/38
node1:
node1: Installing : 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64 33/38
node2:
node2: Installing : 2:docker-client-1.13.1-91.git07f3374.el7.centos.x86_64 34/38
node1:
node1: Installing : 2:docker-client-1.13.1-91.git07f3374.el7.centos.x86_64 34/38
node2:
node2: Installing : 2:docker-1.13.1-91.git07f3374.el7.centos.x86_64 35/38
node1:
node1: Installing : 2:docker-1.13.1-91.git07f3374.el7.centos.x86_64 35/38
master:
master: Installing : 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64 33/38
master:
master: Installing : 2:docker-client-1.13.1-91.git07f3374.el7.centos.x86_64 34/38
node2:
node2: Installing : kubeadm-1.13.3-0.x86_64 36/38
node2:
node2: Installing : screen-4.1.0-0.25.20120314git3c2946.el7.x86_64 37/38
node2:
node2: Installing : tree-1.6.0-10.el7.x86_64 38/38
node1:
node1: Installing : kubeadm-1.13.3-0.x86_64 36/38
node2:
node2: Verifying : libcgroup-0.41-20.el7.x86_64 1/38
node2:
node2: Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 2/38
node2:
node2: Verifying : libnetfilter_cttimeout-1.0.0-6.el7.x86_64 3/38
node2:
node2: Verifying : container-storage-setup-0.11.0-2.git5eaf76c.el7.noarch 4/38
node2:
node2: Verifying : cri-tools-1.12.0-0.x86_64 5/38
node2:
node2: Verifying : kubeadm-1.13.3-0.x86_64 6/38
node2:
node2: Verifying : libnetfilter_cthelper-1.0.0-9.el7.x86_64 7/38
node2:
node2: Verifying : 2:container-selinux-2.74-1.el7.noarch 8/38
node2:
node2: Verifying : conntrack-tools-1.4.4-4.el7.x86_64 9/38
node2:
node2: Verifying : python-setuptools-0.9.8-7.el7.noarch 10/38
node2:
node2: Verifying : 2:docker-client-1.13.1-91.git07f3374.el7.centos.x86_64 11/38
node2:
node2: Verifying : subscription-manager-rhsm-certificates-1.21.10-3.el7.cen 12/38
node1:
node1: Installing : screen-4.1.0-0.25.20120314git3c2946.el7.x86_64 37/38
node2:
node2: Verifying : 1:oci-systemd-hook-0.1.18-3.git8787307.el7_6.x86_64 13/38
node2:
node2: Verifying : 2:oci-umount-2.3.4-2.git87f9237.el7.x86_64 14/38
node2:
node2: Verifying : checkpolicy-2.5-8.el7.x86_64 15/38
node2:
node2: Verifying : libnetfilter_queue-1.0.2-2.el7_2.x86_64 16/38
node2:
node2: Verifying : tree-1.6.0-10.el7.x86_64 17/38
node2:
node2: Verifying : python-IPy-0.75-6.el7.noarch 18/38
node1:
node1: Installing : tree-1.6.0-10.el7.x86_64 38/38
node2:
node2: Verifying : 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64 19/38
node2:
node2: Verifying : kubelet-1.13.3-0.x86_64 20/38
node2:
node2: Verifying : 1:atomic-registries-1.22.1-26.gitb507039.el7.centos.x86_ 21/38
node2:
node2: Verifying : python-pytoml-0.1.14-1.git7dea353.el7.noarch 22/38
node2:
node2: Verifying : setools-libs-3.3.8-4.el7.x86_64 23/38
node2:
node2: Verifying : kubectl-1.13.3-0.x86_64 24/38
node1:
node1: Verifying : libcgroup-0.41-20.el7.x86_64 1/38
node2:
node2: Verifying : policycoreutils-python-2.5-29.el7_6.1.x86_64 25/38
node1:
node1: Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 2/38
node2:
node2: Verifying : libsemanage-python-2.5-14.el7.x86_64 26/38
node2:
node2: Verifying : 1:oci-register-machine-0-6.git2b44233.el7.x86_64 27/38
node1:
node1: Verifying : libnetfilter_cttimeout-1.0.0-6.el7.x86_64 3/38
node2:
node2: Verifying : screen-4.1.0-0.25.20120314git3c2946.el7.x86_64 28/38
node1:
node1: Verifying : container-storage-setup-0.11.0-2.git5eaf76c.el7.noarch 4/38
node1:
node1: Verifying : cri-tools-1.12.0-0.x86_64 5/38
node2:
node2: Verifying : python-backports-1.0-8.el7.x86_64 29/38
node2:
node2: Verifying : yajl-2.0.4-4.el7.x86_64 30/38
node1:
node1: Verifying : kubeadm-1.13.3-0.x86_64 6/38
node2:
node2: Verifying : audit-libs-python-2.8.4-4.el7.x86_64 31/38
node1:
node1: Verifying : libnetfilter_cthelper-1.0.0-9.el7.x86_64 7/38
node2:
node2: Verifying : libyaml-0.1.4-11.el7_0.x86_64 32/38
node1:
node1: Verifying : 2:container-selinux-2.74-1.el7.noarch 8/38
node2:
node2: Verifying : 1:containers-common-0.1.31-8.gitb0b750d.el7.centos.x86_6 33/38
node1:
node1: Verifying : conntrack-tools-1.4.4-4.el7.x86_64 9/38
node2:
node2: Verifying : python-ipaddress-1.0.16-2.el7.noarch 34/38
node1:
node1: Verifying : python-setuptools-0.9.8-7.el7.noarch 10/38
node2:
node2: Verifying : 2:docker-1.13.1-91.git07f3374.el7.centos.x86_64 35/38
node1:
node1: Verifying : 2:docker-client-1.13.1-91.git07f3374.el7.centos.x86_64 11/38
node2:
node2: Verifying : PyYAML-3.10-11.el7.x86_64 36/38
node1:
node1: Verifying : subscription-manager-rhsm-certificates-1.21.10-3.el7.cen 12/38
node2:
node2: Verifying : kubernetes-cni-0.6.0-0.x86_64 37/38
node1:
node1: Verifying : 1:oci-systemd-hook-0.1.18-3.git8787307.el7_6.x86_64 13/38
node2:
node2: Verifying : socat-1.7.3.2-2.el7.x86_64 38/38
node1:
node1: Verifying : 2:oci-umount-2.3.4-2.git87f9237.el7.x86_64 14/38
node1:
node1: Verifying : checkpolicy-2.5-8.el7.x86_64 15/38
node1:
node1: Verifying : libnetfilter_queue-1.0.2-2.el7_2.x86_64 16/38
node1:
node1: Verifying : tree-1.6.0-10.el7.x86_64 17/38
node1:
node1: Verifying : python-IPy-0.75-6.el7.noarch 18/38
node1:
node1: Verifying : 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64 19/38
node1:
node1: Verifying : kubelet-1.13.3-0.x86_64 20/38
node1:
node1: Verifying : 1:atomic-registries-1.22.1-26.gitb507039.el7.centos.x86_ 21/38
node1:
node1: Verifying : python-pytoml-0.1.14-1.git7dea353.el7.noarch 22/38
node1:
node1: Verifying : setools-libs-3.3.8-4.el7.x86_64 23/38
node1:
node1: Verifying : kubectl-1.13.3-0.x86_64 24/38
node2:
node2:
node2: Installed:
node2: conntrack-tools.x86_64 0:1.4.4-4.el7
node2: docker.x86_64 2:1.13.1-91.git07f3374.el7.centos
node2: kubeadm.x86_64 0:1.13.3-0
node2: kubelet.x86_64 0:1.13.3-0
node2: screen.x86_64 0:4.1.0-0.25.20120314git3c2946.el7
node2: socat.x86_64 0:1.7.3.2-2.el7
node2: tree.x86_64 0:1.6.0-10.el7
node2:
node2: Dependency Installed:
node2: PyYAML.x86_64 0:3.10-11.el7
node2: atomic-registries.x86_64 1:1.22.1-26.gitb507039.el7.centos
node2: audit-libs-python.x86_64 0:2.8.4-4.el7
node2: checkpolicy.x86_64 0:2.5-8.el7
node2: container-selinux.noarch 2:2.74-1.el7
node2: container-storage-setup.noarch 0:0.11.0-2.git5eaf76c.el7
node2: containers-common.x86_64 1:0.1.31-8.gitb0b750d.el7.centos
node2: cri-tools.x86_64 0:1.12.0-0
node2: docker-client.x86_64 2:1.13.1-91.git07f3374.el7.centos
node2: docker-common.x86_64 2:1.13.1-91.git07f3374.el7.centos
node2: kubectl.x86_64 0:1.13.3-0
node2: kubernetes-cni.x86_64 0:0.6.0-0
node2: libcgroup.x86_64 0:0.41-20.el7
node2: libnetfilter_cthelper.x86_64 0:1.0.0-9.el7
node2: libnetfilter_cttimeout.x86_64 0:1.0.0-6.el7
node2: libnetfilter_queue.x86_64 0:1.0.2-2.el7_2
node2: libsemanage-python.x86_64 0:2.5-14.el7
node2: libyaml.x86_64 0:0.1.4-11.el7_0
node2: oci-register-machine.x86_64 1:0-6.git2b44233.el7
node2: oci-systemd-hook.x86_64 1:0.1.18-3.git8787307.el7_6
node2: oci-umount.x86_64 2:2.3.4-2.git87f9237.el7
node2: policycoreutils-python.x86_64 0:2.5-29.el7_6.1
node2: python-IPy.noarch 0:0.75-6.el7
node2: python-backports.x86_64 0:1.0-8.el7
node2: python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7
node2: python-ipaddress.noarch 0:1.0.16-2.el7
node2: python-pytoml.noarch 0:0.1.14-1.git7dea353.el7
node2: python-setuptools.noarch 0:0.9.8-7.el7
node2: setools-libs.x86_64 0:3.3.8-4.el7
node2: subscription-manager-rhsm-certificates.x86_64 0:1.21.10-3.el7.centos
node2: yajl.x86_64 0:2.0.4-4.el7
node2: Complete!
node1:
node1: Verifying : policycoreutils-python-2.5-29.el7_6.1.x86_64 25/38
node1:
node1: Verifying : libsemanage-python-2.5-14.el7.x86_64 26/38
node1:
node1: Verifying : 1:oci-register-machine-0-6.git2b44233.el7.x86_64 27/38
node1:
node1: Verifying : screen-4.1.0-0.25.20120314git3c2946.el7.x86_64 28/38
node1:
node1: Verifying : python-backports-1.0-8.el7.x86_64 29/38
node2: ++ systemctl enable kubelet
node1:
node1: Verifying : yajl-2.0.4-4.el7.x86_64 30/38
node2: Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
node1:
node1: Verifying : audit-libs-python-2.8.4-4.el7.x86_64 31/38
node1:
node1: Verifying : libyaml-0.1.4-11.el7_0.x86_64 32/38
node1:
node1: Verifying : 1:containers-common-0.1.31-8.gitb0b750d.el7.centos.x86_6 33/38
node1:
node1: Verifying : python-ipaddress-1.0.16-2.el7.noarch 34/38
node1:
node1: Verifying : 2:docker-1.13.1-91.git07f3374.el7.centos.x86_64 35/38
node1:
node1: Verifying : PyYAML-3.10-11.el7.x86_64 36/38
node1:
node1: Verifying : kubernetes-cni-0.6.0-0.x86_64 37/38
node2: ++ systemctl start kubelet
node1:
node1: Verifying : socat-1.7.3.2-2.el7.x86_64 38/38
node2: ++ systemctl enable docker
node2: Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
node2: ++ systemctl start docker
node1:
node1:
node1: Installed:
node1: conntrack-tools.x86_64 0:1.4.4-4.el7
node1: docker.x86_64 2:1.13.1-91.git07f3374.el7.centos
node1: kubeadm.x86_64 0:1.13.3-0
node1: kubelet.x86_64 0:1.13.3-0
node1: screen.x86_64 0:4.1.0-0.25.20120314git3c2946.el7
node1: socat.x86_64 0:1.7.3.2-2.el7
node1: tree.x86_64 0:1.6.0-10.el7
node1:
node1: Dependency Installed:
node1: PyYAML.x86_64 0:3.10-11.el7
node1: atomic-registries.x86_64 1:1.22.1-26.gitb507039.el7.centos
node1: audit-libs-python.x86_64 0:2.8.4-4.el7
node1: checkpolicy.x86_64 0:2.5-8.el7
node1: container-selinux.noarch 2:2.74-1.el7
node1: container-storage-setup.noarch 0:0.11.0-2.git5eaf76c.el7
node1: containers-common.x86_64 1:0.1.31-8.gitb0b750d.el7.centos
node1: cri-tools.x86_64 0:1.12.0-0
node1: docker-client.x86_64 2:1.13.1-91.git07f3374.el7.centos
node1: docker-common.x86_64 2:1.13.1-91.git07f3374.el7.centos
node1: kubectl.x86_64 0:1.13.3-0
node1: kubernetes-cni.x86_64 0:0.6.0-0
node1: libcgroup.x86_64 0:0.41-20.el7
node1: libnetfilter_cthelper.x86_64 0:1.0.0-9.el7
node1: libnetfilter_cttimeout.x86_64 0:1.0.0-6.el7
node1: libnetfilter_queue.x86_64 0:1.0.2-2.el7_2
node1: libsemanage-python.x86_64 0:2.5-14.el7
node1: libyaml.x86_64 0:0.1.4-11.el7_0
node1: oci-register-machine.x86_64 1:0-6.git2b44233.el7
node1: oci-systemd-hook.x86_64 1:0.1.18-3.git8787307.el7_6
node1: oci-umount.x86_64 2:2.3.4-2.git87f9237.el7
node1: policycoreutils-python.x86_64 0:2.5-29.el7_6.1
node1: python-IPy.noarch 0:0.75-6.el7
node1: python-backports.x86_64 0:1.0-8.el7
node1: python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7
node1: python-ipaddress.noarch 0:1.0.16-2.el7
node1: python-pytoml.noarch 0:0.1.14-1.git7dea353.el7
node1: python-setuptools.noarch 0:0.9.8-7.el7
node1: setools-libs.x86_64 0:3.3.8-4.el7
node1: subscription-manager-rhsm-certificates.x86_64 0:1.21.10-3.el7.centos
node1: yajl.x86_64 0:2.0.4-4.el7
node1: Complete!
node1: ++ systemctl enable kubelet
node1: Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
node1: ++ systemctl start kubelet
node1: ++ systemctl enable docker
node1: Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
node1: ++ systemctl start docker
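[Editor's note: the "Created symlink from .../multi-user.target.wants/kubelet.service ..." lines above show all that `systemctl enable` does here: it drops a symlink into the target's `.wants/` directory. A minimal sketch reproducing that link layout in a throwaway temp directory (paths are illustrative stand-ins, not the real `/etc/systemd` tree):]

```shell
# Mimic the effect of `systemctl enable kubelet` as reported in the log:
# a symlink in multi-user.target.wants/ pointing at the unit file.
tmp=$(mktemp -d)
mkdir -p "$tmp/multi-user.target.wants"
touch "$tmp/kubelet.service"                      # stand-in unit file
ln -s "$tmp/kubelet.service" "$tmp/multi-user.target.wants/kubelet.service"

# A symlink in the target's .wants/ directory is what "enabled" means:
# systemd pulls the unit in when that target is reached at boot.
if [ -L "$tmp/multi-user.target.wants/kubelet.service" ]; then
    echo "kubelet enabled"
    link_ok=yes
fi
rm -rf "$tmp"
```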
master:
master: Installing : 2:docker-1.13.1-91.git07f3374.el7.centos.x86_64 35/38
==> node2: Running provisioner: shell...
==> node1: Running provisioner: shell...
node2: Running: inline script
node2: Client:
node2: Version: 1.13.1
node2: API version: 1.26
node2: Package version: docker-1.13.1-91.git07f3374.el7.centos.x86_64
node2: Go version: go1.10.3
node2: Git commit: 07f3374/1.13.1
node2: Built: Wed Feb 13 17:10:12 2019
node2: OS/Arch: linux/amd64
node2:
node2: Server:
node2: Version: 1.13.1
node2: API version: 1.26 (minimum version 1.12)
node2: Package version: docker-1.13.1-91.git07f3374.el7.centos.x86_64
node2: Go version: go1.10.3
node2: Git commit: 07f3374/1.13.1
node2: Built: Wed Feb 13 17:10:12 2019
node2: OS/Arch: linux/amd64
node2: Experimental: false
node2: kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:05:53Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
node2: Kubernetes v1.13.3
==> node2: Running provisioner: diskandreboot...
Halting vm node2 (0c3820b6-51da-4adb-944e-12af8663ccd8)
node1: Running: inline script
node1: Client:
node1: Version: 1.13.1
node1: API version: 1.26
node1: Package version: docker-1.13.1-91.git07f3374.el7.centos.x86_64
node1: Go version: go1.10.3
node1: Git commit: 07f3374/1.13.1
node1: Built: Wed Feb 13 17:10:12 2019
node1: OS/Arch: linux/amd64
node1:
node1: Server:
node1: Version: 1.13.1
node1: API version: 1.26 (minimum version 1.12)
node1: Package version: docker-1.13.1-91.git07f3374.el7.centos.x86_64
node1: Go version: go1.10.3
node1: Git commit: 07f3374/1.13.1
node1: Built: Wed Feb 13 17:10:12 2019
node1: OS/Arch: linux/amd64
node1: Experimental: false
node1: kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:05:53Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
node1: Kubernetes v1.13.3
==> node1: Running provisioner: diskandreboot...
Halting vm node1 (37795969-8969-42c3-b16d-a685ad88bad6)
master:
master: Installing : kubeadm-1.13.3-0.x86_64 36/38
master:
master: Installing : screen-4.1.0-0.25.20120314git3c2946.el7.x86_64 37/38
master:
master: Installing : tree-1.6.0-10.el7.x86_64 38/38
master:
master: Verifying : libcgroup-0.41-20.el7.x86_64 1/38
master:
master: Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 2/38
master:
master: Verifying : libnetfilter_cttimeout-1.0.0-6.el7.x86_64 3/38
master:
master: Verifying : container-storage-setup-0.11.0-2.git5eaf76c.el7.noarch 4/38
master:
master: Verifying : cri-tools-1.12.0-0.x86_64 5/38
master:
master: Verifying : kubeadm-1.13.3-0.x86_64 6/38
master:
master: Verifying : libnetfilter_cthelper-1.0.0-9.el7.x86_64 7/38
master:
master: Verifying : 2:container-selinux-2.74-1.el7.noarch 8/38
master:
master: Verifying : conntrack-tools-1.4.4-4.el7.x86_64 9/38
master:
master: Verifying : python-setuptools-0.9.8-7.el7.noarch 10/38
master:
master: Verifying : 2:docker-client-1.13.1-91.git07f3374.el7.centos.x86_64 11/38
master:
master: Verifying : subscription-manager-rhsm-certificates-1.21.10-3.el7.cen 12/38
master:
master: Verifying : 1:oci-systemd-hook-0.1.18-3.git8787307.el7_6.x86_64 13/38
master:
master: Verifying : 2:oci-umount-2.3.4-2.git87f9237.el7.x86_64 14/38
==> node2: Attempting graceful shutdown of VM...
master:
master: Verifying : checkpolicy-2.5-8.el7.x86_64 15/38
master:
master: Verifying : libnetfilter_queue-1.0.2-2.el7_2.x86_64 16/38
master:
master: Verifying : tree-1.6.0-10.el7.x86_64 17/38
master:
master: Verifying : python-IPy-0.75-6.el7.noarch 18/38
master:
master: Verifying : 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64 19/38
master:
master: Verifying : kubelet-1.13.3-0.x86_64 20/38
master:
master: Verifying : 1:atomic-registries-1.22.1-26.gitb507039.el7.centos.x86_ 21/38
master:
master: Verifying : python-pytoml-0.1.14-1.git7dea353.el7.noarch 22/38
master:
master: Verifying : setools-libs-3.3.8-4.el7.x86_64 23/38
master:
master: Verifying : kubectl-1.13.3-0.x86_64 24/38
master:
master: Verifying : policycoreutils-python-2.5-29.el7_6.1.x86_64 25/38
master:
master: Verifying : libsemanage-python-2.5-14.el7.x86_64 26/38
master:
master: Verifying : 1:oci-register-machine-0-6.git2b44233.el7.x86_64 27/38
master:
master: Verifying : screen-4.1.0-0.25.20120314git3c2946.el7.x86_64 28/38
master:
master: Verifying : python-backports-1.0-8.el7.x86_64 29/38
master:
master: Verifying : yajl-2.0.4-4.el7.x86_64 30/38
master:
master: Verifying : audit-libs-python-2.8.4-4.el7.x86_64 31/38
master:
master: Verifying : libyaml-0.1.4-11.el7_0.x86_64 32/38
master:
master: Verifying : 1:containers-common-0.1.31-8.gitb0b750d.el7.centos.x86_6 33/38
==> node1: Attempting graceful shutdown of VM...
master:
master: Verifying : python-ipaddress-1.0.16-2.el7.noarch 34/38
master:
master: Verifying : 2:docker-1.13.1-91.git07f3374.el7.centos.x86_64 35/38
master:
master: Verifying : PyYAML-3.10-11.el7.x86_64 36/38
master:
master: Verifying : kubernetes-cni-0.6.0-0.x86_64 37/38
master:
master: Verifying : socat-1.7.3.2-2.el7.x86_64 38/38
master:
master:
master: Installed:
master: conntrack-tools.x86_64 0:1.4.4-4.el7
master: docker.x86_64 2:1.13.1-91.git07f3374.el7.centos
master: kubeadm.x86_64 0:1.13.3-0
master: kubelet.x86_64 0:1.13.3-0
master: screen.x86_64 0:4.1.0-0.25.20120314git3c2946.el7
master: socat.x86_64 0:1.7.3.2-2.el7
master: tree.x86_64 0:1.6.0-10.el7
master:
master: Dependency Installed:
master: PyYAML.x86_64 0:3.10-11.el7
master: atomic-registries.x86_64 1:1.22.1-26.gitb507039.el7.centos
master: audit-libs-python.x86_64 0:2.8.4-4.el7
master: checkpolicy.x86_64 0:2.5-8.el7
master: container-selinux.noarch 2:2.74-1.el7
master: container-storage-setup.noarch 0:0.11.0-2.git5eaf76c.el7
master: containers-common.x86_64 1:0.1.31-8.gitb0b750d.el7.centos
master: cri-tools.x86_64 0:1.12.0-0
master: docker-client.x86_64 2:1.13.1-91.git07f3374.el7.centos
master: docker-common.x86_64 2:1.13.1-91.git07f3374.el7.centos
master: kubectl.x86_64 0:1.13.3-0
master: kubernetes-cni.x86_64 0:0.6.0-0
master: libcgroup.x86_64 0:0.41-20.el7
master: libnetfilter_cthelper.x86_64 0:1.0.0-9.el7
master: libnetfilter_cttimeout.x86_64 0:1.0.0-6.el7
master: libnetfilter_queue.x86_64 0:1.0.2-2.el7_2
master: libsemanage-python.x86_64 0:2.5-14.el7
master: libyaml.x86_64 0:0.1.4-11.el7_0
master: oci-register-machine.x86_64 1:0-6.git2b44233.el7
master: oci-systemd-hook.x86_64 1:0.1.18-3.git8787307.el7_6
master: oci-umount.x86_64 2:2.3.4-2.git87f9237.el7
master: policycoreutils-python.x86_64 0:2.5-29.el7_6.1
master: python-IPy.noarch 0:0.75-6.el7
master: python-backports.x86_64 0:1.0-8.el7
master: python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7
master: python-ipaddress.noarch 0:1.0.16-2.el7
master: python-pytoml.noarch 0:0.1.14-1.git7dea353.el7
master: python-setuptools.noarch 0:0.9.8-7.el7
master: setools-libs.x86_64 0:3.3.8-4.el7
master: subscription-manager-rhsm-certificates.x86_64 0:1.21.10-3.el7.centos
master: yajl.x86_64 0:2.0.4-4.el7
master: Complete!
master: ++ systemctl enable kubelet
master: Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
master: ++ systemctl start kubelet
master: ++ systemctl enable docker
master: Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
master: ++ systemctl start docker
==> master: Running provisioner: shell...
master: Running: inline script
master: Client:
master: Version: 1.13.1
master: API version: 1.26
master: Package version: docker-1.13.1-91.git07f3374.el7.centos.x86_64
master: Go version: go1.10.3
master: Git commit: 07f3374/1.13.1
master: Built: Wed Feb 13 17:10:12 2019
master: OS/Arch: linux/amd64
master:
master: Server:
master: Version: 1.13.1
master: API version: 1.26 (minimum version 1.12)
master: Package version: docker-1.13.1-91.git07f3374.el7.centos.x86_64
master: Go version: go1.10.3
master: Git commit: 07f3374/1.13.1
master: Built: Wed Feb 13 17:10:12 2019
master: OS/Arch: linux/amd64
master: Experimental: false
master: kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:05:53Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
master: Kubernetes v1.13.3
==> master: Running provisioner: diskandreboot...
Halting vm master (3c95d4ed-547e-4fba-ad48-88234a29e313)
==> master: Attempting graceful shutdown of VM...
Adding storage controller
Added storage controller
Adding disk 1
Creating disk 1 for node1
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: d2d214e6-bdfa-44b3-9d7c-9c90fa3d7af5
Created disk 1 for node1
Added disk 1
Starting vm node1
==> node1: Checking if box 'generic/centos7' version '1.9.2' is up to date...
==> node1: Clearing any previously set forwarded ports...
==> node1: Clearing any previously set network interfaces...
==> node1: Preparing network interfaces based on configuration...
node1: Adapter 1: nat
node1: Adapter 2: hostonly
==> node1: Forwarding ports...
node1: 22 (guest) => 2222 (host) (adapter 1)
==> node1: Running 'pre-boot' VM customizations...
==> node1: Booting VM...
==> node1: Waiting for machine to boot. This may take a few minutes...
Adding storage controller
Added storage controller
Adding disk 1
Creating disk 1 for node2
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: fb913b22-4b9b-4dc4-b57b-510543ee41de
Created disk 1 for node2
Added disk 1
Starting vm node2
==> node2: Checking if box 'generic/centos7' version '1.9.2' is up to date...
==> node2: Clearing any previously set forwarded ports...
==> node2: Clearing any previously set network interfaces...
==> node2: Preparing network interfaces based on configuration...
node2: Adapter 1: nat
node2: Adapter 2: hostonly
==> node2: Forwarding ports...
node2: 22 (guest) => 2201 (host) (adapter 1)
==> node2: Running 'pre-boot' VM customizations...
==> node2: Booting VM...
==> node2: Waiting for machine to boot. This may take a few minutes...
Adding storage controller
Added storage controller
Adding disk 1
Creating disk 1 for master
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: 861d2382-f01a-4e53-8584-d2510fc13812
Created disk 1 for master
Added disk 1
Starting vm master
==> master: Checking if box 'generic/centos7' version '1.9.2' is up to date...
==> master: Clearing any previously set forwarded ports...
==> master: Clearing any previously set network interfaces...
==> master: Preparing network interfaces based on configuration...
master: Adapter 1: nat
master: Adapter 2: hostonly
==> master: Forwarding ports...
master: 22 (guest) => 2200 (host) (adapter 1)
==> master: Running 'pre-boot' VM customizations...
==> master: Booting VM...
==> master: Waiting for machine to boot. This may take a few minutes...
==> node1: Machine booted and ready!
==> node1: Checking for guest additions in VM...
node1: The guest additions on this VM do not match the installed version of
node1: VirtualBox! In most cases this is fine, but in rare cases it can
node1: prevent things such as shared folders from working properly. If you see
node1: shared folder errors, please make sure the guest additions within the
node1: virtual machine match the version of VirtualBox you have installed on
node1: your host and reload your VM.
node1:
node1: Guest Additions Version: 5.1.38
node1: VirtualBox Version: 6.0
==> node1: Setting hostname...
==> node1: Configuring and enabling network interfaces...
==> node1: Rsyncing folder: /Users/rameshkumar/k8s/k8s-vagrant-multi-node/data/centos-node1/ => /data
==> node1: Machine not provisioned because `--no-provision` is specified.
==> master: Machine booted and ready!
==> master: Checking for guest additions in VM...
master: The guest additions on this VM do not match the installed version of
master: VirtualBox! In most cases this is fine, but in rare cases it can
master: prevent things such as shared folders from working properly. If you see
master: shared folder errors, please make sure the guest additions within the
master: virtual machine match the version of VirtualBox you have installed on
master: your host and reload your VM.
master:
master: Guest Additions Version: 5.1.38
master: VirtualBox Version: 6.0
==> master: Setting hostname...
==> master: Configuring and enabling network interfaces...
==> master: Rsyncing folder: /Users/rameshkumar/k8s/k8s-vagrant-multi-node/data/centos-master/ => /data
==> node2: Machine booted and ready!
==> node2: Checking for guest additions in VM...
node2: The guest additions on this VM do not match the installed version of
node2: VirtualBox! In most cases this is fine, but in rare cases it can
node2: prevent things such as shared folders from working properly. If you see
node2: shared folder errors, please make sure the guest additions within the
node2: virtual machine match the version of VirtualBox you have installed on
node2: your host and reload your VM.
node2:
node2: Guest Additions Version: 5.1.38
node2: VirtualBox Version: 6.0
==> node2: Setting hostname...
==> master: Machine not provisioned because `--no-provision` is specified.
==> node2: Configuring and enabling network interfaces...
==> node2: Rsyncing folder: /Users/rameshkumar/k8s/k8s-vagrant-multi-node/data/centos-node2/ => /data
==> node2: Machine not provisioned because `--no-provision` is specified.
==> node1: Running provisioner: shell...
node1: Running: inline script
node1: ++ kubeadm reset -f
node1: [preflight] running pre-flight checks
node1: [reset] no etcd config found. Assuming external etcd
node1: [reset] please manually reset etcd to prevent further issues
node1: [reset] stopping the kubelet service
node1: [reset] unmounting mounted directories in "/var/lib/kubelet"
node1: [reset] deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]
node1: [reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
node1: [reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
node1:
node1: The reset process does not reset or clean up iptables rules or IPVS tables.
node1: If you wish to reset iptables, you must do so manually.
node1: For example:
node1: iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
node1:
node1: If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
node1: to reset your system's IPVS tables.
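The reset note above is worth capturing as a runnable sketch. The commands are built into variables and printed rather than executed, since flushing iptables on the wrong host is disruptive; the `ipvsadm` line applies only if the cluster ran kube-proxy in IPVS mode:

```shell
#!/bin/sh
# Cleanup commands kubeadm suggests after "kubeadm reset".
# Printed, not run, so the sketch is safe anywhere; paste them
# into a root shell on the actual node to apply.
CLEANUP_IPTABLES="iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X"
CLEANUP_IPVS="ipvsadm --clear   # only for IPVS-mode clusters"
echo "$CLEANUP_IPTABLES"
echo "$CLEANUP_IPVS"
```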
node1: ++ retries=5
node1: ++ (( i=0 ))
node1: ++ (( i<retries ))
node1: ++ kubeadm join --ignore-preflight-errors=SystemVerification --discovery-token-unsafe-skip-ca-verification --token ldvls1.07bnzoriqs1ruyrb 192.168.26.10:6443
node1: [preflight] Running pre-flight checks
node1: [discovery] Trying to connect to API Server "192.168.26.10:6443"
node1: [discovery] Created cluster-info discovery client, requesting info from "https://192.168.26.10:6443"
node1: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
==> master: Running provisioner: shell...
master: Running: inline script
master: ++ kubeadm reset -f
master: [preflight] running pre-flight checks
master: [reset] no etcd config found. Assuming external etcd
master: [reset] please manually reset etcd to prevent further issues
master: [reset] stopping the kubelet service
master: [reset] unmounting mounted directories in "/var/lib/kubelet"
master: [reset] deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]
master: [reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
master: [reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
master:
master: The reset process does not reset or clean up iptables rules or IPVS tables.
master: If you wish to reset iptables, you must do so manually.
master: For example:
master: iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
master:
master: If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
master: to reset your system's IPVS tables.
master: ++ retries=5
master: ++ (( i=0 ))
master: ++ (( i<retries ))
master: ++ kubeadm init --kubernetes-version=1.13.3 --ignore-preflight-errors=SystemVerification --apiserver-advertise-address=192.168.26.10 --pod-network-cidr=10.244.0.0/16 --token ldvls1.07bnzoriqs1ruyrb --token-ttl 0
master: [init] Using Kubernetes version: v1.13.3
master: [preflight] Running pre-flight checks
master: [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
master: [preflight] Pulling images required for setting up a Kubernetes cluster
master: [preflight] This might take a minute or two, depending on the speed of your internet connection
master: [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
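The firewalld warning above lists the ports the control plane needs (6443 for the API server, 10250 for the kubelet). A minimal sketch of the fix, building the `firewall-cmd` invocations into a variable and printing them so it is safe to run off-cluster (execute the printed commands as root on the master):

```shell
#!/bin/sh
# Generate firewalld commands for the ports named in the kubeadm
# preflight warning. Printed rather than run; apply them on the node.
CONTROL_PLANE_PORTS="6443 10250"
CMDS=""
for p in $CONTROL_PLANE_PORTS; do
    CMDS="${CMDS}firewall-cmd --permanent --add-port=${p}/tcp
"
done
CMDS="${CMDS}firewall-cmd --reload"
printf '%s\n' "$CMDS"
```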
node1: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
==> node2: Running provisioner: shell...
node2: Running: inline script
node2: ++ kubeadm reset -f
node2: [preflight] running pre-flight checks
node2: [reset] no etcd config found. Assuming external etcd
node2: [reset] please manually reset etcd to prevent further issues
node2: [reset] stopping the kubelet service
node2: [reset] unmounting mounted directories in "/var/lib/kubelet"
node2: [reset] deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]
node2: [reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
node2: [reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
node2:
node2: The reset process does not reset or clean up iptables rules or IPVS tables.
node2: If you wish to reset iptables, you must do so manually.
node2: For example:
node2: iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
node2:
node2: If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
node2: to reset your system's IPVS tables.
node2: ++ retries=5
node2: ++ (( i=0 ))
node2: ++ (( i<retries ))
node2: ++ kubeadm join --ignore-preflight-errors=SystemVerification --discovery-token-unsafe-skip-ca-verification --token ldvls1.07bnzoriqs1ruyrb 192.168.26.10:6443
node2: [preflight] Running pre-flight checks
node2: [discovery] Trying to connect to API Server "192.168.26.10:6443"
node2: [discovery] Created cluster-info discovery client, requesting info from "https://192.168.26.10:6443"
node2: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
node1: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]

node2: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
node1: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
node2: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
node1: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
node2: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
node1: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
node2: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
node1: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
node2: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
node1: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
master: [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
master: [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
master: [kubelet-start] Activating the kubelet service
master: [certs] Using certificateDir folder "/etc/kubernetes/pki"
master: [certs] Generating "front-proxy-ca" certificate and key
master: [certs] Generating "front-proxy-client" certificate and key
master: [certs] Generating "ca" certificate and key
master: [certs] Generating "apiserver-kubelet-client" certificate and key
node2: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
master: [certs] Generating "apiserver" certificate and key
master: [certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.26.10]
master: [certs] Generating "etcd/ca" certificate and key
master: [certs] Generating "etcd/server" certificate and key
master: [certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [192.168.26.10 127.0.0.1 ::1]
master: [certs] Generating "apiserver-etcd-client" certificate and key
master: [certs] Generating "etcd/peer" certificate and key
master: [certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.26.10 127.0.0.1 ::1]
master: [certs] Generating "etcd/healthcheck-client" certificate and key
master: [certs] Generating "sa" key and public key
master: [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
master: [kubeconfig] Writing "admin.conf" kubeconfig file
master: [kubeconfig] Writing "kubelet.conf" kubeconfig file
master: [kubeconfig] Writing "controller-manager.conf" kubeconfig file
master: [kubeconfig] Writing "scheduler.conf" kubeconfig file
master: [control-plane] Using manifest folder "/etc/kubernetes/manifests"
master: [control-plane] Creating static Pod manifest for "kube-apiserver"
master: [control-plane] Creating static Pod manifest for "kube-controller-manager"
master: [control-plane] Creating static Pod manifest for "kube-scheduler"
master: [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
master: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
node1: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
node2: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
node1: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
node2: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
node1: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
node2: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
node1: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
node2: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
master: [apiclient] All control plane components are healthy after 19.014793 seconds
master: [uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
master: [kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
master: [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master" as an annotation
master: [mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
master: [mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
master: [bootstrap-token] Using token: ldvls1.07bnzoriqs1ruyrb
master: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
master: [bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
master: [bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
master: [bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
master: [bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
master: [addons] Applied essential addon: CoreDNS
master: [addons] Applied essential addon: kube-proxy
master:
master: Your Kubernetes master has initialized successfully!
master:
master: To start using your cluster, you need to run the following as a regular user:
master:
master: mkdir -p $HOME/.kube
master: sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
master: sudo chown $(id -u):$(id -g) $HOME/.kube/config
master:
master: You should now deploy a pod network to the cluster.
master: Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
master: https://kubernetes.io/docs/concepts/cluster-administration/addons/
master:
master: You can now join any number of machines by running the following on each node
master: as root:
master:
master: kubeadm join 192.168.26.10:6443 --token ldvls1.07bnzoriqs1ruyrb --discovery-token-ca-cert-hash sha256:e1f3a689f46caece2701c209db5eac272a788b7f87a551a037ce88bfd09d14d3
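The `--discovery-token-ca-cert-hash` in the join command is the SHA-256 of the cluster CA's DER-encoded public key, so it can be recomputed from `/etc/kubernetes/pki/ca.crt` if the line is lost. A sketch of that openssl pipeline; a throwaway self-signed certificate stands in for the real CA here so the pipeline itself can be exercised anywhere:

```shell
#!/bin/sh
# Recompute a kubeadm-style discovery-token-ca-cert-hash.
# On the master, replace /tmp/demo-ca.crt with /etc/kubernetes/pki/ca.crt.
set -e
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" -days 1 \
    -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt 2>/dev/null
HASH=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 | awk '{print $NF}')
echo "sha256:${HASH}"
```

If only the token expired, `kubeadm token create --print-join-command` on the master regenerates a complete join line, hash included.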
master: ++ break
master: ++ [[ 5 -eq i ]]
master: ++ KUBELET_EXTRA_ARGS_FILE=/etc/sysconfig/kubelet
master: ++ '[' '!' -f /etc/sysconfig/kubelet ']'
master: ++ grep -q -- --node-ip= /etc/sysconfig/kubelet
master: ++ sed -i 's/KUBELET_EXTRA_ARGS=/KUBELET_EXTRA_ARGS=--node-ip=192.168.26.10 /' /etc/sysconfig/kubelet
master: ++ systemctl daemon-reload
master: ++ systemctl restart kubelet.service
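The grep/sed pair traced above works around a classic multi-NIC Vagrant issue: without `--node-ip`, the kubelet registers the NAT adapter's address instead of the host-only IP. A self-contained sketch of that edit, using `/tmp/kubelet` as a stand-in for `/etc/sysconfig/kubelet` so it can run off-node:

```shell
#!/bin/sh
# Inject --node-ip into KUBELET_EXTRA_ARGS, idempotently: skip if the
# flag is already present, otherwise rewrite in place as the provisioner does.
NODE_IP=192.168.26.10
KUBELET_EXTRA_ARGS_FILE=/tmp/kubelet   # /etc/sysconfig/kubelet on the node
echo 'KUBELET_EXTRA_ARGS=' > "$KUBELET_EXTRA_ARGS_FILE"
if ! grep -q -- --node-ip= "$KUBELET_EXTRA_ARGS_FILE"; then
    sed -i "s/KUBELET_EXTRA_ARGS=/KUBELET_EXTRA_ARGS=--node-ip=${NODE_IP} /" \
        "$KUBELET_EXTRA_ARGS_FILE"
fi
cat "$KUBELET_EXTRA_ARGS_FILE"
```

On a real node this is followed by `systemctl daemon-reload && systemctl restart kubelet`, as in the trace.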
master: ++ mkdir -p /root/.kube
master: ++ cp -Rf /etc/kubernetes/admin.conf /root/.kube/config
master: +++ id -u
master: +++ id -g
master: ++ chown 0:0 /root/.kube/config
master: ++ '[' flannel == flannel ']'
master: ++ curl --retry 5 --fail -s https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
master: ++ awk '/- --kube-subnet-mgr/{print " - --iface=eth1"}1'
master: ++ kubectl apply -f -
master: podsecuritypolicy.extensions/psp.flannel.unprivileged created
master: clusterrole.rbac.authorization.k8s.io/flannel created
master: clusterrolebinding.rbac.authorization.k8s.io/flannel created
master: serviceaccount/flannel created
master: configmap/kube-flannel-cfg created
master: daemonset.extensions/kube-flannel-ds-amd64 created
master: daemonset.extensions/kube-flannel-ds-arm64 created
master: daemonset.extensions/kube-flannel-ds-arm created
master: daemonset.extensions/kube-flannel-ds-ppc64le created
master: daemonset.extensions/kube-flannel-ds-s390x created
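The awk filter in the trace pins flannel to the host-only interface: on these two-adapter boxes flannel would otherwise pick the NAT adapter (eth0) and pod traffic between VMs would fail. A sketch of the transform on a minimal stand-in for the manifest's args list (indentation is illustrative, not the exact manifest layout):

```shell
#!/bin/sh
# Append "- --iface=eth1" next to the --kube-subnet-mgr arg, as the
# provisioner does before piping kube-flannel.yml to kubectl apply.
INPUT='        args:
        - --ip-masq
        - --kube-subnet-mgr'
OUT=$(printf '%s\n' "$INPUT" \
    | awk '/- --kube-subnet-mgr/{print "        - --iface=eth1"}1')
printf '%s\n' "$OUT"
```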
node1: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
node2: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
node1: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
node2: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
node2: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
node1: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
node2: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
[... node1 and node2 repeat the two messages above, retrying discovery, until the run is interrupted ...]
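The retry loop above means the worker VMs cannot open a TCP connection to the master's kube-apiserver at 192.168.26.10:6443 (the host-only address from this log). A minimal reachability probe, run from inside a node (e.g. `vagrant ssh node1`), is sketched below; it assumes bash and coreutils `timeout` are available in the guest, as they are on this CentOS box:

```shell
#!/usr/bin/env bash
# Probe the same endpoint kubeadm's discovery step is failing on.
# Assumption: 192.168.26.10:6443 is the master's host-only API-server
# address, as reported in the log above.
port_open() {
  # bash's /dev/tcp pseudo-device attempts a TCP connect; `timeout`
  # keeps a blackholed route from hanging the probe indefinitely.
  timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

if port_open 192.168.26.10 6443; then
  echo "apiserver reachable"
else
  echo "cannot connect - check the master's firewalld and the host-only adapter"
fi
```

If the probe fails while `ping 192.168.26.10` succeeds, the usual culprit on CentOS guests is firewalld: its default REJECT rule answers with `icmp-host-prohibited`, which the client surfaces as exactly this "no route to host" error. Opening the port on the master (`sudo firewall-cmd --permanent --add-port=6443/tcp && sudo firewall-cmd --reload`) and re-running the join typically clears it; if ping also fails, verify the VirtualBox host-only adapter came up on all three VMs.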
^C/opt/vagrant/embedded/gems/2.2.3/gems/concurrent-ruby-1.1.4/lib/concurrent/collection/map/mri_map_backend.rb:18:in `synchronize': can't be called from trap context (ThreadError)
from /opt/vagrant/embedded/gems/2.2.3/gems/concurrent-ruby-1.1.4/lib/concurrent/collection/map/mri_map_backend.rb:18:in `[]='
from /opt/vagrant/embedded/gems/2.2.3/gems/i18n-1.1.1/lib/i18n.rb:358:in `normalize_key'
from /opt/vagrant/embedded/gems/2.2.3/gems/i18n-1.1.1/lib/i18n.rb:298:in `normalize_keys'
from /opt/vagrant/embedded/gems/2.2.3/gems/i18n-1.1.1/lib/i18n/backend/simple.rb:84:in `lookup'
from /opt/vagrant/embedded/gems/2.2.3/gems/i18n-1.1.1/lib/i18n/backend/base.rb:30:in `translate'
from /opt/vagrant/embedded/gems/2.2.3/gems/i18n-1.1.1/lib/i18n.rb:185:in `block in translate'
from /opt/vagrant/embedded/gems/2.2.3/gems/i18n-1.1.1/lib/i18n.rb:181:in `catch'
from /opt/vagrant/embedded/gems/2.2.3/gems/i18n-1.1.1/lib/i18n.rb:181:in `translate'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/runner.rb:59:in `block in run'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/util/busy.rb:49:in `block in fire_callbacks'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/util/busy.rb:49:in `each'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/util/busy.rb:49:in `fire_callbacks'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/util/busy.rb:33:in `block (2 levels) in register'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/batch_action.rb:127:in `join'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/batch_action.rb:127:in `block in run'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/batch_action.rb:65:in `each'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/batch_action.rb:65:in `run'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/environment.rb:280:in `block (2 levels) in batch'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/environment.rb:275:in `tap'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/environment.rb:275:in `block in batch'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/environment.rb:274:in `synchronize'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/environment.rb:274:in `batch'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/plugins/commands/up/command.rb:97:in `execute'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/cli.rb:58:in `execute'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/environment.rb:291:in `cli'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/bin/vagrant:182:in `<main>'
/opt/vagrant/embedded/gems/2.2.3/gems/concurrent-ruby-1.1.4/lib/concurrent/collection/map/mri_map_backend.rb:18:in `synchronize': can't be called from trap context (ThreadError)
from /opt/vagrant/embedded/gems/2.2.3/gems/concurrent-ruby-1.1.4/lib/concurrent/collection/map/mri_map_backend.rb:18:in `[]='
from /opt/vagrant/embedded/gems/2.2.3/gems/i18n-1.1.1/lib/i18n.rb:358:in `normalize_key'
from /opt/vagrant/embedded/gems/2.2.3/gems/i18n-1.1.1/lib/i18n.rb:298:in `normalize_keys'
from /opt/vagrant/embedded/gems/2.2.3/gems/i18n-1.1.1/lib/i18n/backend/simple.rb:84:in `lookup'
from /opt/vagrant/embedded/gems/2.2.3/gems/i18n-1.1.1/lib/i18n/backend/base.rb:30:in `translate'
from /opt/vagrant/embedded/gems/2.2.3/gems/i18n-1.1.1/lib/i18n.rb:185:in `block in translate'
from /opt/vagrant/embedded/gems/2.2.3/gems/i18n-1.1.1/lib/i18n.rb:181:in `catch'
from /opt/vagrant/embedded/gems/2.2.3/gems/i18n-1.1.1/lib/i18n.rb:181:in `translate'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/runner.rb:59:in `block in run'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/util/busy.rb:49:in `block in fire_callbacks'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/util/busy.rb:49:in `each'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/util/busy.rb:49:in `fire_callbacks'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/util/busy.rb:33:in `block (2 levels) in register'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/batch_action.rb:127:in `join'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/batch_action.rb:127:in `block in run'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/batch_action.rb:65:in `each'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/batch_action.rb:65:in `run'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/environment.rb:280:in `block (2 levels) in batch'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/environment.rb:275:in `tap'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/environment.rb:275:in `block in batch'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/environment.rb:274:in `synchronize'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/environment.rb:274:in `batch'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/plugins/commands/up/command.rb:97:in `execute'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/cli.rb:58:in `execute'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/environment.rb:291:in `cli'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/bin/vagrant:182:in `<main>'
make[2]: *** [start-node-1] Error 1
make[2]: *** [start-node-2] Error 1
make[1]: *** [start] Interrupt: 2
make: *** [up] Interrupt: 2
kylix3511.mylabserver.com >>>>