
kubernetes's Introduction

kubernetes

Kubernetes playground

kubernetes's People

Contributors

a-hahn, bilal-io, boxleafdigital, foofoo-2, gustavomr, imrajdas, jinostrozam, jumpojoy, justmeandopensource, khann-adill, kinneyd81, kondanta, matthiashertel, proop, pushp1997, vnagappan-checkit, xclud


kubernetes's Issues

issue in "vagrant up" command

kmaster: cp: cannot stat ‘/etc/kubernetes/admin.conf’: No such file or directory
kmaster: [TASK 3] Deploy flannel network
kmaster: The connection to the server localhost:8080 was refused - did you specify the right host or port?
kmaster: [TASK 4] Generate and save cluster join command to /joincluster.sh
kmaster: failed to load admin kubeconfig: open /root/.kube/config: no such file or directory
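The missing admin.conf almost always means the earlier kubeadm init step failed silently inside the provisioning script. A rough way to surface the real error (the VM name and the flannel-style pod CIDR are assumptions based on this setup):

vagrant ssh kmaster
# look for the kubelet/kubeadm failure that prevented admin.conf from being written
sudo journalctl -u kubelet --no-pager | tail -n 50
# re-run the init step by hand to see its full output
sudo kubeadm init --pod-network-cidr=10.244.0.0/16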

[Question] Mac OSX setup

Describe the bug
The deployment was fine, but after adding HAProxy it failed with a timeout.

How To Reproduce

/usr/local/etc/haproxy/haproxy_mac.cfg:

global
    log 127.0.0.1   local0
    log 127.0.0.1   local1 debug
    #log loghost    local0 info
    maxconn 4096
    chroot /usr/local/share/haproxy
    uid 99
    gid 99
    #daemon
    #debug
    #quiet

defaults
    log global
    mode http
    option httplog
    option dontlognull
    retries 3
    option redispatch
    maxconn 2000
    timeout connect      5000
    timeout client      50000
    timeout server      50000

frontend www
    bind *:80
    mode http
    use_backend be_api if { path_beg /api }
    default_backend webservers

backend webservers
    mode http
    balance roundrobin
    #option forwardfor
    #http-request set-header X-Forwarded-Port %[dst_port]
    #http-request add-header X-Forwarded-Proto https if { ssl_fc }
    #option httpchk HEAD / HTTP/1.1\r\nHost:localhost
    server server1 <IP1>:5000 check    
    server server2 <IP2>:5000 check    

listen kubernetes-apiserver-https
  bind 192.168.86.52:8383
  mode tcp
  option log-health-checks
  timeout client 3h
  timeout server 3h
  server master1 <IP1>:6443 check check-ssl verify none inter 10000
  server master2 <IP2>:6443 check check-ssl verify none inter 10000
  balance roundrobin          

launchctl start co.zip.haproxy_mac

Expected behavior

Screenshots (if any)

Environment (please complete the following information):

$HOME/Library/LaunchAgents/com.example.haproxy.plist:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
 <key>Label</key>
 <string>com.example.haproxy</string>
 <key>ProgramArguments</key>
 <array>
   <string>/usr/local/bin/haproxy</string>
   <string>-db</string>
   <string>-f</string>
   <string>/usr/local/etc/haproxy/haproxy_mac.cfg</string>
 </array>
</dict>
</plist>
Additional context
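One way to narrow this down is to confirm HAProxy parses the config and can actually reach a master before pointing kubectl at it; a rough sketch, reusing the bind address 192.168.86.52:8383 from the config above:

/usr/local/bin/haproxy -c -f /usr/local/etc/haproxy/haproxy_mac.cfg   # configuration syntax check only
nc -vz 192.168.86.52 8383                                             # is the frontend actually listening?
curl -k https://192.168.86.52:8383/version                            # a TLS answer (even 401/403) proves the proxy path to an apiserver works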

Preserving Source IP in NGINX Ingress Controller

Hi Guys,

I am new to Kubernetes :) I am developing a simple Flask app that stores the client's real IP address.
I built a Docker image and deployed it to Kubernetes, then created a service and installed the Kubernetes ingress-nginx controller.
Everything works fine, except that I can't print the client's real IP address; it always returns the pod's IP. I have spent almost 5-6 days and tried all the solutions I could find on Google.

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: monsy-dep
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: monsy-dep
  replicas: 3
  template:
    metadata:
      labels:
        app: monsy-dep
    spec:
      containers:
      - name: monsy-dep
        image: monsy-image-v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8000

service.yaml

apiVersion: v1
kind: Service
metadata:
  name: monsy-service
  namespace: ingress-nginx
spec:
  selector:
    app: monsy-dep
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8000
  type: NodePort
  externalTrafficPolicy: Local

ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: monsy-ingress
  namespace: ingress-nginx
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: monsy-service
                port:
                  number: 80

ingress-nginx config map

apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"

[Here I tried many different properties; none of them worked.]

Note: the main manifest file is https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.0/deploy/static/provider/cloud/deploy.yaml

kubectl describe ingress monsy-ingress

Name:             monsy-ingress
Namespace:        ingress-nginx
Address:          MY_SERVER_PUBLIC_IP
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host        Path  Backends
  ----        ----  --------
  *           
              /   monsy-service:80 (10.42.0.44:8000,10.42.0.45:8000,10.42.0.46:8000)
Annotations:  kubernetes.io/ingress.class: nginx
              nginx.ingress.kubernetes.io/rewrite-target: /
Events:
  Type    Reason  Age                From                      Message
  ----    ------  ----               ----                      -------
  Normal  Sync    14m (x2 over 14m)  nginx-ingress-controller  Scheduled for sync

kubectl describe service monsy-service

Name:                     monsy-service
Namespace:                ingress-nginx
Labels:                   <none>
Annotations:              <none>
Selector:                 app=monsy-dep
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.43.215.245
IPs:                      10.43.215.245
Port:                     <unset>  80/TCP
TargetPort:               8000/TCP
NodePort:                 <unset>  32501/TCP
Endpoints:                10.42.0.44:8000,10.42.0.45:8000,10.42.0.46:8000
Session Affinity:         None
External Traffic Policy:  Local
Events:                   <none>
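For what it's worth, use-proxy-protocol only helps when the load balancer in front of the controller actually speaks PROXY protocol; with a plain NodePort and externalTrafficPolicy: Local, the client address normally arrives in the X-Forwarded-For / X-Real-IP headers that ingress-nginx adds, so the app has to read those instead of the socket peer address. A hedged sketch (ConfigMap and deployment names taken from the manifest above):

# prefer forwarded headers over proxy-protocol on this bare-metal setup
kubectl -n ingress-nginx patch configmap ingress-nginx-controller --type merge \
  -p '{"data":{"use-proxy-protocol":"false","use-forwarded-headers":"true","compute-full-forwarded-for":"true"}}'
# restart the controller so it reloads the ConfigMap
kubectl -n ingress-nginx rollout restart deployment ingress-nginx-controller
# then, in the Flask app, log request.headers.get("X-Forwarded-For") instead of request.remote_addr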

The connection to the server localhost:8080 was refused - did you specify the right host or port?

Describe the bug
Hi, I watched your video and followed your documents to reproduce a single cluster but hit this issue: "The connection to the server localhost:8080 was refused - did you specify the right host or port?" It would be great if you could give some feedback. Thank you so much!

How To Reproduce
I created 2 VirtualBox VMs using CentOS 7 and added a host-only adapter, then followed your documents until I found my worker could not connect to the master properly. One thing I noticed is that when I ran kubeadm init on the master, I would usually expect the etcd/peer cert to be for localhost and the etcd/server cert to be for kmaster, but in my case both are the same. I am not quite sure how to change this and suspect it might be the cause.

...
[certs] etcd/peer serving cert is signed for DNS names [kmaster.example.com localhost] and IPs [192.168.56.102 127.0.0.1 ::1]
...
[certs] etcd/server serving cert is signed for DNS names [kmaster.example.com localhost] and IPs [192.168.56.102 127.0.0.1 ::1]
...

Also, I noticed that on the master the apiserver, scheduler, and etcd pods keep pending.
[root@kmaster kubelet]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-86c58d9df4-jz5fx 1/1 Running 0 39m
kube-system coredns-86c58d9df4-z9j54 1/1 Running 0 39m
kube-system etcd-localhost.localdomain 0/1 Pending 0 1s
kube-system kube-apiserver-localhost.localdomain 0/1 Pending 0 1s
kube-system kube-flannel-ds-amd64-lstd9 1/1 Running 0 35m
kube-system kube-proxy-vg5n6 1/1 Running 0 39m
kube-system kube-scheduler-localhost.localdomain 0/1 Pending 0 2s
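The pod names etcd-localhost.localdomain and kube-apiserver-localhost.localdomain suggest the node's hostname was still localhost.localdomain when kubeadm ran, which also breaks the generated kubeconfig. A sketch of redoing the init with the hostname set first (the advertise address is taken from the cert output above; the pod CIDR assumes flannel):

sudo hostnamectl set-hostname kmaster.example.com
sudo kubeadm reset -f
sudo kubeadm init --apiserver-advertise-address=192.168.56.102 --pod-network-cidr=10.244.0.0/16
# make kubectl work for the current user (this is what fixes the localhost:8080 error)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config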

bootstrap fails

$ cat bootstrap-kube.sh | lxc exec kmaster bash
[TASK 1] Install docker container engine
[TASK 2] Enable and start docker service
Failed to get D-Bus connection: No such file or directory
[TASK 3] Add yum repo file for kubernetes
[TASK 4] Install Kubernetes (kubeadm, kubelet and kubectl)
[TASK 5] Enable and start kubelet service
[TASK 6] Install and configure ssh
sed: can't read /etc/ssh/sshd_config: No such file or directory
[TASK 7] Set root password
[TASK 8] Install additional packages
[TASK 9] Initialize Kubernetes Cluster
[TASK 10] Copy kube admin config to root user .kube directory
cp: cannot stat '/etc/kubernetes/admin.conf': No such file or directory
[TASK 11] Deploy flannel network
[TASK 12] Generate and save cluster join command to /joincluster.sh

kube-scheduler is Unhealthy

Describe the bug
When bootstrapping the Kubernetes control plane, the kube-scheduler service reports as failing.
It seems that ComponentStatus has been deprecated; is there any workaround for this?

root@controller-0:~# kubectl get componentstatuses --kubeconfig admin.kubeconfig
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Unhealthy Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}

Environment (please complete the following information):
I am using Kubernetes 1.19.0.

kubectl version --client
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", GitCommit:"e19964183377d0ec2052d1f1fa930c4d7575bd50", GitTreeState:"clean", BuildDate:"2020-08-26T14:30:33Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}
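Since ComponentStatus only probes the legacy insecure port (10251), it can report Unhealthy even when the scheduler itself is fine on newer releases. A hedged way to check the scheduler directly, assuming it runs as a systemd unit in this kind of bootstrap:

sudo systemctl status kube-scheduler --no-pager
curl -sk https://127.0.0.1:10259/healthz   # secure healthz endpoint used by current releases
curl -s  http://127.0.0.1:10251/healthz    # legacy insecure endpoint, often disabled in 1.19+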

issue

Describe the bug
user@pc:/etc/kubernetes$ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?

root@pc:~# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?

[vagrant@kmaster ~]$ kubectl get node
NAME STATUS ROLES AGE VERSION
kmaster.example.com Ready master 3h2m v1.16.1
kworker1.example.com Ready 159m v1.16.1
[vagrant@kmaster ~]$ kubectl get rc
No resources found in default namespace.
[vagrant@kmaster ~]$ kubectl get pod
No resources found in default namespace.

[vagrant@kmaster ~]$ ping kworker1
PING kworker1.example.com (172.42.42.101) 56(84) bytes of data.
64 bytes from kworker1.example.com (172.42.42.101): icmp_seq=1 ttl=64 time=0.432 ms
64 bytes from kworker1.example.com (172.42.42.101): icmp_seq=2 ttl=64 time=0.495 ms

--- kworker1.example.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.432/0.463/0.495/0.038 ms

[vagrant@kmaster ~]$ ping kworker2
PING kworker2.example.com (172.42.42.102) 56(84) bytes of data.
64 bytes from kworker2.example.com (172.42.42.102): icmp_seq=1 ttl=64 time=0.917 ms
64 bytes from kworker2.example.com (172.42.42.102): icmp_seq=2 ttl=64 time=0.946 ms

--- kworker2.example.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.917/0.931/0.946/0.033 ms

How To Reproduce

Expected behavior

Screenshots (if any)

Environment (please complete the following information):

Additional context

kube-proxy and coredns don't start up

Describe the bug
First, thanks so much for the videos and the repo here; it has been super helpful.

Basically, with your most recent update that sets the Kubernetes version to 1.14.3, the cluster seems to start, but coredns and kube-proxy don't, and therefore flannel then fails again. I am trying to run this in a top-level LXC container, not directly on the host or in a Vagrant machine, so basically Host -> LXC container -> kmaster/kworker. Perhaps this is causing the issue?

Also note that the previous version using 1.15.0 did not work either, but in that case the issue was something else that I could not debug. Thanks again for the help!

How To Reproduce

Expected behavior

Screenshots (if any)


Environment (please complete the following information):

Ubuntu 18.04 host, Ubuntu 18.04 lxc top level container, centos/7 kmaster and kworker

Additional context

[LXD/LXC] No IPv4 using k8s profile

Describe the bug
I am trying to create a Kubernetes cluster using LXC/LXD machine containers following the video. I used the same Vagrantfile to create the virtual machine using Ubuntu 18.04. When creating a container using the default profile it gets an IPv4 address, while using the k8s profile it only gets an IPv6 address and no IPv4 address.

Expected behavior
Using the k8s profile it should assign an IPv4 address.

Screenshots (if any)
(screenshots of the lxc profile list and lxc list output attached)

Environment (please complete the following information):
Vagrant environment from: Vagrantfile

kubernetes-dashboard 404 page not found

Hi,

I installed the Kubernetes dashboard according to your YouTube series for Kubernetes (Kube 5 - Install Kubernetes Dashboard UI) on the Vagrant Kubernetes cluster from this repository. However, I am unable to access the dashboard and only get a "404 page not found" message in my browser when I try to access https://kworker1:32332/login. The log for the kubernetes-dashboard shows the following:

$ kubectl logs kubernetes-dashboard-6bcfdf8d9d-6mwbx -n kube-system
2019/03/11 18:48:17 Starting overwatch
2019/03/11 18:48:17 Using in-cluster config to connect to apiserver
2019/03/11 18:48:17 Using service account token for csrf signing
2019/03/11 18:48:17 No request provided. Skipping authorization
2019/03/11 18:48:17 Successful initial request to the apiserver, version: v1.13.4
2019/03/11 18:48:17 Generating JWE encryption key
2019/03/11 18:48:17 New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting
2019/03/11 18:48:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2019/03/11 18:48:18 Initializing JWE encryption key from synchronized object
2019/03/11 18:48:18 Creating in-cluster Heapster client
2019/03/11 18:48:18 Auto-generating certificates
2019/03/11 18:48:18 Successful request to heapster
2019/03/11 18:48:18 Successfully created certificates
2019/03/11 18:48:18 Serving securely on HTTPS port: 8443
2019/03/11 18:50:23 http2: server: error reading preface from client 10.244.1.0:41954: remote error: tls: bad certificate
2019/03/11 18:50:23 http2: server: error reading preface from client 10.244.1.0:41956: remote error: tls: bad certificate
2019/03/11 18:51:13 http2: server: error reading preface from client 10.244.2.1:48290: remote error: tls: bad certificate
2019/03/11 18:51:13 http2: server: error reading preface from client 10.244.2.1:48292: remote error: tls: bad certificate
$ kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-86c58d9df4-jp4vr 1/1 Running 0 164m 10.244.0.3 kmaster.example.com <none> <none>
coredns-86c58d9df4-nzgt9 1/1 Running 0 164m 10.244.0.2 kmaster.example.com <none> <none>
etcd-kmaster.example.com 1/1 Running 0 163m 172.42.42.100 kmaster.example.com <none> <none>
heapster-855fc65cd7-kpdmz 1/1 Running 0 3m6s 10.244.1.4 kworker1.example.com <none> <none>
kube-apiserver-kmaster.example.com 1/1 Running 0 163m 172.42.42.100 kmaster.example.com <none> <none>
kube-controller-manager-kmaster.example.com 1/1 Running 0 163m 172.42.42.100 kmaster.example.com <none> <none>
kube-flannel-ds-amd64-8wvph 1/1 Running 0 162m 172.42.42.101 kworker1.example.com <none> <none>
kube-flannel-ds-amd64-cmn25 1/1 Running 1 160m 172.42.42.102 kworker2.example.com <none> <none>
kube-flannel-ds-amd64-p8bst 1/1 Running 0 164m 172.42.42.100 kmaster.example.com <none> <none>
kube-proxy-2bjsd 1/1 Running 0 160m 172.42.42.102 kworker2.example.com <none> <none>
kube-proxy-qvsws 1/1 Running 0 162m 172.42.42.101 kworker1.example.com <none> <none>
kube-proxy-wsv4m 1/1 Running 0 164m 172.42.42.100 kmaster.example.com <none> <none>
kube-scheduler-kmaster.example.com 1/1 Running 0 163m 172.42.42.100 kmaster.example.com <none> <none>
kubernetes-dashboard-6bcfdf8d9d-6mwbx 1/1 Running 0 2m38s 10.244.2.5 kworker2.example.com <none> <none>
monitoring-influxdb-7db9fd7459-29dht 1/1 Running 0 3m39s 10.244.2.4 kworker2.example.com <none> <none>
Any help is appreciated. Thanks in advance.
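A quick sanity check before assuming the dashboard itself is broken is to confirm which NodePort and protocol the service actually exposes; the dashboard only serves HTTPS on 8443, so the URL has to be https:// against a node that the NodePort maps to (service name and namespace assume the kube-system deployment used in the video):

kubectl -n kube-system get svc kubernetes-dashboard -o wide
# then test from the host, accepting the self-signed certificate
curl -k https://kworker1:32332/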

Kubernetes in lxc is not starting

Describe the bug
Kubernetes is not connecting after a restart; how can I fix that?

How To Reproduce
Install LXC containers on an Ubuntu 20 machine, with the LXC image also being Ubuntu 20.
Restart the host machine (the VM where LXC is running).

Expected behavior
It should work correctly, but it is not starting.

Environment (please complete the following information):

osboxes@osboxes:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.10
Release:        20.10
Codename:       groovy

Additional context
I am running Kubernetes on LXC containers for testing the services. I deployed it using these steps.

It worked fine, but after a restart of the Ubuntu machine (the host of the LXC containers) I am getting this error on the Ubuntu machine and also on the Kubernetes master.

root@kmaster:~# kubectl cluster-info

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server 10.199.242.17:6443 was refused - did you specify the right host or port?

Any suggestions on how to fix this? I tried multiple steps suggested in many posts, like restarting the kubelet, copying the config file again, restarting the containerd service, and running the swapoff command. Someone suggested checking the docker service, but Kubernetes 1.21 is using containerd.

journalctl -xeu kubelet

--
-- The job identifier is 4815 and the job result is done.
Jul 14 13:04:52 kmaster systemd[1]: Started kubelet: The Kubernetes Node Agent.
-- Subject: A start job for unit kubelet.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit kubelet.service has finished successfully.
--
-- The job identifier is 4815.
Jul 14 13:04:52 kmaster kubelet[1903]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/>
Jul 14 13:04:52 kmaster kubelet[1903]: I0714 13:04:52.555232    1903 server.go:197] "Warning: For remote container runtime, --pod-infra-container-image is ignored in kubelet, which should be set in that remot>
Jul 14 13:04:52 kmaster kubelet[1903]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/>
Jul 14 13:04:52 kmaster kubelet[1903]: I0714 13:04:52.565056    1903 server.go:440] "Kubelet version" kubeletVersion="v1.21.0"
Jul 14 13:04:52 kmaster kubelet[1903]: I0714 13:04:52.565288    1903 server.go:851] "Client rotation is on, will bootstrap in background"
Jul 14 13:04:52 kmaster kubelet[1903]: I0714 13:04:52.566362    1903 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 14 13:04:52 kmaster kubelet[1903]: I0714 13:04:52.567322    1903 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt
Jul 14 13:04:57 kmaster kubelet[1903]: I0714 13:04:57.574623    1903 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Jul 14 13:04:57 kmaster kubelet[1903]: I0714 13:04:57.575134    1903 container_manager_linux.go:278] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 14 13:04:57 kmaster kubelet[1903]: I0714 13:04:57.575272    1903 container_manager_linux.go:283] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName:>
Jul 14 13:04:57 kmaster kubelet[1903]: I0714 13:04:57.575392    1903 topology_manager.go:120] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Jul 14 13:04:57 kmaster kubelet[1903]: I0714 13:04:57.575459    1903 container_manager_linux.go:314] "Initializing Topology Manager" policy="none" scope="container"
Jul 14 13:04:57 kmaster kubelet[1903]: I0714 13:04:57.575523    1903 container_manager_linux.go:319] "Creating device plugin manager" devicePluginEnabled=true
Jul 14 13:04:57 kmaster kubelet[1903]: I0714 13:04:57.575671    1903 util_unix.go:103] "Using this format as endpoint is deprecated, please consider using full url format." deprecatedFormat="/run/containerd/c>
Jul 14 13:04:57 kmaster kubelet[1903]: I0714 13:04:57.575764    1903 remote_runtime.go:62] parsed scheme: ""
Jul 14 13:04:57 kmaster kubelet[1903]: I0714 13:04:57.575822    1903 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
Jul 14 13:04:57 kmaster kubelet[1903]: I0714 13:04:57.575916    1903 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
Jul 14 13:04:57 kmaster kubelet[1903]: I0714 13:04:57.575979    1903 clientconn.go:948] ClientConn switching balancer to "pick_first"
Jul 14 13:04:57 kmaster kubelet[1903]: I0714 13:04:57.576062    1903 util_unix.go:103] "Using this format as endpoint is deprecated, please consider using full url format." deprecatedFormat="/run/containerd/c>
Jul 14 13:04:57 kmaster kubelet[1903]: I0714 13:04:57.576138    1903 remote_image.go:50] parsed scheme: ""
Jul 14 13:04:57 kmaster kubelet[1903]: I0714 13:04:57.576199    1903 remote_image.go:50] scheme "" not registered, fallback to default scheme
Jul 14 13:04:57 kmaster kubelet[1903]: I0714 13:04:57.576259    1903 passthrough.go:4
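Because LXC containers share the host kernel, the modules and sysctls the k8s profile relies on disappear after a host reboot and have to be reapplied before the control plane can come back. A hedged checklist:

# on the LXC host: reload what the containers expect from the kernel
sudo modprobe -a br_netfilter ip_tables ip6_tables nf_nat overlay
# inside the master container: are the runtime and kubelet actually up?
lxc exec kmaster -- systemctl status containerd kubelet --no-pager
lxc exec kmaster -- crictl ps -a   # requires crictl to be pointed at the containerd socket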

503 Service Unavailable

Hi.
Following your video tutorial on YouTube ([ Kube 31 ] Set up Nginx Ingress in Kubernetes Bare Metal), I have completed all the steps:

  1. Haproxy
  2. ingress controller
  3. Ingress resources
  4. Service
  5. Deployment
  6. Added the HAProxy IP address to /etc/hosts

When I enter http://nginx.example.com/ in my browser, I get the error: 503 Service Unavailable - No server is available to handle this request.

Could you help me out?
Thanks!
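That particular 503 page is generated by HAProxy itself, which usually means its health checks against the ingress controller nodes are failing rather than anything inside the cluster. A hedged way to split the problem in half (namespace and worker IP are assumptions; adjust to your setup):

# is the ingress controller actually running, and on which nodes/ports?
kubectl get pods,svc -n ingress-nginx -o wide
# bypass HAProxy and hit a worker node directly with the right Host header
curl -H 'Host: nginx.example.com' http://<worker-node-ip>/
# if that works, the problem is the HAProxy backend definition (ports/IPs), not the ingress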

Unable to start the pod when using daemonset

Hi All,
I am using the configuration provided by you for nginx-ingress. After creating the daemonset, I am getting the error below:

W1109 06:15:33.387442 1 main.go:284] The '-use-ingress-class-only' flag will be deprecated and has no effect on versions of kubernetes >= 1.18.0. Processing ONLY resources that have the 'ingressClassName' field in Ingress equal to the class.
F1109 06:15:33.390807 1 main.go:288] Error when getting IngressClass nginx: ingressclasses.networking.k8s.io "nginx" is forbidden: User "system:serviceaccount:nginx-ingress:nginx-ingress" cannot get resource "ingressclasses" in API group "networking.k8s.io" at the cluster scope
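The message is an RBAC problem: the controller's service account is not allowed to read IngressClass objects, which newer (1.18+) rbac manifests grant explicitly. A hedged check before re-applying the matching rbac.yaml:

# does the controller's service account have the permission the log complains about?
kubectl auth can-i get ingressclasses.networking.k8s.io \
  --as=system:serviceaccount:nginx-ingress:nginx-ingress
# "no" here means the ClusterRole in use predates the ingressclasses rule; re-apply the
# rbac manifest that matches the controller image version you deployed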

Vagrant is using IPs in public address space: 172.42.42.10[0-2]

The IPs 172.42.42.10[0-2] that are used within the Vagrantfile are actually public IPs that resolve to T-Mobile.

$ whois 172.42.42.100 |grep OrgName
OrgName: T-Mobile USA, Inc.

To reproduce, just run vagrant up with the default Vagrantfile.

Expected behavior: I would have expected something in the 172.16.0.0/12 range (172.16.0.0 - 172.31.255.255), which is reserved for private networks.

Nginx ingress controller not working with namespace

Describe the bug
Following the tutorial at https://www.youtube.com/watch?v=2VUQ4WjLxDg, everything works as expected. However, if we add a namespace to the deployments we get '503 Service Temporarily Unavailable' on all paths: '/', '/green', '/blue'.

How To Reproduce
create new namespace:

kubectl create namespace example

install nginx-ingress:

helm install my-conf-nginx --namespace example  stable/nginx-ingress

create resources:

kubectl create -f ingress-resource-3.yaml -f nginx-deploy-blue.yaml -f nginx-deploy-green.yaml -f nginx-deploy-main.yaml

Resources:
ingress-resource-3.yaml

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  namespace: example
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: ingress-resource-3
spec:
  rules:
  - host: fry.lab.uvalight.net
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-deploy-main
          servicePort: 80  
      - path: /blue
        backend:
          serviceName: nginx-deploy-blue
          servicePort: 80
      - path: /green
        backend:
          serviceName: nginx-deploy-green
          servicePort: 80

nginx-deploy-blue.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: example
  labels:
    run: nginx
  name: nginx-deploy-blue
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx-blue
  template:
    metadata:
      labels:
        run: nginx-blue
    spec:
      volumes:
      - name: webdata
        emptyDir: {}
      initContainers:
      - name: web-content
        image: busybox
        volumeMounts:
        - name: webdata
          mountPath: "/webdata"
        command: ["/bin/sh", "-c", 'echo "<h1>I am <font color=blue>BLUE</font></h1>" > /webdata/index.html']
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: webdata
          mountPath: "/usr/share/nginx/html"

nginx-deploy-green.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: example
  labels:
    run: nginx
  name: nginx-deploy-green
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx-green
  template:
    metadata:
      labels:
        run: nginx-green
    spec:
      volumes:
      - name: webdata
        emptyDir: {}
      initContainers:
      - name: web-content
        image: busybox
        volumeMounts:
        - name: webdata
          mountPath: "/webdata"
        command: ["/bin/sh", "-c", 'echo "<h1>I am <font color=green>GREEN</font></h1>" > /webdata/index.html']
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: webdata
          mountPath: "/usr/share/nginx/html"

nginx-deploy-main.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: example
  labels:
    run: nginx
  name: nginx-deploy-main
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx-main
  template:
    metadata:
      labels:
        run: nginx-main
    spec:
      containers:
      - image: nginx
        name: nginx

Expose ports:

kubectl expose deploy nginx-deploy-blue --port 80 -n example
kubectl expose deploy nginx-deploy-green --port 80 -n example
kubectl expose deploy nginx-deploy-main --port 80 -n example

Expected behavior
For each path on the browser to show the corresponding nginx index.html

Screenshots (if any)

Environment (please complete the following information):

kubectl version --short
Client Version: v1.18.2
Server Version: v1.18.2
helm version --short
v3.2.0+ge11b7ce

Additional context

Traefik Ingress Controller gives 'connection timed out' error

Hi, I have been following your series on the Traefik ingress controller. In the second video, when you deploy the IngressRoute and check it on the load balancer IP it works for you, but for me I am getting a 'connection timed out' error every time.

I have deployed on VirtualBox. In my cluster I have 1 master and 2 worker nodes. I used the Helm deployment and followed all the steps as you suggested.
I have added the load balancer IP to the hosts file.
Kubernetes version: 1.22.2
Helm version: 3.7.0
Traefik version: 2.5.3

Screenshots (if any)

(screenshots attached, including the pod definition)

As I am new to Kubernetes, can you please help me understand where I am doing it wrong?
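A hedged first step is to see what address the Traefik service actually received and to test it without relying on DNS or the hosts file (namespace, hostname and IP below are placeholders for your setup):

kubectl get svc traefik -n default -o wide
# if EXTERNAL-IP is <pending>, nothing on VirtualBox is assigning LoadBalancer IPs (e.g. MetalLB is missing or misconfigured)
curl -v -H 'Host: app.example.com' http://<loadbalancer-or-node-ip>/
# a timeout here as well points at the network path to the nodes rather than at the IngressRoute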

The connection to the server localhost:8080 was refused - did you specify the right host or port?

Describe the bug
When I run:

$ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Same error when I run it with sudo as well:

$ sudo kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Environment (please complete the following information):
Ubuntu 20.04

Additional context
After watching the YouTube video, I realised that I don't have a .kube directory with the right config in it.
This step is missing in kubernetes/docs/install-cluster-ubuntu-20.md.
Resolution

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

"Peer's Certificate issuer is not recognized."

Describe the bug
Since I am behind our corporate VPN, internet access seems to be restricted from within Vagrant unless I install our corporate SSL cert.

How To Reproduce

vagrant up
Expected behavior

Screenshots (if any)
kmaster: [TASK 4] Disable SELinux
kmaster: [TASK 5] Stop and Disable firewalld
kmaster: [TASK 6] Add sysctl settings
kmaster: [TASK 7] Disable and turn off SWAP
kmaster: [TASK 8] Add yum repo file for kubernetes
kmaster: [TASK 9] Install Kubernetes (kubeadm, kubelet and kubectl)
kmaster: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64/repodata/repomd.xml: [Errno 14] curl#60 - "Peer's Certificate issuer is not recognized."
kmaster: Trying other mirror.
kmaster: It was impossible to connect to the CentOS servers.
kmaster: This could mean a connectivity issue in your environment, such as the requirement to configure a proxy,
kmaster: or a transparent proxy that tampers with TLS security, or an incorrect system clock.
kmaster: You can try to solve this issue by using the instructions on https://wiki.centos.org/yum-errors
kmaster: If above article doesn't help to resolve this issue please use https://bugs.centos.org/.
kmaster:
kmaster:
kmaster: One of the configured repositories failed (Kubernetes),
kmaster: and yum doesn't have enough cached data to continue. At this point the only
kmaster: safe thing yum can do is fail. There are a few ways to work "fix" this:
kmaster:
kmaster: 1. Contact the upstream for the repository and get them to fix the problem.
kmaster:
kmaster: 2. Reconfigure the baseurl/etc. for the repository, to point to a working
kmaster: upstream. This is most often useful if you are using a newer
kmaster: distribution release than is supported by the repository (and the
kmaster: packages for the previous distribution release still work).
kmaster:
kmaster: 3. Run the command with the repository temporarily disabled
kmaster: yum --disablerepo=kubernetes ...
kmaster:
kmaster: 4. Disable the repository permanently, so yum won't use it by default. Yum
kmaster: will then just ignore the repository until you permanently enable it
kmaster: again or use --enablerepo for temporary usage:
kmaster:
kmaster: yum-config-manager --disable kubernetes
kmaster: or
kmaster: subscription-manager repos --disable=kubernetes
kmaster:
kmaster: 5. Configure the failing repository to be skipped, if it is unavailable.
kmaster: Note that yum will try to contact the repo. when it runs most commands,
kmaster: so will have to try and fail each time (and thus. yum will be be much
kmaster: slower). If it is a very temporary problem though, this is often a nice
kmaster: compromise:
kmaster:
kmaster: yum-config-manager --save --setopt=kubernetes.skip_if_unavailable=true
kmaster:
kmaster: failure: repodata/repomd.xml from kubernetes: [Errno 256] No more mirrors to try.
kmaster: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64/repodata/repomd.xml: [Errno 14] curl#60 - "Peer's Certificate issuer is not recognized."
kmaster: [TASK 10] Enable and start kubelet service
kmaster: Failed to execute operation: No such file or directory
kmaster: Failed to start kubelet.service: Unit not found.
kmaster: [TASK 11] Enable ssh password authentication
kmaster: [TASK 12] Set root password
kmaster: Changing password for user root.
kmaster: passwd: all authentication tokens updated successfully.
==> kmaster: Running provisioner: shell...
kmaster: Running: /var/folders/jr/kc1rdmj10jb4p1hrw77zttq00000gn/T/vagrant-shell20190427-41761-wmvcx4.sh
kmaster: [TASK 1] Initialize Kubernetes Cluster
kmaster: /tmp/vagrant-shell: line 5: kubeadm: command not found
kmaster: [TASK 2] Copy kube admin config to Vagrant user .kube directory
kmaster: cp: cannot stat ‘/etc/kubernetes/admin.conf’: No such file or directory
kmaster: [TASK 3] Deploy flannel network
kmaster: -bash: kubectl: command not found
kmaster: [TASK 4] Generate and save cluster join command to /joincluster.sh
kmaster: /tmp/vagrant-shell: line 19: kubeadm: command not found
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
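A common workaround in this situation is to add the corporate root CA to the guest's trust store before any yum or kubeadm steps run, e.g. early in the bootstrap script (the certificate file name below is hypothetical; /vagrant is the default synced folder on a CentOS 7 guest):

# inside the CentOS 7 guest, before the yum installs
sudo cp /vagrant/corporate-root-ca.crt /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust extract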

K8s on LXC using Calico

Hello,
Can you suggest what changes are required to the LXC bootstrap files so that the cluster can use Calico?
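A rough sketch of the usual changes: in the master bootstrap script, initialise with Calico's default pod CIDR, apply the Calico manifest instead of flannel, and make sure the extra kernel modules Calico needs are loaded on the LXC host (the module list and manifest URL are assumptions for a recent Calico release):

# in the kmaster bootstrap script
kubeadm init --pod-network-cidr=192.168.0.0/16
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
# on the LXC host (the containers share this kernel)
sudo modprobe -a ipip ip_set xt_set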

Vagrant Provisioning not working.

Describe the bug
I started seeing issues with Vagrant provisioning recently: when we run vagrant up, after a while the host loses its internet connection and eventually provisioning fails. This happens every time.

How To Reproduce
Software used:
virtualbox 6.1.4-2
community/vagrant 2.2.7-2 [installed]
OS : 4.19.108-1-MANJARO

Expected behavior
Successful cluster provisioning.

Screenshots (if any)

Environment (please complete the following information):

Additional context

Vagrant provision error

Describe the bug
I tried the vagrant provision for local testing and got the following error:
kmaster: [TASK 4] Generate and save cluster join command to /joincluster.sh
kmaster: W1222 03:17:25.649380 8679 validation.go:28] Cannot validate kube-proxy config - no validator is available
kmaster: W1222 03:17:25.649424 8679 validation.go:28] Cannot validate kubelet config - no validator is available

How To Reproduce
Simply navigate to the vagrant-provision folder and run vagrant up.

Expected behavior
There shouldn't be any errors.

Environment (please complete the following information):
MacOS Catalina
Vagrant 2.2.4
Vbox 6.0.14

Flannel does not start up after container reboot

Describe the bug
I am using the bootstrap code to set up a k8s cluster on Ubuntu:
https://github.com/justmeandopensource/kubernetes/tree/master/lxd-provisioning
It comes up successfully.

But when I restart the cluster using LXC, the flannel network fails and does not come up.

How To Reproduce
./kubelx stop
./kubelx start

Expected behavior
The flannel network should come back online when the container cluster restarts.

Additional context
I am trying to stop the containers to gracefully shut down my base Ubuntu machine, so I can power it back on whenever I need to practice, avoiding re-provisioning the cluster.
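A hedged way to see why flannel stays down after ./kubelx start (daemonset name and label taken from the stock kube-flannel manifest):

lxc exec kmaster -- kubectl -n kube-system get pods -o wide
lxc exec kmaster -- kubectl -n kube-system logs ds/kube-flannel-ds --tail=20
# flannel often recovers once its pods are recreated after the node network is back up
lxc exec kmaster -- kubectl -n kube-system delete pod -l app=flannel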

I think the nodes cannot connect to each other

Hello,

I just found this video through Google; it is a great video, but I don't know why mongo-1 is PRIMARY, mongo-2 is STARTUP, and mongo-0 is not reachable/healthy. I think the connection is having a problem.

   "members" : [
            {
                    "_id" : 0,
                    "name" : "mongo-1:27017",
                    "health" : 1,
                    "state" : 1,
                    "stateStr" : "PRIMARY",
                    "uptime" : 4712,
                    "optime" : {
                            "ts" : Timestamp(1595500691, 1),
                            "t" : NumberLong(1)
                    },
                    "optimeDate" : ISODate("2020-07-23T10:38:11Z"),
                    "syncingTo" : "",
                    "syncSourceHost" : "",
                    "syncSourceId" : -1,
                    "infoMessage" : "",
                    "electionTime" : Timestamp(1595496821, 2),
                    "electionDate" : ISODate("2020-07-23T09:33:41Z"),
                    "configVersion" : 3,
                    "self" : true,
                    "lastHeartbeatMessage" : ""
            },
            {
                    "_id" : 1,
                    "name" : "mongo-2.mongo:27017",
                    "health" : 1,
                    "state" : 0,
                    "stateStr" : "STARTUP",
                    "uptime" : 860,
                    "optime" : {
                            "ts" : Timestamp(0, 0),
                            "t" : NumberLong(-1)
                    },
                    "optimeDurable" : {
                            "ts" : Timestamp(0, 0),
                            "t" : NumberLong(-1)
                    },
                    "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
                    "optimeDurableDate" : ISODate("1970-01-01T00:00:00Z"),
                    "lastHeartbeat" : ISODate("2020-07-23T10:38:12.018Z"),
                    "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
                    "pingMs" : NumberLong(0),
                    "lastHeartbeatMessage" : "",
                    "syncingTo" : "",
                    "syncSourceHost" : "",
                    "syncSourceId" : -1,
                    "infoMessage" : "",
                    "configVersion" : -2
            },
            {
                    "_id" : 2,
                    "name" : "mongo-0.mongo:27017",
                    "health" : 0,
                    "state" : 8,
                    "stateStr" : "(not reachable/healthy)",
                    "uptime" : 0,
                    "optime" : {
                            "ts" : Timestamp(0, 0),
                            "t" : NumberLong(-1)
                    },
                    "optimeDurable" : {
                            "ts" : Timestamp(0, 0),
                            "t" : NumberLong(-1)
                    },
                    "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
                    "optimeDurableDate" : ISODate("1970-01-01T00:00:00Z"),
                    "lastHeartbeat" : ISODate("2020-07-23T10:38:12.828Z"),
                    "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
                    "pingMs" : NumberLong(0),
                    "lastHeartbeatMessage" : "replica set IDs do not match, ours: 5f1959757d0b2b30d6617970; remote node's: 5f1958ac719b97abf068bd4c",
                    "syncingTo" : "",
                    "syncSourceHost" : "",
                    "syncSourceId" : -1,
                    "infoMessage" : "",
                    "configVersion" : -1
            }
    ],
    "ok" : 1,
    "$clusterTime" : {
            "clusterTime" : Timestamp(1595500691, 1),
            "signature" : {
                    "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                    "keyId" : NumberLong(0)
            }
    },
    "operationTime" : Timestamp(1595500691, 1)

You can see the three mongo pods are running:

[root@sealos01 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
mongo-0 1/1 Running 0 83m
mongo-1 1/1 Running 0 82m
mongo-2 1/1 Running 0 82m

And the mongo service is here too:

[root@sealos01 ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
helloworld-v1 NodePort 10.100.78.146 80:30303/TCP 17d
kubernetes ClusterIP 10.96.0.1 443/TCP 44d
mongo ClusterIP None 27017/TCP 79m

Thanks
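The lastHeartbeatMessage on mongo-0 ("replica set IDs do not match") means that member was initialised against a different replica set than the current primary; the usual fix is to wipe that member's data so it performs a fresh initial sync. A hedged sketch, assuming a StatefulSet named mongo with one PVC per pod:

kubectl get pvc | grep mongo-0          # find the volume backing mongo-0
kubectl delete pod mongo-0              # quick retry first
# if the same error comes back, delete mongo-0's PVC as well and then delete the pod,
# so the member starts empty and resyncs from the primary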

LXC/LXD - Kubernetes fails to load

I have an issue running Kubernetes in an LXD environment. My host machine is Ubuntu 20.04, and the Kubernetes components are failing inside the LXD containers.

Could you help me to fix this?
$ lxc profile list
+---------+----------------------------+---------+
| NAME | DESCRIPTION | USED BY |
+---------+----------------------------+---------+
| default | Default LXD profile | 0 |
+---------+----------------------------+---------+
| k8s | LXD profile for Kubernetes | 3 |
+---------+----------------------------+---------+
$ lxc profile show k8s
config:
  limits.cpu: "2"
  limits.memory: 2GB
  limits.memory.swap: "false"
  linux.kernel_modules: ip_tables,ip6_tables,nf_nat,overlay,br_netfilter
  raw.lxc: "lxc.apparmor.profile=unconfined\nlxc.cap.drop= \nlxc.cgroup.devices.allow=a\nlxc.mount.auto=proc:rw sys:rw"
  security.nesting: "true"
  security.privileged: "true"
description: LXD profile for Kubernetes
devices:
  enp179s0f0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: k8s
used_by:
- /1.0/instances/kmaster
- /1.0/instances/kworker1
- /1.0/instances/kworker2
$

$ ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
ether 02:42:26:10:33:a3 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

enp179s0f0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.232.16.52 netmask 255.255.255.0 broadcast 10.232.16.255
inet6 fe80::735:e4df:66ac:8c7a prefixlen 64 scopeid 0x20
ether 0c:9d:92:20:47:d5 txqueuelen 1000 (Ethernet)
RX packets 5602060 bytes 6567487432 (6.5 GB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1029260 bytes 84934009 (84.9 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

enp179s0f1: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
ether 0c:9d:92:20:47:d6 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10
loop txqueuelen 1000 (Local Loopback)
RX packets 18087 bytes 1655238 (1.6 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 18087 bytes 1655238 (1.6 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lxdbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.180.46.1 netmask 255.255.255.0 broadcast 0.0.0.0
inet6 fe80::216:3eff:feb8:3230 prefixlen 64 scopeid 0x20
ether 00:16:3e:b8:32:30 txqueuelen 1000 (Ethernet)
RX packets 536636 bytes 34308840 (34.3 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 916693 bytes 4944232652 (4.9 GB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth8cf35fe0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether f6:ff:6a:c3:bd:97 txqueuelen 1000 (Ethernet)
RX packets 39639 bytes 5072701 (5.0 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 66630 bytes 380217792 (380.2 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth915b923b: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 4a:68:02:9d:d7:33 txqueuelen 1000 (Ethernet)
RX packets 22021 bytes 1867814 (1.8 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 37891 bytes 192371371 (192.3 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

vethb6f7566a: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 7e:db:0d:b3:da:a1 txqueuelen 1000 (Ethernet)
RX packets 21477 bytes 1672949 (1.6 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 35168 bytes 191387372 (191.3 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

$

$ ./kubelx provision

Provisioning Kubernetes Cluster...

==> Bringing up kmaster
Creating kmaster
Starting kmaster
==> Running provisioner script
[TASK 1] Install containerd runtime
[TASK 2] Add apt repo for kubernetes
[TASK 3] Install Kubernetes components (kubeadm, kubelet and kubectl)
[TASK 4] Enable ssh password authentication
[TASK 5] Set root password
[TASK 6] Install additional packages
[TASK 7] Pull required containers
[TASK 8] Initialize Kubernetes Cluster
[TASK 9] Copy kube admin config to root user .kube directory
[TASK 10] Deploy Flannel network
[TASK 11] Generate and save cluster join command to /joincluster.sh

==> Bringing up kworker1
Creating kworker1
Starting kworker1
==> Running provisioner script
[TASK 1] Install containerd runtime
[TASK 2] Add apt repo for kubernetes
[TASK 3] Install Kubernetes components (kubeadm, kubelet and kubectl)
[TASK 4] Enable ssh password authentication
[TASK 5] Set root password
[TASK 6] Install additional packages
[TASK 7] Join node to Kubernetes Cluster

==> Bringing up kworker2
Creating kworker2
Starting kworker2
==> Running provisioner script
[TASK 1] Install containerd runtime
[TASK 2] Add apt repo for kubernetes
[TASK 3] Install Kubernetes components (kubeadm, kubelet and kubectl)
[TASK 4] Enable ssh password authentication
[TASK 5] Set root password
[TASK 6] Install additional packages
[TASK 7] Join node to Kubernetes Cluster

$ lxc list
+----------+---------+----------------------+------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+----------+---------+----------------------+------+-----------+-----------+
| kmaster | RUNNING | 10.180.46.7 (eth0) | | CONTAINER | 0 |
+----------+---------+----------------------+------+-----------+-----------+
| kworker1 | RUNNING | 10.180.46.11 (eth0) | | CONTAINER | 0 |
+----------+---------+----------------------+------+-----------+-----------+
| kworker2 | RUNNING | 10.180.46.216 (eth0) | | CONTAINER | 0 |
+----------+---------+----------------------+------+-----------+-----------+
$ lxc exec kmaster bash
root@kmaster:# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kmaster Ready control-plane,master 6m30s v1.22.0
kworker1 Ready 5m24s v1.22.0
kworker2 Ready 4m20s v1.22.0
root@kmaster:~# kubectl get all -n kube-system
NAME READY STATUS RESTARTS AGE
pod/coredns-78fcd69978-4wm27 0/1 ContainerCreating 0 6m19s
pod/coredns-78fcd69978-vr85t 0/1 ContainerCreating 0 6m19s
pod/etcd-kmaster 1/1 Running 0 6m35s
pod/kube-apiserver-kmaster 1/1 Running 0 6m34s
pod/kube-controller-manager-kmaster 1/1 Running 0 6m28s
pod/kube-flannel-ds-gcngr 0/1 CrashLoopBackOff 4 (17s ago) 4m27s
pod/kube-flannel-ds-mg6gn 0/1 CrashLoopBackOff 5 (20s ago) 6m19s
pod/kube-flannel-ds-r68mp 0/1 CrashLoopBackOff 4 (75s ago) 5m31s
pod/kube-proxy-72r4d 0/1 CrashLoopBackOff 5 (78s ago) 4m27s
pod/kube-proxy-hkf2c 0/1 CrashLoopBackOff 5 (2m32s ago) 5m31s
pod/kube-proxy-qd9vg 0/1 CrashLoopBackOff 6 (41s ago) 6m19s
pod/kube-scheduler-kmaster 1/1 Running 0 6m28s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kube-dns ClusterIP 10.96.0.10 53/UDP,53/TCP,9153/TCP 6m34s

NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/kube-flannel-ds 3 3 0 3 0 6m33s
daemonset.apps/kube-proxy 3 3 0 3 0 kubernetes.io/os=linux 6m34s

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/coredns 0/2 2 0 6m34s

NAME DESIRED CURRENT READY AGE
replicaset.apps/coredns-78fcd69978 2 2 0 6m19s
root@kmaster:~#
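With the nodes Ready but kube-proxy and flannel in CrashLoopBackOff, the crash logs usually name the problem; in LXD the most common one is kube-proxy failing to write conntrack sysctls, which are read-only inside the container. A hedged sketch:

kubectl -n kube-system logs ds/kube-proxy --tail=20
kubectl -n kube-system logs ds/kube-flannel-ds --tail=20
# if kube-proxy complains about nf_conntrack_max being read-only, raise it on the LXD host
# (the containers share the host kernel) and let the pods restart
sudo sysctl -w net.netfilter.nf_conntrack_max=131072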

docker based cluster fails with health check on https://127.0.0.1:10259/healthz

Hi all,
I was able to create a perfectly healthy cluster in the past.
Recent attempts have failed though, as is visible if you run "kubectl get componentstatus":

[vagrant@kmaster ~]$ kubectl get componentstatus
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Unhealthy Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager Unhealthy Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
etcd-0 Healthy {"health":"true"}
[vagrant@kmaster ~]$

I tried to modify the manifest file by replacing the --bind-address of 127.0.0.1 with 172.16.16.100, and "describe" does show the correct URLs for the checks, but the "componentstatus" side still shows the 127... IP.
Am I doing anything wrong?

Below is the describe output for the component; as you can see, the URL now points to 172.16.16.100, but componentstatus still returns a URL of 127.0.0.1.

 Requests:
  cpu:        100m
Liveness:     http-get https://172.16.16.100:10259/healthz delay=10s timeout=15s period=10s #success=1 #failure=8
Startup:      http-get https://172.16.16.100:10259/healthz delay=10s timeout=15s period=10s #success=1 #failure=24
Environment:  <none>
Mounts:
  /etc/kubernetes/scheduler.conf from kubeconfig (ro)

Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kubeconfig:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/scheduler.conf
HostPathType: FileOrCreate
QoS Class: Burstable
Node-Selectors:
Tolerations: :NoExecute op=Exists
Events:
Type Reason Age From Message


Normal Pulled 9m22s kubelet Container image "k8s.gcr.io/kube-scheduler:v1.20.1" already present on machine
Normal Created 9m22s kubelet Created container kube-scheduler
Normal Started 9m22s kubelet Started container kube-scheduler
Warning FailedCreatePodSandBox 9m22s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create a sandbox for pod "kube-scheduler-kmaster.example.com": Error response from daemon: Conflict. The container name "/k8s_POD_kube-scheduler-kmaster.example.com_kube-system_6085d2b9fd8b0ad13cdee3fce30ee195_0" is already in use by container "07b1511d57b84d6b74802576cb157ce8b69cdc7fe15efd9235bafa629cac18be". You have to remove (or rename) that container to be able to reuse that name.
Normal SandboxChanged 5m28s kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulled 5m28s kubelet Container image "k8s.gcr.io/kube-scheduler:v1.20.1" already present on machine
Normal Created 5m28s kubelet Created container kube-scheduler
Normal Started 5m28s kubelet Started container kube-scheduler
[vagrant@kmaster ~]$

etcd cluster issue: "tls: first record does not look like a TLS handshake"

Hi,
I have followed your document guide and only changed the IP addresses to match my environment.
When I try to start the etcd cluster I get the errors below:

Apr 07 09:35:10 master1 etcd[4769]: rejected connection from "x.x.x.x:41834" (error "tls: first record does not look like a TLS handshake", ServerName "")
Apr 07 09:35:03 master2 etcd[4758]: rejected connection from "x.x.x.x:37848" (error "tls: first record does not look like a TLS handshake", ServerName "")
Apr 07 09:34:42 master3 etcd[4814]: rejected connection from "x.x.x.x:58962" (error "tls: first record does not look like a TLS handshake", ServerName "")

Thanks,
Soklang
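That etcd message usually means something is talking plain HTTP to a port etcd serves TLS on, or the peer/client URLs mix http and https across members. A hedged pair of checks (the unit path and cert paths are assumptions; match them to your setup):

grep -E 'listen-(peer|client)-urls|initial-cluster' /etc/systemd/system/etcd.service
ETCDCTL_API=3 etcdctl --endpoints=https://<master1-ip>:2379 \
  --cacert=/etc/etcd/ca.pem --cert=/etc/etcd/etcd.pem --key=/etc/etcd/etcd-key.pem \
  endpoint health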

k8s vagrant

Hi,

I am using this doc https://github.com/justmeandopensource/kubernetes/blob/master/docs/install-cluster-ubuntu-20.md to create a master and a worker node on Vagrant.

There are these errors in the coredns logs:

root@kmaster:/home/vagrant# kubectl logs -nkube-system coredns-74ff55c5b-d6cnd
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.7.0
linux/amd64, go1.14.4, f59c03d
[ERROR] plugin/errors: 2 459467323915957403.8089760168811078997. HINFO: read udp 192.168.189.2:46829->10.0.2.3:53: i/o timeout
[ERROR] plugin/errors: 2 459467323915957403.8089760168811078997. HINFO: read udp 192.168.189.2:58568->10.0.2.3:53: i/o timeout
[ERROR] plugin/errors: 2 459467323915957403.8089760168811078997. HINFO: read udp 192.168.189.2:55245->10.0.2.3:53: i/o timeout
[ERROR] plugin/errors: 2 459467323915957403.8089760168811078997. HINFO: read udp 192.168.189.2:37120->10.0.2.3:53: i/o timeout
[ERROR] plugin/errors: 2 459467323915957403.8089760168811078997. HINFO: read udp 192.168.189.2:47361->10.0.2.3:53: i/o timeout
[ERROR] plugin/errors: 2 459467323915957403.8089760168811078997. HINFO: read udp 192.168.189.2:40080->10.0.2.3:53: i/o timeout
[ERROR] plugin/errors: 2 459467323915957403.8089760168811078997. HINFO: read udp 192.168.189.2:44235->10.0.2.3:53: i/o timeout
[ERROR] plugin/errors: 2 459467323915957403.8089760168811078997. HINFO: read udp 192.168.189.2:57478->10.0.2.3:53: i/o timeout
[ERROR] plugin/errors: 2 459467323915957403.8089760168811078997. HINFO: read udp 192.168.189.2:47970->10.0.2.3:53: i/o timeout
[ERROR] plugin/errors: 2 459467323915957403.8089760168811078997. HINFO: read udp 192.168.189.2:44119->10.0.2.3:53: i/o timeout

Below are the IPs on the Vagrant master:

root@kmaster:/home/vagrant# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 08:00:27:be:4a:e8 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
valid_lft 85842sec preferred_lft 85842sec
inet6 fe80::a00:27ff:febe:4ae8/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 08:00:27:7d:26:99 brd ff:ff:ff:ff:ff:ff
inet 172.16.16.100/24 brd 172.16.16.255 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe7d:2699/64 scope link
valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:e7:73:2a:ea brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
5: cali6622e3eaaca@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::ecee:eeff:feee:eeee/64 scope link
valid_lft forever preferred_lft forever
6: calid5f5dd35993@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::ecee:eeff:feee:eeee/64 scope link
valid_lft forever preferred_lft forever
7: calia37defa7759@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 2
inet6 fe80::ecee:eeff:feee:eeee/64 scope link
valid_lft forever preferred_lft forever
8: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
inet 192.168.189.0/32 brd 192.168.189.0 scope global tunl0
valid_lft forever preferred_lft forever

root@kmaster:/vagrant/my-nginx/nginx/src/service# kubectl get all
NAME READY STATUS RESTARTS AGE
pod/mynginxinsta-76875f7b76-ssbr8 1/1 Running 0 84s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 443/TCP 25m
service/mynginx-service ClusterIP 10.103.32.42 80/TCP 67s

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/mynginxinsta 1/1 1 1 84s

NAME DESIRED CURRENT READY AGE
replicaset.apps/mynginxinsta-76875f7b76 1 1 1 84s

What do you think is causing the errors in the DNS logs?

Thanks a lot.
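The i/o timeouts are CoreDNS forwarding queries to 10.0.2.3, the VirtualBox NAT resolver, which is not reachable from the pod network. A hedged workaround is to forward to a public resolver instead:

# change "forward . /etc/resolv.conf" to e.g. "forward . 8.8.8.8 1.1.1.1" in the Corefile
kubectl -n kube-system edit configmap coredns
# restart CoreDNS so the new Corefile is picked up
kubectl -n kube-system rollout restart deployment coredns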

Error "The Role "kubernetes-dashboard-minimal" is invalid:"

Describe the bug
Error when using the dashboard.yaml

rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
The Role "kubernetes-dashboard-minimal" is invalid:

  • rules[0].apiGroups: Required value: resource rules must supply at least one api group
  • rules[1].apiGroups: Required value: resource rules must supply at least one api group
  • rules[2].apiGroups: Required value: resource rules must supply at least one api group
  • rules[3].apiGroups: Required value: resource rules must supply at least one api group
  • rules[4].apiGroups: Required value: resource rules must supply at least one api group
  • rules[5].apiGroups: Required value: resource rules must supply at least one api group

How To Reproduce
kubectl create -f dashboard.yaml

Expected behavior

role should be created
Screenshots (if any)

Environment (please complete the following information):

Additional context
Add any other context about the problem here.

SOLUTION:
This stops the error. Replace all instances of

  apiGroups: [""]

with

  apiGroups: ['']
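
For reference, a rule only validates when apiGroups contains at least one entry; the empty string selects the core API group. A minimal illustrative rule (the resources and verbs below are examples, not the exact dashboard rules):

rules:
  # '' (the empty string) means the core API group; the list must not be empty
  - apiGroups: ['']
    resources: ['secrets']
    verbs: ['get', 'update', 'delete']
  - apiGroups: ['']
    resources: ['configmaps']
    verbs: ['get', 'update']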

nfs provisioner

Describe the bug
error: error validating "deployment.yaml": error validating data: ValidationError(Deployment.spec): missing required field "selector" in io.k8s.api.apps.v1.DeploymentSpec; if you choose to validate=false

After converting it with kubectl convert -f, I get:

error: error validating "g.yaml": error validating data: ValidationError(Deployment.spec): missing required field "selector" in io.k8s.api.apps.v1.DeploymentSpec; if you choose to ignore these errors, turn validation off with --validate=false

How To Reproduce
Run kubectl apply -f deployment.yaml, and when I re-run it I get:

error: error validating "g.yaml": error validating data: ValidationError(Deployment.spec): missing required field "selector" in io.k8s.api.apps.v1.DeploymentSpec; if you choose to ignore these errors, turn validation off with --validate=false

I have fixed the indentation; however, the pod does not start up:

jenkins-59f6959d98-vk99q 0/1 Pending 0 13m

Describing the POD:

k describe pod jenkins-59f6959d98-vk99q
Name: jenkins-59f6959d98-vk99q
Namespace: default
Priority: 0
Node:
Labels: app.kubernetes.io/component=jenkins-master
app.kubernetes.io/instance=jenkins
app.kubernetes.io/managed-by=Tiller
app.kubernetes.io/name=jenkins
helm.sh/chart=jenkins-1.7.6
pod-template-hash=59f6959d98
Annotations: checksum/config: 4f2f321c0bce97159b5d3f0429c6a2a371f97057223c493684bfb33623756550
Status: Pending
IP:
IPs:
Controlled By: ReplicaSet/jenkins-59f6959d98
Init Containers:
copy-default-config:
Image: jenkins/jenkins:lts
Port:
Host Port:
Command:
sh
/var/jenkins_config/apply_config.sh
Limits:
cpu: 2
memory: 4Gi
Requests:
cpu: 50m
memory: 256Mi
Environment:
ADMIN_PASSWORD: <set to the key 'jenkins-admin-password' in secret 'jenkins'> Optional: false
ADMIN_USER: <set to the key 'jenkins-admin-user' in secret 'jenkins'> Optional: false
Mounts:
/tmp from tmp (rw)
/usr/share/jenkins/ref/plugins from plugins (rw)
/usr/share/jenkins/ref/secrets/ from secrets-dir (rw)
/var/jenkins_config from jenkins-config (rw)
/var/jenkins_home from jenkins-home (rw)
/var/jenkins_plugins from plugin-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from jenkins-token-s2ftg (ro)
Containers:
jenkins:
Image: jenkins/jenkins:lts
Ports: 8080/TCP, 50000/TCP
Host Ports: 0/TCP, 0/TCP
Args:
--argumentsRealm.passwd.$(ADMIN_USER)=$(ADMIN_PASSWORD)
--argumentsRealm.roles.$(ADMIN_USER)=admin
Limits:
cpu: 2
memory: 4Gi
Requests:
cpu: 50m
memory: 256Mi
Liveness: http-get http://:http/login delay=90s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:http/login delay=60s timeout=5s period=10s #success=1 #failure=3
Environment:
JAVA_OPTS:
JENKINS_OPTS:
JENKINS_SLAVE_AGENT_PORT: 50000
ADMIN_PASSWORD: <set to the key 'jenkins-admin-password' in secret 'jenkins'> Optional: false
ADMIN_USER: <set to the key 'jenkins-admin-user' in secret 'jenkins'> Optional: false
Mounts:
/tmp from tmp (rw)
/usr/share/jenkins/ref/plugins/ from plugin-dir (rw)
/usr/share/jenkins/ref/secrets/ from secrets-dir (rw)
/var/jenkins_config from jenkins-config (ro)
/var/jenkins_home from jenkins-home (rw)
/var/run/secrets/kubernetes.io/serviceaccount from jenkins-token-s2ftg (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
plugins:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit:
tmp:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit:
jenkins-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: jenkins
Optional: false
secrets-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit:
plugin-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit:
jenkins-home:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: jenkins
ReadOnly: false
jenkins-token-s2ftg:
Type: Secret (a volume populated by a Secret)
SecretName: jenkins-token-s2ftg
Optional: false
QoS Class: Burstable
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message


Warning FailedScheduling default-scheduler pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
Warning FailedScheduling default-scheduler pod has unbound immediate PersistentVolumeClaims (repeated 2 times)

Expected behavior
Deployed NFS Provisioner
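
Two separate things seem to be going on here. First, apps/v1 Deployments require an explicit spec.selector whose matchLabels match the pod template labels, which is what the validation error is complaining about. A minimal sketch of the missing piece (the label value is illustrative and must mirror whatever your template already uses):

spec:
  selector:
    matchLabels:
      app: nfs-client-provisioner   # must equal spec.template.metadata.labels
  template:
    metadata:
      labels:
        app: nfs-client-provisioner

Second, the "pod has unbound immediate PersistentVolumeClaims" warning on the Jenkins pod means its PVC has no PersistentVolume to bind to yet, which is expected until the NFS provisioner Deployment is actually running and registered as the storage class's provisioner.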

LXD-driven provisioning of k8s with kubespray

Hi Venkat,

First of all, thanks for the great playground repo - it inspires me a lot! :)

I tried to install k8s with kubespray on LXC containers, and I ran into some trouble with it.
I used the profile, but I had to disable the fail-on-swap check with an ignore_errors flag; kubespray otherwise fails on the swap check. See the diff against the kubespray repo:

diff --git a/roles/kubernetes/preinstall/tasks/0010-swapoff.yml b/roles/kubernetes/preinstall/tasks/0010-swapoff.yml
index 99587ac0..5174a4b8 100644
--- a/roles/kubernetes/preinstall/tasks/0010-swapoff.yml
+++ b/roles/kubernetes/preinstall/tasks/0010-swapoff.yml
@@ -16,3 +16,4 @@
 - name: Disable swap
   command: /sbin/swapoff -a
   when: swapon.stdout
+  ignore_errors: yes
\ No newline at end of file
(END)

I wonder why swap is always on with the given k8s-profile from here:

https://raw.githubusercontent.com/justmeandopensource/kubernetes/master/lxd-provisioning/k8s-profile-config

The free command shows swap is on:

lxc exec kmaster -- free -m
              total        used        free      shared  buff/cache   available
Mem:           1907         120         917           8         869        1787
Swap:         16335          53       16282

k8s-profile:

config:
  limits.cpu: "2"
  limits.memory: 2GB
  limits.memory.swap: "false"
  linux.kernel_modules: ip_tables,ip6_tables,netlink_diag,nf_nat,overlay,ip_vs,ip_vs_rr,ip_vs_wrr,ip_vs_sh
  raw.lxc: "lxc.apparmor.profile=unconfined\nlxc.cap.drop= \nlxc.cgroup.devices.allow=a\nlxc.mount.auto=proc:rw
    sys:rw"
  security.nesting: "true"
  security.privileged: "true"
description: LXD profile for Kubernetes
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: k8s-profile
used_by:
- /1.0/containers/ubuntu1804
- /1.0/containers/kworker1
- /1.0/containers/kmaster

In addition to that I had to add some kernel modules to the k8s-profile (ip_vs).

Furthermore, I wonder about this line from 7 days ago - what is it for?

3f55019#diff-b6b693b8386e0f20c807566b467f894fR53
Kind regards!
Matthias
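
A possibly cleaner route than patching the role with ignore_errors: the container still reports the host's swap in /proc/meminfo regardless of the limits.memory.swap setting, so the check will keep firing. If I read the kubespray defaults right, there is a variable that lets kubelet tolerate swap instead; a sketch (variable name taken from kubespray's cluster defaults, path varies between releases):

# inventory/<cluster>/group_vars/k8s-cluster/k8s-cluster.yml
# Let kubelet start even though the LXC container still reports the host's swap
kubelet_fail_swap_on: false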

CRI-O: Permission denied issue while pinging the IP of another k8s node

While trying to ping another node's IP, I am getting: ping: permission denied (are you root?)

NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default pod/ping 1/1 Running 0 5m39s 10.244.83.75 ip-172-31-28-9
kube-system pod/calico-kube-controllers-69496d8b75-xdkkm 1/1 Running 0 55m 10.244.189.2 kmaster

$ kubectl run -it --rm shell --image busybox
If you don't see a command prompt, try pressing enter.
/ # ping 10.244.189.2
PING 10.244.189.2 (10.244.189.2): 56 data bytes
ping: permission denied (are you root?)

But using the manifest file below, we are able to ping the IP of any node, even kmaster:
$ cat pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: ping
spec:
  containers:
    - name: ping-container
      image: alpine:latest
      command: ["/bin/ping", "10.244.189.2"]
      securityContext:
        capabilities:
          add:
            - NET_RAW
          drop:
            - ALL

$ kubectl apply -f pod.yml
$ kubectl logs pod/ping
PING 10.244.189.2 (10.244.189.2): 56 data bytes
64 bytes from 10.244.189.2: seq=0 ttl=62 time=0.994 ms
64 bytes from 10.244.189.2: seq=1 ttl=62 time=0.521 ms
64 bytes from 10.244.189.2: seq=2 ttl=62 time=0.562 ms
64 bytes from 10.244.189.2: seq=3 ttl=62 time=0.532 ms
64 bytes from 10.244.189.2: seq=4 ttl=62 time=0.553 ms

Here we are able to ping the IP of the master node.
@justmeandopensource Hope this will help in resolving issue.
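
For anyone else hitting this with CRI-O: unlike Docker, CRI-O's default capability set does not include NET_RAW, which is why plain ping fails unless the pod adds it back via securityContext as above. If you would rather restore the old behaviour cluster-wide, a sketch of the runtime-level change (the file location and the exact default list may vary by distro and CRI-O version):

# /etc/crio/crio.conf  (or a drop-in under /etc/crio/crio.conf.d/)
[crio.runtime]
default_capabilities = [
    "CHOWN",
    "DAC_OVERRIDE",
    "FSETID",
    "FOWNER",
    "SETGID",
    "SETUID",
    "SETPCAP",
    "NET_BIND_SERVICE",
    "KILL",
    "NET_RAW",
]

# then restart CRI-O on every node
sudo systemctl restart crio

The per-pod securityContext approach shown in the manifest is the more targeted fix; the runtime-wide change trades that precision for convenience.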

Can't complete the installation

Hello,

k8s version:
Client Version: v1.18.3
Server Version: v1.18.3

velero version:
Client:
Version: v1.0.0
Git commit: 72f5cad

CentOS version:
CentOS Linux release 7.8.2003 (Core)

When I run velero install --provider aws --bucket kubedemo --secret-file ./minio.credentials --backup-location-config region=minio,s3ForcePathStyle=true,s3Url=http://10.0.10.131:9000

Error:
Deployment/velero: attempting to create resource
An error occurred:

Error installing Velero. Use kubectl logs deploy/velero -n velero to check the deploy logs: Error creating resource Deployment/velero: the server could not find the requested resource

Thanks
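
A hedged guess: "the server could not find the requested resource" while creating Deployment/velero often means the (old) client renders its manifest against an API version the 1.18 server no longer serves; I have not verified that v1.0.0 does this, but it is easy to check what the server offers before trying a newer Velero release:

# Which Deployment API groups does the 1.18 server actually serve?
kubectl api-versions | grep -E '^(apps|extensions)'
# apps/v1 should be listed; extensions/v1beta1 will not be, since it was removed in 1.16

If the old client targets a removed group, upgrading the Velero client (and matching server image) so its Deployment uses apps/v1 would be the next step.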

Nginx does not start in Vagrant/KVM

Thank you for your great tutorials! I have only a little problem: I tried your Vagrant configuration with the libvirt provider and then deployed nginx like you did in your first Vagrant tutorial, but the Nginx pod stays in the ContainerCreating state.

(I only increased the node count, memory and CPU count.)

$ kubectl get nodes    
NAME                   STATUS   ROLES    AGE   VERSION
kmaster.example.com    Ready    master   46m   v1.19.3
kworker1.example.com   Ready    <none>   44m   v1.19.3
kworker2.example.com   Ready    <none>   42m   v1.19.3
kworker3.example.com   Ready    <none>   41m   v1.19.3
kworker4.example.com   Ready    <none>   39m   v1.19.3
$ kubectl get all
NAME                         READY   STATUS              RESTARTS   AGE
pod/nginx-6799fc88d8-2rrz4   0/1     ContainerCreating   0          23m

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        36m
service/nginx        NodePort    10.107.122.204   <none>        80:30661/TCP   23m

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   0/1     1            0           23m

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-6799fc88d8   1         1         0       23m
$ kubectl describe pod nginx-6799fc88d8-2rrz4
Name:           nginx-6799fc88d8-2rrz4
Namespace:      default
Priority:       0
Node:           kworker3.example.com/192.168.121.2
Start Time:     Mon, 19 Oct 2020 18:14:28 +0000
Labels:         app=nginx
                pod-template-hash=6799fc88d8
Annotations:    <none>
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  ReplicaSet/nginx-6799fc88d8
Containers:
  nginx:
    Container ID:   
    Image:          nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-qm9h9 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  default-token-qm9h9:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-qm9h9
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age                     From               Message
  ----     ------                  ----                    ----               -------
  Normal   Scheduled               19m                     default-scheduler  Successfully assigned default/nginx-6799fc88d8-2rrz4 to kworker3.example.com
  Warning  FailedCreatePodSandBox  19m                     kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "01f272962bb529c4d498d7110b19b8971ee20414e135300457b1295284eda346" network for pod "nginx-6799fc88d8-2rrz4": networkPlugin cni failed to set up pod "nginx-6799fc88d8-2rrz4_default" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
  Warning  FailedCreatePodSandBox  19m                     kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "658290c20f5bb494ddcabaf866a7ec3dec79ca1256ff485501c850acdb1c4b9e" network for pod "nginx-6799fc88d8-2rrz4": networkPlugin cni failed to set up pod "nginx-6799fc88d8-2rrz4_default" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
  Warning  FailedCreatePodSandBox  19m                     kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "3d0dace14522e62067d440f23348c0fd2d4461af7b988123ab481de904beab23" network for pod "nginx-6799fc88d8-2rrz4": networkPlugin cni failed to set up pod "nginx-6799fc88d8-2rrz4_default" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
  Warning  FailedCreatePodSandBox  19m                     kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "7b8fd933e1328b58bb0602d035047fa5df88d4d7e46881bfc73d8d77673d9055" network for pod "nginx-6799fc88d8-2rrz4": networkPlugin cni failed to set up pod "nginx-6799fc88d8-2rrz4_default" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
  Warning  FailedCreatePodSandBox  19m                     kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "0d5c11a9e0866df8d522b5973bc1a19a668ef305624a569ce331096126536eb6" network forfc88d8-2rrz4_default" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
  Warning  FailedCreatePodSandBox  19m                     kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "9393a6bbcec331ca7db7fc2666066a0f4810beebda50ddff6f3ab49fbf68299c" network forfc88d8-2rrz4_default" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
  Warning  FailedCreatePodSandBox  19m                     kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "d9a8a50208e2c7bffb151112172ca93d6aa793f3bc9ea0bb5c085ea536f4f086" network forfc88d8-2rrz4_default" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
  Warning  FailedCreatePodSandBox  19m                     kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "5dcf80407c9fe872aeded96e85d218181e3a3d6d0cafc65813d5d6c41493e44d" network forfc88d8-2rrz4_default" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
  Warning  FailedCreatePodSandBox  19m                     kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "b2301c51bd60ea9b1b4040a2996cf85d131a90bdc0b508d55934e06179db541c" network forfc88d8-2rrz4_default" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
  Normal   SandboxChanged          14m (x291 over 19m)     kubelet            Pod sandbox changed, it will be killed and re-created.
  Warning  FailedCreatePodSandBox  9m54s (x572 over 19m)   kubelet            (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "bf73c6459839b760bab30bf478fa9098b50831234d7e0failed to set up pod "nginx-6799fc88d8-2rrz4_default" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
  Warning  FailedSync              6m54s                   kubelet            error determining status: rpc error: code = Unknown desc = Error: No such container: 60e8819e7e706df609db629a1f04ac10d87b75b10184e90937cfca48f9cc272e
  Warning  FailedCreatePodSandBox  6m52s                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "f3db3350e70f2b6f017249356b065f2f62774f4ba21b86eb629a41e4b61b12fa" network forfc88d8-2rrz4_default" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
  Warning  FailedCreatePodSandBox  6m51s                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "ab77e3e5408c462b9cd68a7a065d35f21785a56e0544e9155bc32fab8219a604" network forfc88d8-2rrz4_default" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
  Warning  FailedCreatePodSandBox  6m50s                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "20a1dd514cbb5584221a991d43acba84310f3798f50932bf3a05126d1c4f98ca" network forfc88d8-2rrz4_default" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
  Warning  FailedCreatePodSandBox  6m49s                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "19088c0d39013e2f39d4aca8d08b76b1808b5f34bb9f7b615a0ee66de22812f8" network forfc88d8-2rrz4_default" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
  Warning  FailedCreatePodSandBox  6m48s                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "a748acd5b8e2817a645bb724850d4e1a1b38d4eb4a317117f403ba82f6447445" network forfc88d8-2rrz4_default" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
  Warning  FailedCreatePodSandBox  6m47s                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "ee2d4ebb9d511d7040aa2cc37ee7bee4f1dc65ed744af8dd8afb1241ef2115a5" network forfc88d8-2rrz4_default" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
  Warning  FailedCreatePodSandBox  6m46s                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "2c4b9501bba9d51c4512246b97e6fc2f45e798d6dd471ff0f1dd25c1655e146a" network forfc88d8-2rrz4_default" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
  Warning  FailedCreatePodSandBox  6m45s                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "6548ec0cda727324679c7a49b458f5c289772ea102ee29e812b4297cfc182549" network forfc88d8-2rrz4_default" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
  Warning  FailedCreatePodSandBox  6m44s                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "76f4d50d01d17abe0f4d7ef8462f54079657a61e4bded510c63a1dc9d84cbafe" network forfc88d8-2rrz4_default" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
  Normal   SandboxChanged          6m41s (x12 over 6m52s)  kubelet            Pod sandbox changed, it will be killed and re-created.
  Warning  FailedCreatePodSandBox  114s (x279 over 6m43s)  kubelet            (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "59646b6d7785aaa3e63e70717b311bc9cdd2153c88045failed to set up pod "nginx-6799fc88d8-2rrz4_default" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
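
The "stat /var/lib/calico/nodename: no such file or directory" message usually means the calico-node DaemonSet pod on that worker never became ready, so the CNI plugin has nothing to talk to. A quick check sketch (label and container names assume the stock calico.yaml manifest):

# Is calico-node running and Ready on every node, including kworker3?
kubectl -n kube-system get pods -l k8s-app=calico-node -o wide

# Logs of the instance on the broken worker (substitute the pod name from the output above)
kubectl -n kube-system logs calico-node-xxxxx -c calico-node

If that pod is crashing or missing on the libvirt nodes, its logs point at the real cause; the nginx pod itself is only a downstream victim.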

persistentvolumeclaim/pvc-nfs-pv1 pending

I am learning dynamic provisioning but couldn't make it work properly. I have tried the original code provided by this repository and it works, but when I made a minor change to the Docker image, it stopped working.
First I ran kubectl apply -f kubernetes/yamls/nfs-provisioner/rbac.yaml. Next I ran kubectl apply -f kubernetes/yamls/nfs-provisioner/class.yaml without any error. Then I applied the following deployment:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx-deploy
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nginx-deploy
  template:
    metadata:
      labels:
        app: nginx-deploy
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: nginx 
          volumeMounts:
            - name: nfs-client-root
              mountPath: /usr/share/nginx/html
          env:
            - name: PROVISIONER_NAME
              value: example.com/nfs
            - name: NFS_SERVER
              value: 10.0.10.25
            - name: NFS_PATH
              value: /srv/nfs/kubedata
      volumes:
        - name: nfs-client-root
          nfs: 
            server: 10.0.10.25
            path: /srv/nfs/kubedata

Here is the output when I execute kubectl get pv,pvc:

NAME                                STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS          AGE
persistentvolumeclaim/pvc-nfs-pv1   Pending                                      managed-nfs-storage   112s

Here is the output when I execute kubectl get pod,deployment:

NAME                                  READY   STATUS    RESTARTS   AGE
pod/nginx-deploy-5d68cc4f69-f7jnm     1/1     Running   0          6m20s

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-deploy     1/1     1            1           6m20s

Here is the output when I execute kubectl describe persistentvolumeclaim/pvc-nfs-pv1:

Name:          pvc-nfs-pv1
Namespace:     default
StorageClass:  managed-nfs-storage
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: example.com/nfs
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Mounted By:    <none>
Events:
  Type    Reason                Age                  From                         Message
  ----    ------                ----                 ----                         -------
  Normal  ExternalProvisioning  9s (x25 over 5m51s)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "example.com/nfs" or manually created by system administrator

So why is the status still Pending, and why doesn't the claim get bound?
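
I suspect the "minor change" is exactly the problem: the Deployment now runs a plain nginx image, so no provisioner process is registered as example.com/nfs and nothing ever answers the claim - hence the PVC waits forever. A sketch of the container section with the provisioner image restored (image name and mount path assumed from the upstream nfs-client-provisioner project; run nginx in a separate Deployment if you also want a web server):

      containers:
        - name: nfs-client-provisioner
          # this image runs the controller that watches PVCs and creates PVs
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: example.com/nfs
            - name: NFS_SERVER
              value: 10.0.10.25
            - name: NFS_PATH
              value: /srv/nfs/kubedata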

Multi-tenants and ingress resource - isolating requests of one tenant to services deployed in their namespace

Hi Venkat,

I am learning from your videos to use kubernetes and related features in my day to day work. I am using single node cluster (minikube) in windows. This will be moved to on prem servers after completion of application deployment setup in my local.

I am facing a situation as stated below
-------------------------- my setup
I have one application with few micro services. I have two tenants say 'abc' and 'xyz'. For each tenant, a separate namespace is created and the application (both UI and server side) is deployed in pods, in kubernetes cluster, in different namespaces.
I have one common ingress controller
I have two ingress resources, one for each namespace (I have done that even though I can have it in single ingress resource file). End points are same as it is only one application
I have two dbs (MySQL) for two tenants in two different databases
UI is accessed in browser using different urls (my_site_ip/app_name-tenant_name)
--------------------------------------------- outcome
Requests are processed, but a request raised by one tenant is sometimes handled by services deployed in the other tenant's namespace. I understand that the ingress resources are consolidated at startup and used by the ingress controller for path routing. In my case, even though the namespaces are different, the endpoints of the application deployed in the two namespaces are the same, so one tenant's request ends up being processed by the other tenant's services - and therefore against the wrong database. To put it plainly: one person's earnings get deposited into another person's bank account.
------------------------------------------ what I want
Based on the context in the URL ('app_name-tenant_name'), I want requests to be directed to a specific namespace and to use the namespace-specific ingress resource, the services deployed in that namespace, and the database in that namespace. How can I achieve this?

Kindly let me know if more info is needed.

Thanks,
Senthil
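
One way to keep requests inside each tenant's namespace: an Ingress can only reference Services in its own namespace, so give each namespace its own Ingress whose path prefix carries the tenant name. The controller still consolidates the rules, but each rule can only ever hit that tenant's Service and therefore that tenant's database. A rough sketch for the 'abc' tenant, assuming an NGINX ingress controller and a Service named app-ui (names, paths and the rewrite annotation are illustrative, not taken from your setup):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-abc
  namespace: abc
  annotations:
    # strip the tenant prefix before the request reaches the service
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - http:
        paths:
          - path: /app-abc(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: app-ui        # must live in namespace 'abc'
                port:
                  number: 80

An identical Ingress in namespace 'xyz' with path /app-xyz(/|$)(.*) then routes the other tenant; on clusters older than 1.19 the networking.k8s.io/v1beta1 form of the same resource applies.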

sudo kubectl get nodes - The connection to the server localhost:8080 was refused

Following this tutorial https://upcloud.com/community/tutorials/deploy-kubernetes-using-kubespray/, I was able to set up 1 master node and 3 worker nodes. After I was able to SSH to the master node, I ran the following command:
sudo kubectl get nodes.
However, instead of getting the list of nodes, I got the following error:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
I also checked /etc/kubernetes/admin.conf in case a missing admin.conf file was causing the error, but the file does not exist. In fact, I couldn't find a /kubernetes directory inside /etc at all. Why is there no kubernetes/admin.conf directory and file in the first place, and how can I solve this issue? Or am I looking at the wrong etc directory? My current etc directory is located at \\wsl$\Ubuntu\etc
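
A note in case it helps: with kubespray the admin.conf is generated on the control-plane node itself, not on your local WSL filesystem, so \\wsl$\Ubuntu\etc is the wrong place to look. On the master node, the usual fix is to give your own user a kubeconfig (standard kubeadm-style steps; paths assume the defaults):

# on the master node, after SSHing in
ls -l /etc/kubernetes/admin.conf        # should exist here, on the master

mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubectl get nodes                       # no sudo needed once the kubeconfig is in place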

Joining ubuntu node to centos cluster

Hello sir. I have set up the master on CentOS by following your video, but when I try to join an Ubuntu machine as a node, I get the error below.

kubeadm join 10.1.90.159:6443 --token jybm8r.veqignykhisgnki3 --discovery-token-ca-cert-hash sha256:0a4d4025a9397ab3adfebeb4715b0fce297cd0104335d94edbd26e9f6eda608f

W0323 20:11:34.293171 2657 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/

error execution phase preflight: couldn't validate the identity of the API Server: abort connecting to API servers after timeout of 5m0s
To see the stack trace of this error execute with --v=5 or higher
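
Two things worth checking, as a sketch rather than a definitive diagnosis: the timeout means the Ubuntu box never reached the API server on 10.1.90.159:6443 (firewall/routing, or an expired token), and the cgroup-driver warning is fixed by switching Docker to systemd. The commands below are the standard kubeadm/Docker steps:

# From the Ubuntu node: can we even reach the API server port?
nc -zv 10.1.90.159 6443

# On the CentOS master: tokens expire after 24h, so mint a fresh join command
kubeadm token create --print-join-command

# On the Ubuntu node: align Docker's cgroup driver with kubelet
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl restart docker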

CoreDNS crashloopbackoff

Describe the bug
When I install k8s on Ubuntu 20, CoreDNS always goes into CrashLoopBackOff.
The version of Docker is 19.03 and the version of k8s is 1.20.
The network plugin I use is Calico. The calico-kube-controllers Pod also gets this error.
I used this doc to install.
Is there some problem with Ubuntu 20, Docker 19.03, k8s 1.20 and Calico installed together?
This has bothered me for a long time and I cannot find the right way to solve it.
I tried CentOS 7, Docker 19.03, k8s 1.20 and Calico, and it installed successfully.

How To Reproduce

Expected behavior

Screenshots (if any)
(screenshot of the failing pods attached in the original issue)

Environment (please complete the following information):

Additional context
Add any other context about the problem here.
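
A debugging sketch rather than a definitive fix: on Ubuntu 20.04 the first thing to check is the CoreDNS log itself; if it shows "Loop ... detected", kubelet is handing CoreDNS the systemd-resolved stub (127.0.0.53) as its upstream, so it forwards queries to itself. Pointing kubelet's resolvConf at the real resolver usually clears that case (paths assume the default kubeadm layout; a crashing calico-kube-controllers may have a separate cause and deserves the same log inspection):

# 1. Why exactly is CoreDNS crashing? Look at the previous (crashed) run.
kubectl -n kube-system get pods -l k8s-app=kube-dns
kubectl -n kube-system logs coredns-xxxxxxxxxx-yyyyy --previous   # substitute a real pod name

# 2. If the log shows a forwarding loop, edit /var/lib/kubelet/config.yaml on every node and set
#        resolvConf: /run/systemd/resolve/resolv.conf
#    then restart kubelet and CoreDNS:
sudo systemctl restart kubelet
kubectl -n kube-system rollout restart deployment coredns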

v1beta1 not supporting Deployment

$ kubectl create -f influxdb.yaml
service/monitoring-influxdb created
error: unable to recognize "influxdb.yaml": no matches for kind "Deployment" in version "extensions/v1beta1"
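
extensions/v1beta1 Deployments were removed in Kubernetes 1.16, so the manifest has to move to apps/v1, which also makes spec.selector mandatory. A minimal sketch of the edit (the matchLabels must copy whatever labels the influxdb pod template already carries; the label and image below are illustrative, taken from the old heapster-style manifests):

apiVersion: apps/v1          # was: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  replicas: 1
  selector:                  # required in apps/v1
    matchLabels:
      k8s-app: influxdb
  template:
    metadata:
      labels:
        k8s-app: influxdb
    spec:
      containers:
        - name: influxdb
          image: k8s.gcr.io/heapster-influxdb-amd64:v1.5.2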

kubeadm init can't run the cluster for me

I get an error when I try to bring up the cluster with this command:
kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=all

Because I live in Iran and some URLs are blocked, I have to use this proxy configuration to pull images and install packages:
cat >>/etc/apt/apt.conf<<EOF
Acquire::http::Proxy "http://192.168.1.130:8080";
Acquire::https::Proxy "http://192.168.1.130:8080";
EOF

echo 'export HTTP_PROXY="192.168.1.130:8080"' >> /root/.bashrc
echo 'export HTTPS_PROXY="192.168.1.130:8080"' >> /root/.bashrc
echo 'export NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24,192.168.1.0/24' >> /root/.bashrc
source /root/.bashrc

mkdir -p /etc/systemd/system/containerd.service.d
cat >>/etc/systemd/system/containerd.service.d/http-proxy.conf<<EOF
[Service]
Environment="HTTP_PROXY=http://192.168.1.130:8080/"
Environment="NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24,192.168.1.0/24"
EOF

systemctl daemon-reload
systemctl restart containerd
####################################################
output of init:
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

    Unfortunately, an error has occurred:
            timed out waiting for the condition

    This error is likely caused by:
            - The kubelet is not running
            - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

    If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
            - 'systemctl status kubelet'
            - 'journalctl -xeu kubelet'

    Additionally, a control plane component may have crashed or exited when started by the container runtime.
    To troubleshoot, list all containers using your preferred container runtimes CLI.

    Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
            - 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
            Once you have found the failing container, you can inspect its logs with:
            - 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'

####################################################
output of kube-controller-manager container:
leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.56.115.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=10s": Forbidden port
####################################################

ut@ut:~/kubernetes/lxd-provisioning$ lxc list

+----------+---------+----------------------+-----------------------------------------------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+----------+---------+----------------------+-----------------------------------------------+-----------+-----------+
| kmaster | RUNNING | 10.56.115.13 (eth0) | fd42:2c6f:521e:8938:216:3eff:febd:ce3d (eth0) | CONTAINER | 0 |
+----------+---------+----------------------+-----------------------------------------------+-----------+-----------+
| kworker1 | RUNNING | 10.56.115.79 (eth0) | fd42:2c6f:521e:8938:216:3eff:fe0f:7af8 (eth0) | CONTAINER | 0 |
+----------+---------+----------------------+-----------------------------------------------+-----------+-----------+
| kworker2 | RUNNING | 10.56.115.111 (eth0) | fd42:2c6f:521e:8938:216:3eff:fe4f:6f5b (eth0) | CONTAINER | 0 |
+----------+---------+----------------------+-----------------------------------------------+-----------+-----------+
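
The "Forbidden port" from kube-controller-manager suggests the control-plane components are reaching https://10.56.115.13:6443 through the proxy, because the LXD subnet is not in NO_PROXY. A sketch of a wider exclusion list, using the CIDRs visible in the lxc list output and the 10.244.0.0/16 pod CIDR from your kubeadm init command (some tools ignore CIDR entries in NO_PROXY, so listing the concrete node IPs as well does not hurt):

export NO_PROXY=localhost,127.0.0.1,10.56.115.0/24,10.56.115.13,10.56.115.79,10.56.115.111,10.96.0.0/12,10.244.0.0/16,.svc,.cluster.local

# mirror the same list for containerd
# /etc/systemd/system/containerd.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://192.168.1.130:8080/"
Environment="NO_PROXY=localhost,127.0.0.1,10.56.115.0/24,10.56.115.13,10.56.115.79,10.56.115.111,10.96.0.0/12,10.244.0.0/16,.svc,.cluster.local"

systemctl daemon-reload && systemctl restart containerd
# then reset and retry
kubeadm reset -f && kubeadm init --pod-network-cidr=10.244.0.0/16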

lxd-provisioning/bootstrap_kube.sh: after a reboot, rc.local doesn't get executed on Ubuntu

Hi,

If you reboot any Ubuntu machine, /etc/rc.local doesn't get executed, so /dev/kmsg does not exist, and kubelet fails to start.
It works OK doing this:

# Hack required to provision K8s v1.15+ in LXC containers
cat <<EOF> /etc/systemd/system/rc-local.service 
[Unit]
 Description=/etc/rc.local Compatibility
 ConditionPathExists=/etc/rc.local

[Service]
 Type=forking
 ExecStart=/etc/rc.local start
 TimeoutSec=0
 StandardOutput=tty
 RemainAfterExit=yes
 SysVStartPriority=99

[Install]
 WantedBy=multi-user.target
EOF

cat <<EOF> /etc/rc.local
#!/bin/bash
mknod /dev/kmsg c 1 11
EOF

chmod +x /etc/rc.local
systemctl enable rc-local > /dev/null 2>&1
systemctl start rc-local

kmaster: cp: cannot stat ‘/etc/kubernetes/admin.conf’: No such file or directory

Describe the bug

==> kmaster: Running provisioner: shell...
kmaster: Running: /var/folders/jr/kc1rdmj10jb4p1hrw77zttq00000gn/T/vagrant-shell20190425-22062-18oa9b9.sh
kmaster: [TASK 1] Update /etc/hosts file
kmaster: [TASK 2] Install docker container engine
kmaster: [TASK 3] Enable and start docker service
kmaster: [TASK 4] Disable SELinux
kmaster: [TASK 5] Stop and Disable firewalld
kmaster: [TASK 6] Add sysctl settings
kmaster: [TASK 7] Disable and turn off SWAP
kmaster: [TASK 8] Add yum repo file for kubernetes
kmaster: [TASK 9] Install Kubernetes (kubeadm, kubelet and kubectl)
kmaster: [TASK 10] Enable and start kubelet service
kmaster: [TASK 11] Enable ssh password authentication
kmaster: [TASK 12] Set root password
==> kmaster: Running provisioner: shell...
kmaster: Running: /var/folders/jr/kc1rdmj10jb4p1hrw77zttq00000gn/T/vagrant-shell20190425-22062-jr3ng8.sh
kmaster: [TASK 1] Initialize Kubernetes Cluster
kmaster: [TASK 2] Copy kube admin config to Vagrant user .kube directory
kmaster: cp: cannot stat ‘/etc/kubernetes/admin.conf’: No such file or directory
kmaster: [TASK 3] Deploy flannel network
kmaster: -bash: kubectl: command not found
kmaster: [TASK 4] Generate and save cluster join command to /joincluster.sh
kmaster: /tmp/vagrant-shell: line 19: kubeadm: command not found
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
How To Reproduce

vagrant up
Expected behavior

Screenshots (if any)

Environment (please complete the following information):

Mac/vagrant/virtualbox

Additional context
Add any other context about the problem here.
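
A debugging sketch, since the provisioning scripts hide most of their output: "kubectl: command not found" right after TASK 9 suggests the package install itself failed silently, which then cascades into the missing admin.conf. Re-running the failing step by hand inside the VM usually surfaces the real error:

vagrant ssh kmaster

# did the kube packages actually get installed?
rpm -q kubeadm kubelet kubectl

# if not, repeat the install in the open so yum can report the real failure
sudo yum install -y kubeadm kubelet kubectl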

Support custom k8s master LXC container name

Describe the bug
Naming the Kubernetes master LXC container anything other than kmaster prevents worker containers from joining the cluster. This is caused by line 91 in bootstrap-kube.sh where kmaster.lxd is hardcoded.

How To Reproduce
Name the master container something else than kmaster like k8s-master and create a worker container. Run the bootstrap script in both containers. Do lxc list and notice how the worker container does not have flannel.1 in the IPV4 column. kubectl get nodes also doesn't show the worker node.

Changing line 91 in the script to the name you have given the master container (k8s-master.lxd in my case) fixes the issue.

Expected behavior
Worker nodes should be able to join the cluster regardless of the master node container's name.

Screenshots (if any)

Environment (please complete the following information):
Ubuntu: 18.04.3
LXC/LXD: 3.0.3
kubectl client: 1.17.1
kubectl server: 1.17.1

Additional context
Fixing this issue is probably as simple as asking the user to enter the name they have given their LXC master container. I have tried to change the script, but I don't know how to implement user input without breaking cat bootstrap-kube.sh | lxc exec <NAME> bash. I have 0 experience with shell scripting.

EDIT: perhaps as an argument instead of user input? I'll give it a try later.
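
One low-effort approach that keeps the cat ... | lxc exec ... bash pipeline intact is an environment variable with kmaster.lxd as the default, passed through lxc exec's env wrapper. A sketch (untested; the variable name is my own, not from the repo):

# in bootstrap-kube.sh, replace the hardcoded kmaster.lxd (line 91) with a defaulted variable:
MASTER_HOST="${MASTER_HOST:-kmaster.lxd}"
# ...and refer to "$MASTER_HOST" wherever kmaster.lxd appeared.

# the pipe-based invocation keeps working; override only when the name differs:
cat bootstrap-kube.sh | lxc exec kworker1 -- env MASTER_HOST=k8s-master.lxd bash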

Cluster autoscaler for AWS Provider

Hi Venkat,

I was trying to deploy the cluster autoscaler on my Kubernetes cluster deployed by kubeadm; my reference link is attached below. I am encountering the following issues after installing it with the help of Helm. BTW, I am using AWS as my cloud provider.

Scenario1: Allowing cluster autoscaler to autodiscover ASG information.
Error:

E0624 11:30:31.468001 1 aws_manager.go:265] Failed to regenerate ASG cache: cannot autodiscover ASGs: RequestError: send request failed
caused by: Post "https://autoscaling.us-west-2.amazonaws.com/ ": dial tcp: i/o timeout
F0624 11:30:31.468022 1 aws_cloud_provider.go:382] Failed to create AWS Manager: cannot autodiscover ASGs: RequestError: send request failed
caused by: Post "https://autoscaling.us-west-2.amazonaws.com/ ": dial tcp: i/o timeout

scenario2: I added Asg information manually by specifying name
Error:
W0624 08:29:16.920026 1 aws_util.go:79] Error fetching https://pricing.us-east-1.amazonaws.com/offers/v1.0/aws/AmazonEC2/current/us-west-2/index.json skipping...
F0624 08:29:16.920062 1 aws_cloud_provider.go:358] Failed to generate AWS EC2 Instance Types: unable to load EC2 Instance Type list

Reference link which i took to deploy cluster autoscaler:
https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md

If you could record a video, it would be great :)
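
Both errors boil down to the autoscaler pod not being able to reach AWS endpoints (autoscaling.us-west-2.amazonaws.com and the pricing JSON), so before touching the chart values it is worth proving basic DNS and egress from the cluster. A quick check sketch:

# DNS resolution from inside the cluster
kubectl run -it --rm dnscheck --image=busybox --restart=Never -- nslookup autoscaling.us-west-2.amazonaws.com

# raw HTTPS egress from the node the autoscaler runs on
curl -sSI https://pricing.us-east-1.amazonaws.com/offers/v1.0/aws/AmazonEC2/current/us-west-2/index.json | head -3

If the node-level curl succeeds but the in-cluster lookup or connection fails, the problem is cluster DNS or the CNI/egress path rather than the autoscaler configuration itself.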
