The install.sh script provides a convenient way to download K3s and register it as a systemd or OpenRC service. To install K3s as a service, just run:
curl -sfL https://get.k3s.io | sh -
A kubeconfig file is written to /etc/rancher/k3s/k3s.yaml and the service is started (or restarted) automatically. The install script also installs additional utilities such as kubectl, crictl, k3s-killall.sh, and k3s-uninstall.sh. For example:
sudo kubectl get nodes
The node token is created at /var/lib/rancher/k3s/server/node-token on the server. To install K3s on worker nodes, pass the K3S_URL environment variable along with K3S_TOKEN or K3S_CLUSTER_SECRET, for example:
curl -sfL https://get.k3s.io | K3S_URL=https://myserver:6443 K3S_TOKEN=XXX sh -
- create a namespace and run a pod from pod.yaml (sketched after the commands below)
kubectl create namespace suse-k8s
kubectl apply -f pod.yaml
kubectl logs myapp-pod
kubectl get po -w
kubectl delete po myapp-pod
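A minimal sketch of the pod.yaml applied above; only the myapp-pod name comes from the commands, the image and port are assumptions for illustration:
# pod.yaml -- sketch; image and port are assumed
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp
    image: nginx:1.16-alpine   # assumed image
    ports:
    - containerPort: 80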
- we can launch things ad hoc with kubectl create, but this isn't repeatable
kubectl create deploy nginx --image=nginx:1.16-alpine
kubectl get deploy
kubectl get po
kubectl delete deploy/nginx
- launch again using kustomize templates
kubectl create deploy nginx --image=nginx:1.16-alpine --dry-run=client -o yaml > deployment/base/deployment.yaml
kubectl apply -k deployment/base
kubectl get deploy
kubectl get po
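For apply -k to work, deployment/base also needs a kustomization.yaml next to the generated manifest; a minimal sketch, assuming deployment.yaml is the only resource:
# deployment/base/kustomization.yaml -- sketch; assumes deployment.yaml is the only resource
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml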
- describe pod, look at the image
- scale deployment manually
kubectl scale deploy/nginx --replicas=3
kubectl rollout status deploy/nginx
kubectl get deploy
kubectl get po
- upgrade with a bad image (the misspelled tag below is intentional)
kubectl set image deploy/nginx nginx=nginx:1.17-alpne --record
kubectl rollout status deploy/nginx
kubectl get po
kubectl rollout undo deploy/nginx
- redo upgrade from manifest
kustomize build deployment/base
- edit base to change image and then apply
kubectl apply -k deployment/base
- how can we use this for different environments?
kustomize build deployment/overlay/staging
kustomize build deployment/overlay/production
kubectl apply -k deployment/overlay/staging
kubectl apply -k deployment/overlay/production
kubectl get deploy
kubectl get pods
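One way the staging overlay could look; the name prefix and image tag here are assumptions, only the directory layout comes from the commands above, and the production overlay would differ only in those values:
# deployment/overlay/staging/kustomization.yaml -- sketch; prefix and tag are assumptions
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namePrefix: staging-
resources:
- ../../base
images:
- name: nginx
  newTag: 1.16-alpine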
- show ConfigMaps
- explain what they're for
- explain how they're generated by Kustomize
- they'll show up later
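A sketch of how Kustomize can generate those ConfigMaps from a kustomization.yaml; the name and literal are assumptions. Kustomize appends a hash of the contents to the generated name, so a changed value rolls out as a new ConfigMap:
# configMapGenerator sketch in a kustomization.yaml; name and key are assumptions
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
- name: nginx-config
  literals:
  - ENVIRONMENT=staging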
- show services listening as NodePort
- go look at them
curl -I training-a:<port>
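A sketch of a NodePort Service like the ones listed above; the selector and port numbers are assumptions, and Kubernetes picks the node port from 30000-32767 if it isn't set explicitly:
# NodePort Service sketch; selector and ports are assumptions
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80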
- show deployment/overlay/ingress/single/ingress.yaml
kubectl apply -k deployment/overlay/ingress/single
kubectl get ingress
- visit https://training-a.cl.monach.us
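What the single ingress.yaml might look like; a sketch assuming the nginx Service above as the backend (K3s ships Traefik as its default ingress controller):
# deployment/overlay/ingress/single/ingress.yaml -- sketch; backend name is an assumption
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
spec:
  rules:
  - host: training-a.cl.monach.us
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80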
- what about multiple apps?
kustomize build deployment/overlay/ingress/fanout
kubectl apply -k deployment/overlay/ingress/fanout
- visit https://training-a.cl.monach.us (fail)
- visit https://training.cl.monach.us/nginx (works)
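A sketch of the fanout variant: one shared host routes by path, which is why the per-app hostname stops answering while /nginx works. The backend service name is an assumption:
# fanout Ingress sketch; backend name is an assumption
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fanout
spec:
  rules:
  - host: training.cl.monach.us
    http:
      paths:
      - path: /nginx
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80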
- deploy rancher-demo application
kustomize build rancher-demo/base
kubectl apply -k rancher-demo/base
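A sketch of a Deployment that rancher-demo/base could contain; the replica count is an assumption, while the image and port come from the Rancher workload steps later in these notes:
# rancher-demo/base/deployment.yaml -- sketch; replica count is an assumption
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rancher-demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: rancher-demo
  template:
    metadata:
      labels:
        app: rancher-demo
    spec:
      containers:
      - name: rancher-demo
        image: monachus/rancher-demo
        ports:
        - containerPort: 8080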
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.2/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.2/manifests/metallb.yaml
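The two manifests above install MetalLB, which provides LoadBalancer IPs on bare metal. In v0.10.x it is configured with a ConfigMap named config in the metallb-system namespace; a sketch with an assumed Layer 2 address range:
# MetalLB v0.10.x config sketch; the address range is an assumption, adjust it to your network
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250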
docker run -d --restart=unless-stopped -p 80:80 -p 443:443 -v /opt/rancher:/var/lib/rancher rancher/rancher:v2.4.5
- Show how we would deploy an RKE cluster
- Import the training-a k3s cluster
- Clusters
- Authentication & Security
- Storage
- Projects
- Namespaces
- Catalogs
- CLI/API/Kubectl
- show workloads on running cluster
- edit them / delete them
- redeploy monachus/rancher-demo as a workload
- expose port 8080
- put an Ingress in front of it
- use training-a.cl.monach.us
- use
- scale it
- use the Hadoop example