containercraft / kargo
KubeVirt Private Cloud Hypervisor
License: GNU General Public License v3.0
On reboot, my node was failing to bring up the hostpath-provisioner
and kube-cni-linux-bridge
pods due to a change in the Multus clusterrolebinding. I'm not sure why it is happening, but others have run into this before: k8snetworkplumbingwg/multus-cni#667
❯ oc get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
cdi cdi-apiserver-78bcbcc8ff-768lf 1/1 Running 2 4d5h
cdi cdi-deployment-6ccdf4fb64-qj6m4 1/1 Running 2 4d5h
cdi cdi-operator-54d5bbbdd9-mhzcj 0/1 Completed 1 4d5h
cdi cdi-uploadproxy-649757bfb5-kjdbh 1/1 Running 2 4d5h
cert-manager cert-manager-57d89b9548-f4w6n 1/1 Running 2 4d5h
cert-manager cert-manager-cainjector-5bcf77b697-q9g8d 0/1 Completed 1 4d5h
cert-manager cert-manager-webhook-9cb88bd6d-ks6qf 1/1 Running 2 4d5h
cluster-network-addons bridge-marker-c9vcb 1/1 Running 2 4d5h
cluster-network-addons cluster-network-addons-operator-549b8f8966-rbxmf 0/1 Completed 1 4d5h
cluster-network-addons kube-cni-linux-bridge-plugin-4jhc4 0/1 Error 1 4d5h
cluster-network-addons kubemacpool-cert-manager-68f745946c-jjx8h 0/1 Completed 1 4d5h
cluster-network-addons kubemacpool-mac-controller-manager-868f5c6946-jrj9s 1/1 Running 2 4d5h
cluster-network-addons macvtap-cni-wwc95 1/1 Running 2 4d5h
cluster-network-addons multus-pgfff 1/1 Running 2 4d5h
cluster-network-addons nmstate-cert-manager-748d47479f-7thlt 0/1 Completed 1 4d5h
cluster-network-addons nmstate-handler-kt5zf 1/1 Running 2 4d5h
cluster-network-addons nmstate-webhook-7c56958777-4k6wf 1/1 Running 2 4d5h
cluster-network-addons nmstate-webhook-7c56958777-bhssb 1/1 Running 2 4d5h
cluster-network-addons ovs-cni-amd64-xhrhv 1/1 Running 2 4d5h
hostpath-provisioner hostpath-provisioner-j6qw6 0/1 Error 1 4d5h
hostpath-provisioner hostpath-provisioner-operator-b8bf65759-rjmhf 0/1 Completed 1 4d5h
kube-system calico-kube-controllers-8575b76f66-pvvmm 1/1 Running 2 4d5h
kube-system calico-node-9xhpj 1/1 Running 2 4d5h
kube-system coredns-8474476ff8-s8tw7 1/1 Running 2 4d5h
kube-system kube-apiserver-node1 1/1 Running 2 4d5h
kube-system kube-controller-manager-node1 1/1 Running 2 4d5h
kube-system kube-multus-ds-amd64-8pl5l 1/1 Running 2 4d5h
kube-system kube-multus-ds-dwrs5 1/1 Running 2 4d5h
kube-system kube-proxy-8595j 1/1 Running 2 4d5h
kube-system kube-scheduler-node1 1/1 Running 2 4d5h
kube-system nodelocaldns-t2twd 1/1 Running 2 4d5h
kubevirt virt-api-794854d7f4-4zk98 1/1 Running 2 4d5h
kubevirt virt-api-794854d7f4-shsh6 1/1 Running 2 4d5h
kubevirt virt-controller-974f9b54d-24kbl 1/1 Running 2 4d5h
kubevirt virt-controller-974f9b54d-vwg99 1/1 Running 2 4d5h
kubevirt virt-handler-v4sxv 1/1 Running 2 4d5h
kubevirt virt-operator-5c69b784bc-4bcnr 1/1 Running 2 4d5h
kubevirt virt-operator-5c69b784bc-fsbc9 1/1 Running 2 4d5h
with the following event:
20m Warning FailedCreatePodSandBox pod/hostpath-provisioner-operator-bd4966b44-d6cm4 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_hostpath-provisioner-operator-bd4966b44-d6cm4_hostpath-provisioner_63b16bb7-626f-426b-8beb-d90f7c7b29d0_0(ef47ddf1e279a20bf3c914129f1b52cf5eddc1f88c5fb882570db79b33046cd2): Multus: [hostpath-provisioner/hostpath-provisioner-operator-bd4966b44-d6cm4]: error getting pod: pods "hostpath-provisioner-operator-bd4966b44-d6cm4" is forbidden: User "system:serviceaccount:kube-system:multus" cannot get resource "pods" in API group "" in the namespace "hostpath-provisioner"
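The "forbidden" error above can be reproduced directly, without waiting for a sandbox retry, by impersonating the service account the event names. This is just a diagnostic sketch I'd suggest, not something from the linked ticket:

```shell
# Ask the API server whether the multus service account (as named in the
# event message) can get pods in the affected namespace. While the
# clusterrolebinding points at the wrong namespace this should answer "no".
kubectl auth can-i get pods \
  --as=system:serviceaccount:kube-system:multus \
  -n hostpath-provisioner
```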
Following along with the ticket, it does seem that the namespace
for the multus service account has been changed to cluster-network-addons:
~
❯ kubectl get clusterrolebinding multus -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRoleBinding","metadata":{"annotations":{},"name":"multus"},"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"ClusterRole","name":"multus"},"subjects":[{"kind":"ServiceAccount","name":"multus","namespace":"kube-system"}]}
creationTimestamp: "2021-11-06T14:44:55Z"
labels:
app.kubernetes.io/component: network
app.kubernetes.io/managed-by: Helm
networkaddonsoperator.network.kubevirt.io/version: 0.58.2
prometheus.cnao.io: ""
name: multus
ownerReferences:
- apiVersion: networkaddonsoperator.network.kubevirt.io/v1
blockOwnerDeletion: true
controller: true
kind: NetworkAddonsConfig
name: cluster
uid: f9bd9f09-4c28-48eb-8bd7-0172b9d8c0ef
resourceVersion: "2132"
uid: 2c536bd8-810d-41f0-b810-3c24bf434eb2
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: multus
subjects:
- kind: ServiceAccount
name: multus
namespace: cluster-network-addons
Editing the subject namespace from cluster-network-addons
back to kube-system
allows the pods to be created:
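For reference, the same edit can be made non-interactively. This is my own one-liner equivalent of the manual edit, assuming the subject to change is the first (and only) entry in the subjects list shown above:

```shell
# Patch the ClusterRoleBinding subject namespace back to kube-system
# using a JSON patch (replaces /subjects/0/namespace in place).
kubectl patch clusterrolebinding multus --type=json \
  -p='[{"op":"replace","path":"/subjects/0/namespace","value":"kube-system"}]'
```

Note the operator owns this object (see the ownerReferences above), so it may be reverted on the next reconcile.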
~
❯ oc get events -n hostpath-provisioner --sort-by=.metadata.creationTimestamp
LAST SEEN TYPE REASON OBJECT MESSAGE
9m17s Normal SandboxChanged pod/hostpath-provisioner-j6qw6 Pod sandbox changed, it will be killed and re-created.
9m18s Normal SandboxChanged pod/hostpath-provisioner-operator-b8bf65759-rjmhf Pod sandbox changed, it will be killed and re-created.
27m Normal ScalingReplicaSet deployment/hostpath-provisioner-operator Scaled up replica set hostpath-provisioner-operator-bd4966b44 to 1
27m Normal SuccessfulCreate replicaset/hostpath-provisioner-operator-bd4966b44 Created pod: hostpath-provisioner-operator-bd4966b44-d6cm4
27m Normal Scheduled pod/hostpath-provisioner-operator-bd4966b44-d6cm4 Successfully assigned hostpath-provisioner/hostpath-provisioner-operator-bd4966b44-d6cm4 to node1
...
26m Warning FailedCreatePodSandBox pod/hostpath-provisioner-operator-bd4966b44-d6cm4 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_hostpath-provisioner-operator-bd4966b44-d6cm4_hostpath-provisioner_63b16bb7-626f-426b-8beb-d90f7c7b29d0_0(0112e752706fd4205779caea7dac919f26926e70a9d77e8ed9d78192c163745c): Multus: [hostpath-provisioner/hostpath-provisioner-operator-bd4966b44-d6cm4]: error getting pod: pods "hostpath-provisioner-operator-bd4966b44-d6cm4" is forbidden: User "system:serviceaccount:kube-system:multus" cannot get resource "pods" in API group "" in the namespace "hostpath-provisioner"
...
7m5s Warning FailedCreatePodSandBox pod/hostpath-provisioner-operator-bd4966b44-d6cm4 (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_hostpath-provisioner-operator-bd4966b44-d6cm4_hostpath-provisioner_63b16bb7-626f-426b-8beb-d90f7c7b29d0_0(3d22dd7351133fe957b342b7a504bdac85c068169411c8ced07742c990b331a0): Multus: [hostpath-provisioner/hostpath-provisioner-operator-bd4966b44-d6cm4]: error getting pod: pods "hostpath-provisioner-operator-bd4966b44-d6cm4" is forbidden: User "system:serviceaccount:kube-system:multus" cannot get resource "pods" in API group "" in the namespace "hostpath-provisioner"
6m26s Normal AddedInterface pod/hostpath-provisioner-operator-b8bf65759-rjmhf Add eth0 [10.233.90.98/32] from cni0
6m23s Normal AddedInterface pod/hostpath-provisioner-j6qw6 Add eth0 [10.233.90.101/32] from cni0
6m22s Normal SuccessfulDelete replicaset/hostpath-provisioner-operator-b8bf65759 Deleted pod: hostpath-provisioner-operator-b8bf65759-rjmhf
6m22s Normal AddedInterface pod/hostpath-provisioner-operator-bd4966b44-d6cm4 Add eth0 [10.233.90.102/32] from cni0
6m22s Normal ScalingReplicaSet deployment/hostpath-provisioner-operator Scaled down replica set hostpath-provisioner-operator-b8bf65759 to 0
~
❯ oc get pods -n hostpath-provisioner
NAME READY STATUS RESTARTS AGE
hostpath-provisioner-j6qw6 1/1 Running 1 4d5h
hostpath-provisioner-operator-bd4966b44-d6cm4 1/1 Running 0 28m
Based on the response in the issue, this may be due to how we are installing things. Just wanted to report this, with a patch, in case others run into it.
Following the guide from https://github.com/ContainerCraft/100DaysOfHomelab/blob/046351075822905b0bf40ee02d9ca45f73822061/doc/KARGO.md, Kargo is failing to install for the reason below:
Error: rendered manifests contain a resource that already exists. Unable to
continue with install: ClusterRole "cdi-operator-cluster" in namespace ""
exists and cannot be imported into the current release: invalid ownership
metadata; label validation error: missing key "app.kubernetes.io/managed-by":
must be set to "Helm"; annotation validation error: missing key
"meta.helm.sh/release-name": must be set to "kargo"; annotation validation
error: missing key "meta.helm.sh/release-namespace": must be set to "kargo"
~ 602ms
❯ export KUBECONFIG=~/.kube/kubespray
~ 1s
❯ helm install hostpath-provisioner ccio/hostpath-provisioner --namespace hostpath-provisioner --create-namespace
NAME: hostpath-provisioner
LAST DEPLOYED: Sat Nov 6 09:51:26 2021
NAMESPACE: hostpath-provisioner
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Hostpath Provisioner uses node filesystem path /var/hpvolumes for pv storage
ABOUT:
Github:
- https://github.com/kubevirt/hostpath-provisioner
- https://github.com/kubevirt/hostpath-provisioner-operator
Examples:
- https://github.com/kubevirt/hostpath-provisioner/tree/main/examples
Check status of deployment:
~$ kubectl get po -n hostpath-provisioner
~$ kubectl get sc
Optional - Set default Storage Class:
~$ kubectl patch storageclass hostpath-provisioner -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
❯ kubectl patch storageclass hostpath-provisioner -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
storageclass.storage.k8s.io/hostpath-provisioner patched
~
❯
❯ helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --set installCRDs=true
NAME: cert-manager
LAST DEPLOYED: Sat Nov 6 09:52:16 2021
NAMESPACE: cert-manager
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
cert-manager v1.6.1 has been deployed successfully!
In order to begin issuing certificates, you will need to set up a ClusterIssuer
or Issuer resource (for example, by creating a 'letsencrypt-staging' issuer).
More information on the different types of issuers and how to configure them
can be found in our documentation:
https://cert-manager.io/docs/configuration/
For information on how to configure cert-manager to automatically provision
Certificates for Ingress resources, take a look at the `ingress-shim`
documentation:
https://cert-manager.io/docs/usage/ingress/
~
❯ helm install cluster-network-addons ccio/cluster-network-addons --namespace cluster-network-addons --create-namespace
NAME: cluster-network-addons
LAST DEPLOYED: Sat Nov 6 09:53:42 2021
NAMESPACE: cluster-network-addons
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Network: Cluster Network Addons github.com/kubevirt/cluster-network-addons-operator
~
❯ kubectl create ns kubevirt
namespace/kubevirt created
~
❯ helm install kargo ccio/kargo --namespace kargo --create-namespace
Error: rendered manifests contain a resource that already exists. Unable to continue with install: ClusterRole "cdi-operator-cluster" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "kargo"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "kargo"
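One possible workaround, which is my own assumption rather than anything from the Kargo docs: Helm 3 will adopt a pre-existing resource into a release if it carries exactly the ownership metadata the error message lists. That can be added by hand before retrying the install:

```shell
# Hypothetical adoption of the leftover ClusterRole into the kargo release:
# add the label and annotations Helm validates against, then retry.
kubectl label clusterrole cdi-operator-cluster \
  app.kubernetes.io/managed-by=Helm --overwrite
kubectl annotate clusterrole cdi-operator-cluster \
  meta.helm.sh/release-name=kargo \
  meta.helm.sh/release-namespace=kargo --overwrite
helm install kargo ccio/kargo --namespace kargo --create-namespace
```

Deleting the stray ClusterRole instead would also clear the error, but adoption avoids a window where the CDI operator has no RBAC.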
Provision a base Fedora RPi4, disable the 4G RAM limit, then attempt to run kubespray against the new system.
Problem: many repetitive typed fields are required to deploy similar but different VMs.
Solution: Capabilities like OpenShift's 'Template' CRD allow simple iterative recycling of YAML manifests in ways that are very useful for KubeVirt hypervisor utilization.
Question: Can the Template CRD be deployed on vanilla Kubernetes?