Comments (9)
I think the ClusterVersion CRD schema introduces the problem. It is not a good idea to embed the full StatefulSet schema directly in the CRD spec; we should define a Pod spec instead.
To quickly work around the problem, please change the ClusterVersion CRD schema to the much simpler version shown below. I hope this unblocks you, at least for now.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  creationTimestamp: null
  labels:
    controller-tools.k8s.io: "1.0"
  name: clusterversions.tenancy.x-k8s.io
spec:
  group: tenancy.x-k8s.io
  names:
    kind: ClusterVersion
    plural: clusterversions
  scope: Cluster
  validation:
    openAPIV3Schema:
      properties:
        apiVersion:
          type: string
        kind:
          type: string
        metadata:
          type: object
        spec:
          properties:
            apiServer:
              properties:
                metadata:
                  type: object
                service:
                  type: object
                statefulset:
                  type: object
              type: object
            controllerManager:
              properties:
                metadata:
                  type: object
                service:
                  type: object
                statefulset:
                  type: object
              type: object
            etcd:
              properties:
                metadata:
                  type: object
                service:
                  type: object
                statefulset:
                  type: object
              type: object
          type: object
        status:
          type: object
      type: object
  version: v1alpha1
status:
  acceptedNames:
    kind: ""
    plural: ""
  conditions: []
  storedVersions: []
from cluster-api-provider-nested.
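For illustration, a hypothetical ClusterVersion instance (the name and image are invented, not from this issue) that the simplified schema accepts: because service and statefulset are declared as opaque objects, none of their nested fields are validated.

```yaml
# Hypothetical example only: values are placeholders.
apiVersion: tenancy.x-k8s.io/v1alpha1
kind: ClusterVersion
metadata:
  name: cv-sample
spec:
  etcd:
    service:
      spec:
        clusterIP: None       # nested fields are no longer validated
    statefulset:
      spec:
        replicas: 1
        template:
          spec:
            containers:
            - name: etcd
              image: example/etcd:v3.4.0   # placeholder image
```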
Another option is to remove your local controller-gen; the make script will then download controller-gen 0.3.0, which seemed to work fine previously.
You can check the Makefile for more tricks for manipulating the CRD, e.g.:
# To work around a known controller gen issue
# https://github.com/kubernetes-sigs/kubebuilder/issues/1544
ifeq (, $(shell which yq))
	@echo "Please install yq for yaml patching. Get it from here: https://github.com/mikefarah/yq"
	@exit
else
	@{ \
	yq w -i config/crds/tenancy.x-k8s.io_clusterversions.yaml "spec.validation.openAPIV3Schema.properties.spec.properties.apiServer.properties.statefulset.properties.spec.properties.template.properties.spec.properties.containers.items.properties.ports.items.required[1]" protocol;\
	yq w -i config/crds/tenancy.x-k8s.io_clusterversions.yaml "spec.validation.openAPIV3Schema.properties.spec.properties.controllerManager.properties.statefulset.properties.spec.properties.template.properties.spec.properties.containers.items.properties.ports.items.required[1]" protocol;\
	yq w -i config/crds/tenancy.x-k8s.io_clusterversions.yaml "spec.validation.openAPIV3Schema.properties.spec.properties.etcd.properties.statefulset.properties.spec.properties.template.properties.spec.properties.containers.items.properties.ports.items.required[1]" protocol;\
	yq w -i config/crds/tenancy.x-k8s.io_clusterversions.yaml "spec.validation.openAPIV3Schema.properties.spec.properties.apiServer.properties.statefulset.properties.spec.properties.template.properties.spec.properties.initContainers.items.properties.ports.items.required[1]" protocol;\
	yq w -i config/crds/tenancy.x-k8s.io_clusterversions.yaml "spec.validation.openAPIV3Schema.properties.spec.properties.controllerManager.properties.statefulset.properties.spec.properties.template.properties.spec.properties.initContainers.items.properties.ports.items.required[1]" protocol;\
	yq w -i config/crds/tenancy.x-k8s.io_clusterversions.yaml "spec.validation.openAPIV3Schema.properties.spec.properties.etcd.properties.statefulset.properties.spec.properties.template.properties.spec.properties.initContainers.items.properties.ports.items.required[1]" protocol;\
	yq w -i config/crds/tenancy.x-k8s.io_clusterversions.yaml "spec.validation.openAPIV3Schema.properties.spec.properties.apiServer.properties.service.properties.spec.properties.ports.items.required[1]" protocol;\
	yq w -i config/crds/tenancy.x-k8s.io_clusterversions.yaml "spec.validation.openAPIV3Schema.properties.spec.properties.controllerManager.properties.service.properties.spec.properties.ports.items.required[1]" protocol;\
	yq w -i config/crds/tenancy.x-k8s.io_clusterversions.yaml "spec.validation.openAPIV3Schema.properties.spec.properties.etcd.properties.service.properties.spec.properties.ports.items.required[1]" protocol;\
	}
endif
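For reference, here is a stand-alone Python sketch of what each yq write above does to an already-parsed schema dict: walk down a property path and force required[1] to be "protocol". The toy path here is far shallower than the real CRD paths, and the helper name is invented for illustration.

```python
# Sketch of the yq `required[1] = protocol` patch on a parsed schema dict.
# set_required_protocol is a hypothetical helper, not part of the repo.

def set_required_protocol(schema, path):
    """Walk `path` under `schema`, then ensure required[1] == "protocol"."""
    node = schema
    for key in path:
        node = node.setdefault(key, {})
    required = node.setdefault("required", [])
    # yq's indexed write: pad the list if needed, then set index 1.
    while len(required) < 2:
        required.append(None)
    required[1] = "protocol"

# Toy stand-in for the parsed CRD (real paths are much deeper).
crd = {"ports": {"items": {"required": ["containerPort"]}}}
set_required_protocol(crd, ["ports", "items"])
print(crd["ports"]["items"]["required"])  # ['containerPort', 'protocol']
```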
Same result with controller-gen 0.3.0. I think x-kubernetes-list-map-keys was added in 0.3.0, but at that time there was no validation in place. However, your workaround fixed the issue. Here is my first virtual cluster:
$ kubectl -n default-c16bb7-vc-sample-1 get all
NAME                       READY   STATUS    RESTARTS   AGE
pod/apiserver-0            1/1     Running   0          5m26s
pod/controller-manager-0   1/1     Running   0          4m59s
pod/etcd-0                 1/1     Running   0          5m50s

NAME                    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
service/apiserver-svc   NodePort    10.90.147.83   <none>        6443:30133/TCP   5m26s
service/etcd            ClusterIP   None           <none>        <none>           5m50s

NAME                                  READY   AGE
statefulset.apps/apiserver            1/1     5m26s
statefulset.apps/controller-manager   1/1     5m
statefulset.apps/etcd                 1/1     5m50s
Is there a way to enforce a specific runtimeClassName for pods with the syncer? This would be great for enforcing tolerations and a container runtime like Kata for pods running on the super cluster.
I had forgotten a make manifests ...
It works fine with controller-gen 0.3.0 and the workaround too :)
Is there a way to enforce a specific runtimeClassName for pods with the syncer? This would be great for enforcing tolerations and a container runtime like Kata for pods running on the super cluster.
If the vPod specifies runtimeClassName as Kata, it should work. If you want to enforce/overwrite the vPod's runtimeClassName so that it is always Kata, you need to change the syncer code.
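Such a syncer change would amount to a pod mutation during downward sync. A minimal sketch of the idea (hypothetical: it operates on a plain pod-spec dict rather than the syncer's real corev1.Pod objects, and the "kata" class name and toleration are assumptions, not values from this project):

```python
# Hypothetical sketch: force every pod synced down to the super cluster
# to run under an assumed "kata" runtime class with a matching toleration.
KATA_RUNTIME_CLASS = "kata"  # assumed runtime class name

def enforce_runtime_class(pod_spec):
    """Overwrite runtimeClassName and add a toleration on a pod-spec dict."""
    pod_spec["runtimeClassName"] = KATA_RUNTIME_CLASS
    tolerations = pod_spec.setdefault("tolerations", [])
    toleration = {"key": "kata", "operator": "Exists", "effect": "NoSchedule"}
    if toleration not in tolerations:
        tolerations.append(toleration)
    return pod_spec

spec = {"containers": [{"name": "app", "image": "nginx"}]}
print(enforce_runtime_class(spec)["runtimeClassName"])  # kata
```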
/retitle 🐛 Unable to create a VirtualCluster on k8s v1.20.2
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/lifecycle frozen
We have another issue creating a VirtualCluster in 1.20, where we have apiserver v1.19:
{"level":"error","ts":1664969234.2211943,"logger":"controller-runtime.manager.controller.virtualcluster","msg":"Reconciler error","reconciler group":"tenancy.x-k8s.io","reconciler kind":"VirtualCluster","name":"test","namespace":"default","error":"VirtualCluster.tenancy.x-k8s.io \"test\" is invalid: [status.reason: Invalid value: \"null\": status.reason in body must be of type string: \"null\", status.message: Invalid value: \"null\": status.message in body must be of type string: \"null\", status.phase: Invalid value: \"null\": status.phase in body must be of type string: \"null\"]"}
It is fixed in kubernetes/kubernetes#95423, and I will shortly test converting the fields to pointers so that it is compatible with 1.19 too: fluid-cloudnative/fluid#1551 (comment)
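As an untested schema-side alternative (my own assumption, not the fix referenced above), the status string fields could be marked nullable in the CRD schema so that an explicit null passes validation on older apiservers:

```yaml
# Hypothetical workaround (untested): allow null on the status strings.
status:
  type: object
  properties:
    phase:
      type: string
      nullable: true
    message:
      type: string
      nullable: true
    reason:
      type: string
      nullable: true
```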