An example of integrating Ceph RBD with a Kubernetes StorageClass. The CSI driver images have been synced to an Alibaba Cloud image registry, so they can be pulled directly without issues. The Ceph cluster must have a pool created in advance for Kubernetes to use.
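The pool-creation step itself is not shown above. A minimal sketch using the standard Ceph tooling, assuming a pool and CephX user both named kubernetes (both names are placeholders you can change):

```shell
# Create a replicated pool for Kubernetes volumes (pool name is an assumption)
ceph osd pool create kubernetes
# Initialize the pool for RBD use
rbd pool init kubernetes
# Create a CephX user for the CSI driver, scoped to the pool (user name is an assumption)
ceph auth get-or-create client.kubernetes \
  mon 'profile rbd' \
  osd 'profile rbd pool=kubernetes' \
  mgr 'profile rbd pool=kubernetes'
```

The key printed by the last command is what goes into userKey below.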
kubectl create namespace ceph
Change clusterID and monitors to match your Ceph cluster.
Change userID and userKey to your Ceph cluster's authentication credentials.
Set clusterID to your Ceph cluster's fsid and pool to the name of your Ceph pool; choose imageFeatures according to your kernel version (usually layering, which needs no change).
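The values above map into a Secret and a StorageClass roughly as follows. This is a sketch based on the upstream ceph-csi RBD examples; the Secret name and the angle-bracketed values are placeholders you must fill in, while csi-rbd-sc and the ceph namespace follow the steps in this document:

```yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: ceph
stringData:
  userID: kubernetes
  userKey: <your cephx key>
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <your ceph fsid>
  pool: <your pool name>
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph
  csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: ceph
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph
reclaimPolicy: Delete
allowVolumeExpansion: true
```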
kubectl -n ceph create -f .
To make this the default StorageClass, run:
kubectl patch storageclass csi-rbd-sc -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
---
apiVersion: v1
kind: Pod
metadata:
  name: csi-rbd-demo-pod
spec:
  containers:
    - name: web-server
      image: docker.io/library/nginx:latest
      volumeMounts:
        - name: mypvc
          mountPath: /var/lib/www/html
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: rbd-pvc
        readOnly: false
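Once both manifests are applied, a quick sanity check that dynamic provisioning worked (a sketch; the namespace and resource names follow the steps above):

```shell
# The PVC should report STATUS Bound once the CSI provisioner has created the RBD image
kubectl -n ceph get pvc rbd-pvc
# The demo pod should reach Running with the volume mounted at /var/lib/www/html
kubectl -n ceph get pod csi-rbd-demo-pod
```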