carina's Introduction

Carina

License FOSSA Status

OpenSSF Best Practices

English | 中文

Background

Storage systems are complex! More and more Kubernetes-native storage systems are appearing, and stateful applications such as modern databases and middleware are moving into the cloud-native world. However, modern databases and their storage providers each try to solve some common problems in their own way; for example, both handle data replication and consistency. This wastes a great deal of capacity and performance and requires extra maintenance effort. Besides that, stateful applications strive to be ever more performant, eliminating every possible source of latency, which is unavoidable with modern distributed storage systems. Enter Carina.

Carina is a standard Kubernetes CSI plugin. Users can use standard Kubernetes storage resources such as StorageClass/PVC/PV to request storage media. The key considerations of Carina include:

  • Workloads need different storage systems. Carina focuses on the cloud-native database scenario only.
  • Completely Kubernetes native and easy to install.
  • Uses local disks and groups them as needed; users can provision different types of disks through different storage classes.
  • Scans physical disks and builds RAID as required. If a disk fails, just plug in a new one and it's done.
  • Node capacity and performance aware, so pods are scheduled more smartly.
  • Extremely low overhead. Carina sits beside the core data path and provides raw disk performance to applications.
  • Auto tiering. Admins can configure Carina to combine large-capacity-but-low-performance disks and small-capacity-but-high-performance disks into one storage class, so users benefit from both capacity and performance.
  • If a node fails, Carina automatically detaches the local volume from its pods so the pods can be rescheduled.
  • Middleware has run on bare metal for decades. There are many valuable optimizations and enhancements that are definitely not outdated even in the cloud-native era. Let Carina be the DBA expert of the storage domain for cloud-native databases!

In short, Carina strives to provide an extremely-low-latency, noOps storage system for cloud-native databases and to be the DBA expert of the storage domain in the cloud-native era!
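To make this concrete, here is a minimal sketch of requesting Carina storage through standard Kubernetes resources. The provisioner name and the carina.storage.io/disk-type parameter follow the examples that appear in the issues further down this page; the class and PVC names and the 10Gi size are placeholders:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-carina-sc
provisioner: carina.storage.io
parameters:
  # disk group to allocate from, e.g. vg_hdd or vg_ssd
  carina.storage.io/disk-type: "vg_hdd"
  csi.storage.k8s.io/fstype: xfs
reclaimPolicy: Delete
allowVolumeExpansion: true
# create the PV only after the consuming pod has been scheduled
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-carina-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: csi-carina-sc
  resources:
    requests:
      storage: 10Gi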

Running Environments

  • Kubernetes: (CSI_VERSION=1.5.0)

  • Node OS: Linux

  • Filesystems: ext4, xfs

  • If kubelet runs in containerized mode, you need to mount the host /dev directory (/dev:/dev)

  • Each node in the cluster has 1..N bare disks; both SSDs and HDDs are supported. (You can run lsblk --output NAME,ROTA to view the disk type: ROTA=1 means HDD, ROTA=0 means SSD.)

  • The capacity of a raw disk must be greater than 10 GB

  • If the server does not support the bcache kernel module, see the FAQ and modify the yaml deployment

Kubernetes compatibility

| kubernetes | v0.9 | v0.9.1 | v0.10 | v0.11.0 | v1.0 |
| --- | --- | --- | --- | --- | --- |
| >=1.18 | support | support | support | support | not released |
| >=1.25 | nonsupport | nonsupport | nonsupport | experimental | not released |

Carina architecture

Carina is built for cloud-native stateful applications that need raw disk performance and ops-free maintenance. Carina scans local disks and classifies them by disk type; for example, one node may have 10 HDDs and 2 SSDs. Carina then groups them into different disk pools, and users can request a specific disk type by using the corresponding storage class. For data HA, Carina currently leverages STORCLI to build RAID groups.
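A sketch of how disks are grouped into pools through the carina-csi-config ConfigMap, based on the configuration shown in the issues further down this page; the group names and device regex patterns are placeholders for your own disk layout:

apiVersion: v1
kind: ConfigMap
metadata:
  name: carina-csi-config
  namespace: kube-system
  labels:
    class: carina
data:
  config.json: |-
    {
      "diskSelector": [
        {
          "name": "carina-vg-ssd",
          "re": ["/dev/vdb", "/dev/vdd"],
          "policy": "LVM",
          "nodeLabel": "kubernetes.io/hostname"
        },
        {
          "name": "carina-raw",
          "re": ["/dev/vdc"],
          "policy": "RAW",
          "nodeLabel": "kubernetes.io/hostname"
        }
      ],
      "diskScanInterval": "300",
      "schedulerStrategy": "spreadout"
    }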

carina-arch

Carina components

It has three components: carina-scheduler, carina-controller and carina-node.

  • carina-scheduler is a Kubernetes scheduler plugin that sorts nodes based on the requested PV size, the node's free disk space and the node's IO performance stats. By default, carina-scheduler supports the binpack and spreadout policies.
  • carina-controller is the control plane of Carina. It watches PVC resources and maintains the internal LogicVolume objects.
  • carina-node is an agent that runs on each node. It manages local disks using LVM.

Features

Quickstart

Install by shell

  • In this deployment mode, the image tag is latest. If you want to deploy a specific version of Carina, you need to change the image address
$ cd deploy/kubernetes
# install (the default installation namespace is kube-system)
$ ./deploy.sh

# uninstall
$ ./deploy.sh uninstall

Install by helm3

  • Supports installing a specific version of Carina
helm repo add carina-csi-driver https://carina-io.github.io

helm search repo -l carina-csi-driver

helm install carina-csi-driver carina-csi-driver/carina-csi-driver --namespace kube-system --version v0.11.0

Upgrading

  • Uninstall the old version with ./deploy.sh uninstall and then install the new version with ./deploy.sh (uninstalling Carina does not affect volumes in use)

Contribution Guide

Blogs

Roadmap

Typical storage providers

|  | NFS/NAS | SAN | Ceph | Carina |
| --- | --- | --- | --- | --- |
| typical usage | general storage | high performance block device | extreme scalability | high performance block device for cloudnative applications |
| filesystem | yes | yes | yes | yes |
| filesystem type | NFS | driver specific | ext4/xfs | ext4/xfs |
| block | no | yes | yes | yes |
| bandwidth | standard | standard | high | high |
| IOPS | standard | high | standard | high |
| latency | standard | low | standard | low |
| CSI support | yes | yes | yes | yes |
| snapshot | no | driver specific | yes | no |
| clone | no | driver specific | yes | not yet, coming soon |
| quota | no | yes | yes | yes |
| resizing | yes | driver specific | yes | yes |
| data HA | RAID or NAS appliance | yes | yes | RAID |
| ease of maintenance | driver specific | multiple drivers for multiple SANs | high maintenance effort | ops-free |
| budget | high for NAS | high | high | low, using the extra disks in the existing kubernetes cluster |
| others | data migrates with pods | data migrates with pods | data migrates with pods | binpack or spreadout scheduling policy; data doesn't migrate with pods; in-place rebuild if pod fails |

FAQ

Similar projects

Known Users

You are welcome to register your company name in ADOPTERS.md

bocloud

Community

  • For wechat users

carina-wx

License

Carina is under the Apache 2.0 license. See the LICENSE file for details.

FOSSA Status

Code of Conduct

Please refer to our Carina Community Code of Conduct

carina's People

Contributors

antmoveh, bocloudofficial, carina-ci-bot, carlji, duanhongyi, fanhaouu, fossabot, guoguodan, hwdef, ninassl, redref, snwyc, wongearl, xichengliudui, zhangkai8048, zhangzhenhua

carina's Issues

create deployment failed

After running ./deploy.sh install, I created a Deployment with the following yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
  labels:
    app: test-web-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-web-server
  template:
    metadata:
      labels:
        app: test-web-server
    spec:
      containers:
        - name: test-web-server
          image: nginx:latest
          imagePullPolicy: "IfNotPresent"

Creating the ReplicaSet fails:

[screenshot]

describe of this rs:

[screenshot]

I'd like to know how to resolve this error. Thanks 🙏

lsblk output inside the container differs from the host

What happened:
Looking at the csi-carina-node logs, I found that the lsblk output inside the pod differs from that on the host. The output itself looks fine, but the logs show errors such as pv creation being attempted on disks that are not empty, failing because those disks have already been initialized.

View from inside the pod

[root@csi-carina-node-qhgnv /]# lsblk --pairs --paths --bytes --all --output NAME,FSTYPE,MOUNTPOINT,SIZE,STATE,TYPE,ROTA,RO,PKNAME
NAME="/dev/nbd3" FSTYPE="" MOUNTPOINT="" SIZE="" STATE="" TYPE="disk" ROTA="0" RO="0" PKNAME=""
NAME="/dev/nbd15" FSTYPE="" MOUNTPOINT="" SIZE="" STATE="" TYPE="disk" ROTA="0" RO="0" PKNAME=""
NAME="/dev/rbd0" FSTYPE="" MOUNTPOINT="/var/lib/kubelet/pods/aab3b273-4521-43bc-805d-4f075622b1d6/volumes/kubernetes.io~csi/pvc-5813f718-2bb3-4a33-839b-68aa665f615f/mount" SIZE="10737418240" STATE="" TYPE="disk" ROTA="0" RO="0" PKNAME=""
NAME="/dev/nbd1" FSTYPE="" MOUNTPOINT="" SIZE="" STATE="" TYPE="disk" ROTA="0" RO="0" PKNAME=""
NAME="/dev/nbd13" FSTYPE="" MOUNTPOINT="" SIZE="" STATE="" TYPE="disk" ROTA="0" RO="0" PKNAME=""
NAME="/dev/vdd" FSTYPE="" MOUNTPOINT="" SIZE="137438953472" STATE="" TYPE="disk" ROTA="1" RO="0" PKNAME=""
NAME="/dev/vdd1" FSTYPE="" MOUNTPOINT="" SIZE="137437904896" STATE="" TYPE="part" ROTA="1" RO="0" PKNAME="/dev/vdd"
NAME="/dev/mapper/vg_hdd-thin--pvc--f9a35054--5652--4e4e--8a0c--5e7252eebe0a_tmeta" FSTYPE="" MOUNTPOINT="" SIZE="4194304" STATE="running" TYPE="lvm" ROTA="1" RO="0" PKNAME="/dev/vdd1"
NAME="/dev/mapper/vg_hdd-thin--pvc--f9a35054--5652--4e4e--8a0c--5e7252eebe0a-tpool" FSTYPE="" MOUNTPOINT="" SIZE="1073741824" STATE="running" TYPE="lvm" ROTA="1" RO="0" PKNAME="/dev/dm-1"
NAME="/dev/mapper/vg_hdd-thin--pvc--f9a35054--5652--4e4e--8a0c--5e7252eebe0a" FSTYPE="" MOUNTPOINT="" SIZE="1073741824" STATE="running" TYPE="lvm" ROTA="1" RO="1" PKNAME="/dev/dm-3"
NAME="/dev/mapper/vg_hdd-volume--pvc--f9a35054--5652--4e4e--8a0c--5e7252eebe0a" FSTYPE="" MOUNTPOINT="/var/lib/kubelet/pods/d5cd3f31-d6db-4d01-82b3-3ee42b647530/volumes/kubernetes.io~csi/pvc-f9a35054-5652-4e4e-8a0c-5e7252eebe0a/mount" SIZE="1073741824" STATE="running" TYPE="lvm" ROTA="1" RO="0" PKNAME="/dev/dm-3"
NAME="/dev/mapper/vg_hdd-thin--pvc--f9a35054--5652--4e4e--8a0c--5e7252eebe0a_tdata" FSTYPE="" MOUNTPOINT="" SIZE="1073741824" STATE="running" TYPE="lvm" ROTA="1" RO="0" PKNAME="/dev/vdd1"
NAME="/dev/mapper/vg_hdd-thin--pvc--f9a35054--5652--4e4e--8a0c--5e7252eebe0a-tpool" FSTYPE="" MOUNTPOINT="" SIZE="1073741824" STATE="running" TYPE="lvm" ROTA="1" RO="0" PKNAME="/dev/dm-2"
NAME="/dev/mapper/vg_hdd-thin--pvc--f9a35054--5652--4e4e--8a0c--5e7252eebe0a" FSTYPE="" MOUNTPOINT="" SIZE="1073741824" STATE="running" TYPE="lvm" ROTA="1" RO="1" PKNAME="/dev/dm-3"
NAME="/dev/mapper/vg_hdd-volume--pvc--f9a35054--5652--4e4e--8a0c--5e7252eebe0a" FSTYPE="" MOUNTPOINT="/var/lib/kubelet/pods/d5cd3f31-d6db-4d01-82b3-3ee42b647530/volumes/kubernetes.io~csi/pvc-f9a35054-5652-4e4e-8a0c-5e7252eebe0a/mount" SIZE="1073741824" STATE="running" TYPE="lvm" ROTA="1" RO="0" PKNAME="/dev/dm-3"
NAME="/dev/mapper/vg_hdd-pvc--8450d57d--8079--460d--a8e9--96c0a290a776" FSTYPE="" MOUNTPOINT="" SIZE="128849018880" STATE="running" TYPE="lvm" ROTA="1" RO="0" PKNAME="/dev/vdd1"
NAME="/dev/nbd11" FSTYPE="" MOUNTPOINT="" SIZE="" STATE="" TYPE="disk" ROTA="0" RO="0" PKNAME=""
NAME="/dev/vdb" FSTYPE="" MOUNTPOINT="" SIZE="214748364800" STATE="" TYPE="disk" ROTA="1" RO="0" PKNAME=""
NAME="/dev/vdb1" FSTYPE="" MOUNTPOINT="/var/lib/kubelet/pods/73fa2701-5844-4109-a362-531081a5d332/volume-subpaths/config/dashboard/1" SIZE="214746267648" STATE="" TYPE="part" ROTA="1" RO="0" PKNAME="/dev/vdb"
NAME="/dev/nbd8" FSTYPE="" MOUNTPOINT="" SIZE="" STATE="" TYPE="disk" ROTA="0" RO="0" PKNAME=""
NAME="/dev/nbd6" FSTYPE="" MOUNTPOINT="" SIZE="" STATE="" TYPE="disk" ROTA="0" RO="0" PKNAME=""
NAME="/dev/nbd4" FSTYPE="" MOUNTPOINT="" SIZE="" STATE="" TYPE="disk" ROTA="0" RO="0" PKNAME=""
NAME="/dev/rbd1" FSTYPE="" MOUNTPOINT="/var/lib/kubelet/pods/54814ea4-44a1-41a7-a880-4058e841144f/volumes/kubernetes.io~csi/pvc-9df1c024-60d6-4537-87c4-13d76f5f1615/mount" SIZE="10737418240" STATE="" TYPE="disk" ROTA="0" RO="0" PKNAME=""
NAME="/dev/loop0" FSTYPE="" MOUNTPOINT="" SIZE="" STATE="" TYPE="loop" ROTA="0" RO="0" PKNAME=""
NAME="/dev/nbd2" FSTYPE="" MOUNTPOINT="" SIZE="" STATE="" TYPE="disk" ROTA="0" RO="0" PKNAME=""
NAME="/dev/nbd14" FSTYPE="" MOUNTPOINT="" SIZE="" STATE="" TYPE="disk" ROTA="0" RO="0" PKNAME=""
NAME="/dev/vde" FSTYPE="" MOUNTPOINT="" SIZE="137438953472" STATE="" TYPE="disk" ROTA="1" RO="0" PKNAME=""
NAME="/dev/vde1" FSTYPE="" MOUNTPOINT="" SIZE="137437904896" STATE="" TYPE="part" ROTA="1" RO="0" PKNAME="/dev/vde"
NAME="/dev/nbd0" FSTYPE="" MOUNTPOINT="" SIZE="" STATE="" TYPE="disk" ROTA="0" RO="0" PKNAME=""
NAME="/dev/nbd12" FSTYPE="" MOUNTPOINT="" SIZE="" STATE="" TYPE="disk" ROTA="0" RO="0" PKNAME=""
NAME="/dev/vdc" FSTYPE="" MOUNTPOINT="" SIZE="1073741824000" STATE="" TYPE="disk" ROTA="1" RO="0" PKNAME=""
NAME="/dev/vdc1" FSTYPE="" MOUNTPOINT="" SIZE="1073739726848" STATE="" TYPE="part" ROTA="1" RO="0" PKNAME="/dev/vdc"
NAME="/dev/nbd9" FSTYPE="" MOUNTPOINT="" SIZE="" STATE="" TYPE="disk" ROTA="0" RO="0" PKNAME=""
NAME="/dev/nbd10" FSTYPE="" MOUNTPOINT="" SIZE="" STATE="" TYPE="disk" ROTA="0" RO="0" PKNAME=""
NAME="/dev/vda" FSTYPE="" MOUNTPOINT="" SIZE="21474836480" STATE="" TYPE="disk" ROTA="1" RO="0" PKNAME=""
NAME="/dev/vda2" FSTYPE="" MOUNTPOINT="/var/log/carina" SIZE="20946354176" STATE="" TYPE="part" ROTA="1" RO="0" PKNAME="/dev/vda"
NAME="/dev/vda1" FSTYPE="" MOUNTPOINT="" SIZE="524288000" STATE="" TYPE="part" ROTA="1" RO="0" PKNAME="/dev/vda"
NAME="/dev/nbd7" FSTYPE="" MOUNTPOINT="" SIZE="" STATE="" TYPE="disk" ROTA="0" RO="0" PKNAME=""
NAME="/dev/nbd5" FSTYPE="" MOUNTPOINT="" SIZE="" STATE="" TYPE="disk" ROTA="0" RO="0" PKNAME=""

View from the host

[root@tj1-test04 ~]# lsblk --pairs --paths --bytes --all --output NAME,FSTYPE,MOUNTPOINT,SIZE,STATE,TYPE,ROTA,RO,PKNAME
NAME="/dev/nbd3" FSTYPE="" MOUNTPOINT="" SIZE="" STATE="" TYPE="disk" ROTA="0" RO="0" PKNAME=""
NAME="/dev/nbd15" FSTYPE="" MOUNTPOINT="" SIZE="" STATE="" TYPE="disk" ROTA="0" RO="0" PKNAME=""
NAME="/dev/nbd1" FSTYPE="" MOUNTPOINT="" SIZE="" STATE="" TYPE="disk" ROTA="0" RO="0" PKNAME=""
NAME="/dev/nbd13" FSTYPE="" MOUNTPOINT="" SIZE="" STATE="" TYPE="disk" ROTA="0" RO="0" PKNAME=""
NAME="/dev/nbd11" FSTYPE="" MOUNTPOINT="" SIZE="" STATE="" TYPE="disk" ROTA="0" RO="0" PKNAME=""
NAME="/dev/vdb" FSTYPE="" MOUNTPOINT="" SIZE="214748364800" STATE="" TYPE="disk" ROTA="1" RO="0" PKNAME=""
NAME="/dev/vdb1" FSTYPE="ext4" MOUNTPOINT="/home" SIZE="214746267648" STATE="" TYPE="part" ROTA="1" RO="0" PKNAME="/dev/vdb"
NAME="/dev/nbd8" FSTYPE="" MOUNTPOINT="" SIZE="" STATE="" TYPE="disk" ROTA="0" RO="0" PKNAME=""
NAME="/dev/nbd6" FSTYPE="" MOUNTPOINT="" SIZE="" STATE="" TYPE="disk" ROTA="0" RO="0" PKNAME=""
NAME="/dev/nbd4" FSTYPE="" MOUNTPOINT="" SIZE="" STATE="" TYPE="disk" ROTA="0" RO="0" PKNAME=""
NAME="/dev/rbd1" FSTYPE="ext4" MOUNTPOINT="/home/kubelet/pods/f2544a37-8136-4c87-95bb-f4feb55c6668/volumes/kubernetes.io~csi/pvc-25997d72-ad5a-4d78-99a9-b6c09bd59d00/mount" SIZE="8589934592" STATE="" TYPE="disk" ROTA="0" RO="0" PKNAME=""
NAME="/dev/nbd2" FSTYPE="" MOUNTPOINT="" SIZE="" STATE="" TYPE="disk" ROTA="0" RO="0" PKNAME=""
NAME="/dev/nbd14" FSTYPE="" MOUNTPOINT="" SIZE="" STATE="" TYPE="disk" ROTA="0" RO="0" PKNAME=""
NAME="/dev/nbd0" FSTYPE="" MOUNTPOINT="" SIZE="" STATE="" TYPE="disk" ROTA="0" RO="0" PKNAME=""
NAME="/dev/nbd12" FSTYPE="" MOUNTPOINT="" SIZE="" STATE="" TYPE="disk" ROTA="0" RO="0" PKNAME=""
NAME="/dev/vdc" FSTYPE="" MOUNTPOINT="" SIZE="1073741824000" STATE="" TYPE="disk" ROTA="1" RO="0" PKNAME=""
NAME="/dev/vdc1" FSTYPE="ext4" MOUNTPOINT="/home/work/ssd1" SIZE="1073739726848" STATE="" TYPE="part" ROTA="1" RO="0" PKNAME="/dev/vdc"
NAME="/dev/nbd9" FSTYPE="" MOUNTPOINT="" SIZE="" STATE="" TYPE="disk" ROTA="0" RO="0" PKNAME=""
NAME="/dev/nbd10" FSTYPE="" MOUNTPOINT="" SIZE="" STATE="" TYPE="disk" ROTA="0" RO="0" PKNAME=""
NAME="/dev/vda" FSTYPE="" MOUNTPOINT="" SIZE="21474836480" STATE="" TYPE="disk" ROTA="1" RO="0" PKNAME=""
NAME="/dev/vda2" FSTYPE="ext4" MOUNTPOINT="/" SIZE="20946354176" STATE="" TYPE="part" ROTA="1" RO="0" PKNAME="/dev/vda"
NAME="/dev/vda1" FSTYPE="ext4" MOUNTPOINT="/boot" SIZE="524288000" STATE="" TYPE="part" ROTA="1" RO="0" PKNAME="/dev/vda"
NAME="/dev/nbd7" FSTYPE="" MOUNTPOINT="" SIZE="" STATE="" TYPE="disk" ROTA="0" RO="0" PKNAME=""
NAME="/dev/nbd5" FSTYPE="" MOUNTPOINT="" SIZE="" STATE="" TYPE="disk" ROTA="0" RO="0" PKNAME=""

For example, the /dev/vdc1 disk appears as an empty disk inside the container, but it actually already has a mount point; normally it should not be auto-discovered and treated as needing initialization.

Initialization log

2022-02-23T14:23:20.860+0800    info    devicemanager/manager.go:309    eligible vg_ssd device /dev/vdc1

This log line shows the disk was discovered as eligible to be added. There are also initialization error logs:

2022-02-23T14:23:20.905+0800    info    devicemanager/manager.go:162    vg:vg_ssd ,pvs:[/dev/vdc1]
2022-02-23T14:23:20.905+0800    info    exec/exec.go:303        Running command: pvs --noheadings --separator=, --units=b --nosuffix --unbuffered --nameprefixes
2022-02-23T14:23:20.948+0800    info    exec/exec.go:303        Running command: pvcreate /dev/vdc1
2022-02-23T14:23:20.991+0800    error   volume/volume.go:363    create pv failed /dev/vdc1
2022-02-23T14:23:20.991+0800    error   devicemanager/manager.go:169    add new disk failed vg: vg_ssd, disk: /dev/vdc1, error: exit status 5

What you expected to happen:
lsblk should not return different results for the same disks; this may lead to unexpected behavior.

How to reproduce it:

Anything else we need to know?:

Environment:

  • CSI Driver version: 0.9.1
  • Kubernetes version (use kubectl version):
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Node has sufficient resources, but scheduling fails

What happened:
Pod scheduling fails.

storageclass.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-carina-sc
provisioner: carina.storage.io
parameters:
  carina.storage.io/disk-type: "vg_hdd"
reclaimPolicy: Delete
allowVolumeExpansion: true
# Immediate creates the PV right after the PVC is created; WaitForFirstConsumer creates the PV only after the consuming pod has been scheduled
volumeBindingMode: WaitForFirstConsumer
mountOptions:

Resource information registered on the node

allocatable:
    carina.storage.io/vg_hdd: "117"
    carina.storage.io/vg_ssd: "117"
    cpu: 7600m
    ephemeral-storage: 196460520Ki
    hugepages-1Gi: "0"
    hugepages-2Mi: "0"
    memory: "31225464193"
    mixer.io/ext-cpu: "6237"
    mixer.io/ext-memory: "0"
    mixer.kubernetes.io/ext-cpu: "5837"
    mixer.kubernetes.io/ext-memory: "9958435896"
    pods: "62"
  capacity:
    carina.storage.io/vg_hdd: "128"
    carina.storage.io/vg_ssd: "128"
    cpu: "8"
    ephemeral-storage: 196460520Ki
    hugepages-1Gi: "0"
    hugepages-2Mi: "0"
    memory: 32637492Ki
    mixer.io/ext-cpu: "6237"
    mixer.io/ext-memory: "0"
    mixer.kubernetes.io/ext-cpu: "5837"
    mixer.kubernetes.io/ext-memory: "9958435896"
    pods: "62"

csi configmap configuration. I don't need auto-discovery here, so no match ("re") patterns are configured.

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: carina-csi-config
  namespace: kube-system
  labels:
    class: carina
data:
  config.json: |-
    {
      "diskSelector": [
        {
          "name": "vg_ssd" ,
          "policy": "LVM",
          "nodeLabel": "kubernetes.io/hostname"
        },
        {
          "name": "vg_hdd",
          "policy": "LVM",
          "nodeLabel": "kubernetes.io/hostname"
        }
      ],
      "diskScanInterval": "300",
      "schedulerStrategy": "spreadout"
    }

Scheduler log

I0222 06:32:04.145738       1 storage-plugins.go:69] filter pod: carina-deployment-b6785745d-29ghc, node: tj1-kubekey-test07.kscn
I0222 06:32:04.145771       1 storage-plugins.go:130] mismatch pod: carina-deployment-b6785745d-29ghc, node: tj1-kubekey-test07.kscn, request: 1, capacity: 0

What is certain is that the node has enough resources.

What you expected to happen:
The pod is scheduled successfully.
How to reproduce it:

Anything else we need to know?:

Environment:

  • CSI Driver version: 0.9.1
  • Kubernetes version (use kubectl version):
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

failed to update the logicvolume's status on the first attempt

What happened:
2022-07-28T20:45:53.638+0800 error controllers/logicvolume_controller.go:98 Operation cannot be fulfilled on logicvolumes.carina.storage.io "pvc-4e7ac7b2-49a1-4511-93c9-ed6f4c321a37": the object has been modified; please apply your changes to the latest version and try again failed to create LV name pvc-4e7ac7b2-49a1-4511-93c9-ed6f4c321a37

What you expected to happen:
it should not rely on retry logic; the logicvolume status should be updated successfully on the first attempt

How to reproduce it:
create pv, you will see it

bring type-based configuration back

Is your feature request related to a problem?/Why is this needed
Currently, Carina splits disk groups based on carina-config, which is manually created by the user. Carina doesn't have a good default setting.

Describe the solution you'd like in detail
We should add a type-based configuration as the default setting. For example, different types of disks go into different groups.

When a node fails and a new PVC is successfully created, the newly created PVC may be deleted when the PV's reclaimPolicy is Retain

What happened:
The node becomes NotReady and the pod is deleted, triggering Carina's failover logic, but after a period of time the newly created PVC is deleted by node_controller.

What you expected to happen:
Newly created PVCs should not be deleted.

How to reproduce it:
1. kubectl create -f https://github.com/carina-io/carina/blob/main/examples/kubernetes/statefulset.yaml

2. The storageclass's reclaimPolicy is Retain, for example:

allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-carina-sc
parameters:
  carina.storage.io/disk-group-name: carina-vg-ssd
  csi.storage.k8s.io/fstype: xfs
provisioner: carina.storage.io
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer

3. Bring down a node that the statefulset's pod runs on

4. Wait for failover; a new PVC will be created, and the old logicvolume and pv will be kept because of the Retain policy

5. Wait about ten minutes; the new PVC will be deleted

Installing with helm does not load the kernel modules dm-snapshot, dm_mirror, dm_thin_pool and bcache by default

What happened:

What you expected to happen:

How to reproduce it:

Anything else we need to know?:

Environment:

  • CSI Driver version: 0.11.0
  • Kubernetes version (use kubectl version): v1.20.0
  • OS (e.g. from /etc/os-release): ubuntu
  • Kernel (e.g. uname -a): 4.19.128-microsoft-standard
  • Install tools: helm
  • Others:
On nodes with a kernel version higher than 3.10, the init container reports an error when loading the kernel modules.

http://www.opencarina.io/ is down

What happened:

What you expected to happen:

How to reproduce it:

Anything else we need to know?:

Environment:

  • CSI Driver version:
  • Kubernetes version (use kubectl version):
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

building users wall

  1. introduce a standard way for users to add their logos to the carina repo via PR
  2. show the users wall in the README

About the Extender Webhook scheduler features

/kind feature
/enhancement

The existing carina scheduler is extended in Framework V2 mode, so the pod's scheduler field in the cluster must be carina-scheduler.
Now let's discuss whether we need to add an Extender Webhook Scheduler:

①: The Extender Webhook Scheduler is not added
②: Add an Extender Webhook Scheduler, and keep the existing scheduler
③: Replace the existing scheduler with the Extender Webhook Scheduler.

add a new parameter to the SC allowing pods with carina volumes to migrate if the host fails

Is your feature request related to a problem?/Why is this needed

Currently, the end user is responsible for allowing pods with carina volumes to migrate to other nodes if the host fails, by adding a specific label. This causes a lot of trouble for end users. For example,

  • users need to understand this design before they can use it.
  • Many pods are created by operators, so users can't inject the label into them.

Describe the solution you'd like in detail

There are two ways out.

1#, add this label to every pod in the carina webhook.
2#, add a new parameter to the carina StorageClass, and have the carina webhook add the label to pods that use this kind of StorageClass.

I prefer 2#, and we can make this parameter default to true (see the sketch below).
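A rough, purely illustrative sketch of option 2#: the parameter below simply reuses the name of the carina.storage.io/allow-pod-migration-if-node-notready annotation mentioned in another issue, and it is a hypothetical StorageClass parameter, not an existing one:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-carina-sc-migratable
provisioner: carina.storage.io
parameters:
  carina.storage.io/disk-group-name: carina-vg-ssd
  # hypothetical parameter for option 2#: when "true", the carina webhook
  # would add the migration label to every pod that uses a PVC of this class
  carina.storage.io/allow-pod-migration-if-node-notready: "true"
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer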

bcache creation logic

How is bcache created? I could not find this logic in the code.
I cannot create bcache successfully in my test environment.

Version 0.10, deployed with helm; bcache was enabled at deployment time.

[root@182 ~]# lsmod | grep bcache
bcache                274432  0
crc64                  16384  1 bcache

carina-node reports an error

 Create with no support type  failed to create LV name pvc-5b074f0d-c0ff-46b5-b0b5-7c658e4980d4
{"level":"error","ts":1654150765.6952772,"logger":"controller.logicvolume","msg":"Reconciler error","reconciler group":"carina.storage.io","reconciler kind":"LogicVolume","name":"pvc-5b074f0d-c0ff-46b5-b0b5-7c658e4980d4","namespace":"default","error":"Create with no support type ","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/workspace/github.com/carina-io/carina/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/workspace/github.com/carina-io/carina/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:227"}

Scheduling multiple pods at the same time

If multiple pods are scheduled at the same time, the scheduler only holds static data. For example:
1. The scheduler sees node A with 2G of available capacity;
2. PodA and PodB each request 2G; both pass the filter phase at the same time, and both are scheduled successfully;
3. In the end, PodA starts successfully, while PodB should fail to start because of insufficient capacity.
So has a second scheduling check been considered?
Something like the second check kubelet performs for resources such as memory.

deploy problem

What happened:
when using gen_webhookca.sh, there is an error
What you expected to happen:
no error

How to reproduce it:
kubectl create secret generic ${secret} \
  --from-file=tls.key="${tmpdir}"/server-key.pem \
  --from-file=tls.crt="${tmpdir}"/server-cert.pem \
  --dry-run=true -o yaml |
  kubectl -n ${namespace} apply -f -
Fix: change --dry-run=client to --dry-run=true
Anything else we need to know?:

Environment:

  • CSI Driver version:
  • Kubernetes version (use kubectl version):
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

nodestorageresource's status is inconsistent with vg/disk situation

What happened:
nodestorageresource's status is inconsistent with vg/disk situation

What you expected to happen:
nodestorageresource's status should be consistent with vg/disk situation

How to reproduce it:
1. In the configmap, add the new devices loop5-7:

{
  "diskSelector": [
    {
      "name": "carina-vg-ssd" ,
      "re": ["/dev/vdb","/dev/vdd","loop0+","loop1+","loop3+","loop4+","loop5+","loop6+"],
      "policy": "LVM",
      "nodeLabel": "kubernetes.io/hostname"
    },
    {
      "name": "carina-raw" ,
      "re": ["/dev/vdc","loop2+","loop7+"],
      "policy": "RAW",
      "nodeLabel": "kubernetes.io/hostname"
    }
  ],
  "diskScanInterval": "300",
  "schedulerStrategy": "spreadout"
}

2. Wait about two minutes, then add the new devices loop5-7:
[root@iZrj97dgwrb4i319c5ec0lZ opt]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 253:0 0 40G 0 disk
└─vda1 253:1 0 40G 0 part /
vdb 253:16 0 10G 0 disk
├─carina--vg--ssd-thin--pvc--36c6a41d--9712--45a6--9a0d--329bd576beb1_tdata 252:1 0 3G 0 lvm
│ └─carina--vg--ssd-thin--pvc--36c6a41d--9712--45a6--9a0d--329bd576beb1-tpool 252:2 0 3G 0 lvm
│ ├─carina--vg--ssd-thin--pvc--36c6a41d--9712--45a6--9a0d--329bd576beb1 252:3 0 3G 1 lvm
│ └─carina--vg--ssd-volume--pvc--36c6a41d--9712--45a6--9a0d--329bd576beb1 252:4 0 3G 0 lvm /var/lib/kubelet/pods/8de7eb1a-53e3-
└─carina--vg--ssd-thin--pvc--17888c0f--5a11--4384--b97d--b54d46cf4b51_tdata 252:6 0 3G 0 lvm
└─carina--vg--ssd-thin--pvc--17888c0f--5a11--4384--b97d--b54d46cf4b51-tpool 252:7 0 3G 0 lvm
├─carina--vg--ssd-thin--pvc--17888c0f--5a11--4384--b97d--b54d46cf4b51 252:8 0 3G 1 lvm
└─carina--vg--ssd-volume--pvc--17888c0f--5a11--4384--b97d--b54d46cf4b51 252:9 0 3G 0 lvm /var/lib/kubelet/pods/cfb07e2b-be66-
vdc 253:32 0 11G 0 disk
└─vdc2 253:34 0 5G 0 part
vdd 253:48 0 11G 0 disk
loop0 7:0 0 5G 0 loop
loop1 7:1 0 15G 0 loop
├─carina--vg--ssd-thin--pvc--36c6a41d--9712--45a6--9a0d--329bd576beb1_tmeta 252:0 0 4M 0 lvm
│ └─carina--vg--ssd-thin--pvc--36c6a41d--9712--45a6--9a0d--329bd576beb1-tpool 252:2 0 3G 0 lvm
│ ├─carina--vg--ssd-thin--pvc--36c6a41d--9712--45a6--9a0d--329bd576beb1 252:3 0 3G 1 lvm
│ └─carina--vg--ssd-volume--pvc--36c6a41d--9712--45a6--9a0d--329bd576beb1 252:4 0 3G 0 lvm /var/lib/kubelet/pods/8de7eb1a-53e3-
└─carina--vg--ssd-thin--pvc--17888c0f--5a11--4384--b97d--b54d46cf4b51_tmeta 252:5 0 4M 0 lvm
└─carina--vg--ssd-thin--pvc--17888c0f--5a11--4384--b97d--b54d46cf4b51-tpool 252:7 0 3G 0 lvm
├─carina--vg--ssd-thin--pvc--17888c0f--5a11--4384--b97d--b54d46cf4b51 252:8 0 3G 1 lvm
└─carina--vg--ssd-volume--pvc--17888c0f--5a11--4384--b97d--b54d46cf4b51 252:9 0 3G 0 lvm /var/lib/kubelet/pods/cfb07e2b-be66-
loop2 7:2 0 15G 0 loop
loop3 7:3 0 15G 0 loop
loop4 7:4 0 15G 0 loop
loop5 7:5 0 15G 0 loop
loop6 7:6 0 15G 0 loop
loop7 7:7 0 15G 0 loop

3. Wait, then run vgs and pvs; devicemanager works fine:
[root@iZrj97dgwrb4i319c5ec0lZ opt]# vgs
VG #PV #LV #SN Attr VSize VFree
carina-vg-ssd 7 4 0 wz--n- 95.97g 89.96g

[root@iZrj97dgwrb4i319c5ec0lZ opt]# pvs
PV VG Fmt Attr PSize PFree
/dev/loop1 carina-vg-ssd lvm2 a-- <15.00g <14.99g
/dev/loop3 carina-vg-ssd lvm2 a-- <15.00g <15.00g
/dev/loop4 carina-vg-ssd lvm2 a-- <15.00g <15.00g
/dev/loop5 carina-vg-ssd lvm2 a-- <15.00g <15.00g
/dev/loop6 carina-vg-ssd lvm2 a-- <15.00g <15.00g
/dev/vdb carina-vg-ssd lvm2 a-- <10.00g 3.99g
/dev/vdd carina-vg-ssd lvm2 a-- <11.00g <11.00g

4. nodestorageresource's status does not include the loop5-7 devices:
disks:

  • name: vdc
    partitions:
    "2":
    last: 10738466815
    name: carina.io/c352dfc41cdd
    number: 2
    start: 5369757696
    path: /dev/vdc
    sectorSize: 512
    size: 11811160064
    udevInfo:
    name: vdc
    properties:
    DEVNAME: /dev/vdc
    DEVPATH: /devices/pci0000:00/0000:00:06.0/virtio3/block/vdc
    DEVTYPE: disk
    MAJOR: "253"
    MINOR: "32"
    SUBSYSTEM: block
    sysPath: /devices/pci0000:00/0000:00:06.0/virtio3/block/vdc
  • name: loop2
    path: /dev/loop2
    sectorSize: 512
    size: 16106127360
    udevInfo:
    name: loop2
    properties:
    DEVNAME: /dev/loop2
    DEVPATH: /devices/virtual/block/loop2
    DEVTYPE: disk
    MAJOR: "7"
    MINOR: "2"
    SUBSYSTEM: block
    sysPath: /devices/virtual/block/loop2
    syncTime: "2022-07-22T09:16:44Z"
    vgGroups:
  • lvCount: 4
    pvCount: 5
    pvName: /dev/loop4
    pvs:
    • pvAttr: a--
      pvFmt: lvm2
      pvFree: 4286578688
      pvName: /dev/vdb
      pvSize: 10733223936
      vgName: carina-vg-ssd
    • pvAttr: a--
      pvFmt: lvm2
      pvFree: 11806965760
      pvName: /dev/vdd
      pvSize: 11806965760
      vgName: carina-vg-ssd
    • pvAttr: a--
      pvFmt: lvm2
      pvFree: 16093544448
      pvName: /dev/loop1
      pvSize: 16101933056
      vgName: carina-vg-ssd
    • pvAttr: a--
      pvFmt: lvm2
      pvFree: 16101933056
      pvName: /dev/loop3
      pvSize: 16101933056
      vgName: carina-vg-ssd
    • pvAttr: a--
      pvFmt: lvm2
      pvFree: 16101933056
      pvName: /dev/loop4
      pvSize: 16101933056
      vgName: carina-vg-ssd
      vgAttr: wz--n-
      vgFree: 64390955008
      vgName: carina-vg-ssd
      vgSize: 70845988864

Pod Local bcache possible?

Is your feature request related to a problem?/Why is this needed
The ability for Carina to provide tiered storage using bcache is very powerful, especially in the context of database operations. However, it currently requires data to reside at the node level rather than leveraging a combination of persistent storage at the pod level and ephemeral NVMe/SSD storage at the node level. This makes it very difficult to move pods to new nodes easily.

Describe the solution you'd like in detail
Would it be possible to construct the bcache volume within a pod so that it would utilize local node ephemeral NVMe/SSD disks but utilize a PV exposed at the pod level? This way, the persistent part of the bcache can move easily with the pod and the cache portion would be discarded and rebuilt once the pod has been rescheduled to a new node.

For example, in a GCP environment we can create a node with a local 375GB NVMe drive. As pods are scheduled to the node, a portion of the 375GB drive is allocated to the pod as a cache device (raw block device) as well as using a PV (raw block device) attached from the GCP persistent volume service. When the pod is initialized, the bcache device is created pod-local using the two attached block devices.

The benefit of this is the data is no longer node bound and the pods can be rescheduled easily to new nodes with their persistent data following. It would also enable resizing of individual PVs without worrying about how much disk space is attached at the node level.
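A minimal sketch, using standard Kubernetes raw block volumes, of how the two devices could be handed to a pod. The PVC names and device paths are hypothetical, and how Carina (or an init step) would assemble them into a pod-local bcache device is exactly the open question of this issue:

apiVersion: v1
kind: Pod
metadata:
  name: bcache-demo
spec:
  containers:
    - name: db
      image: postgres:14
      volumeDevices:
        # raw block devices exposed inside the container; a pod-local bcache
        # would be assembled from these two devices when the pod starts
        - name: cache-dev          # slice of the node's local ephemeral NVMe/SSD
          devicePath: /dev/bcache-cache
        - name: backing-dev        # network-attached persistent volume
          devicePath: /dev/bcache-backing
  volumes:
    - name: cache-dev
      persistentVolumeClaim:
        claimName: local-nvme-cache-pvc     # PVC with volumeMode: Block
    - name: backing-dev
      persistentVolumeClaim:
        claimName: cloud-persistent-pvc     # PVC with volumeMode: Block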

Describe alternatives you've considered

  1. Just sticking with standard network attached PVs. This is not optimal for database operations since having local disk can significantly boost read/write performance.

  2. Try a homegrown version of this local bcache concept using TopoLVM (https://github.com/topolvm/topolvm) and network attached storage PVs.

  3. Also looked at using ZFS ARC but that also requires setting up our own storage layer rather than leveraging GCP, AWS, or Azure managed storage.

Additional context
This would have immediate use for Postgres and Greenplum running in kubernetes. The churn of rebuilding large data drives can be significant for clusters with frequent node terminations (spot instances).

Support multiple VGs for the same storage type

Currently, one vg is automatically created per storage type, and all subsequent lvs are created from that vg. Sometimes workloads with strict isolation requirements need dedicated disks, or for other reasons the raw disks need to be split into multiple vgs. Would it be appropriate to let users configure the number of vgs and the disks assigned to each?

multiple disks IO throttling

Typically, it's enough for each pod to have one carina PV. But in some cases a pod may have multiple carina PVs. Currently, carina can only throttle one disk's IO.

Better to support multiple disks IO throttling.

Handling a failed disk

After all disks have been added to one vg, if one of those disks goes bad,
how should it be handled, and will it affect all pods on the current node?

make test has a bug

What happened:
make test
What you expected to happen:

github.com/carina-io/carina/pkg/devicemanager [github.com/carina-io/carina/pkg/devicemanager.test]

pkg/devicemanager/manager_test.go:94:24: not enough arguments in call to NewDeviceManager
How to reproduce it:
dm := NewDeviceManager("localhost", nil, stopChan)
Anything else we need to know?:

Environment:

  • CSI Driver version:
  • Kubernetes version (use kubectl version):
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

After a node becomes NotReady, the pod stays Pending and the PVC goes to Terminating; after the node recovers, the PVC disappears

What happened:
After the node becomes NotReady, the pod is Pending and the PVC is Terminating; after the node is recovered, the PVC disappears.
What you expected to happen:
After the node becomes NotReady, the pod is Pending and the PVC is Terminating; after the node is recovered, the PVC is not lost and the pod is recreated.
How to reproduce it:
Precondition: the pod does not carry the carina.storage.io/allow-pod-migration-if-node-notready annotation (see the sketch after these steps)
1) Create a PVC and a pod that uses it
2) Mark the node the pod runs on as NotReady
3) Watch the pod and PVC status; after a while the PVC becomes Terminating
4) Recover the node
5) Observe that the original PVC has disappeared
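For reference, a minimal sketch of where the annotation named in the precondition goes on a pod template; the workload itself is a placeholder and this only illustrates the annotation's placement:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example
spec:
  serviceName: example
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      annotations:
        # when present, marks the pod as allowed to migrate after its node
        # becomes NotReady; the precondition above is that it is absent
        carina.storage.io/allow-pod-migration-if-node-notready: "true"
      labels:
        app: example
    spec:
      containers:
        - name: web
          image: nginx:latest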
Anything else we need to know?:
The Carina version is 0.10.0
Environment:

  • CSI Driver version:
  • Kubernetes version (use kubectl version):1.23
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

csi-carina-provisioner fails to deploy

Deployed carina following the official documentation; the component status is as follows (k8s version: 1.19.14):

> kubectl get pods -n kube-system |grep carina
carina-scheduler-c5bc859d4-5ncl5         1/1     Running             13         71m
csi-carina-node-rhxwc                    2/2     Running             1          71m
csi-carina-node-z4kpc                    2/2     Running             0          71m
csi-carina-provisioner-b54f4b965-m96vn   0/4     ContainerCreating   0          55m
csi-carina-provisioner-b54f4b965-pr7bc   0/4     Evicted             0          71m

The csi-carina-provisioner component was not deployed successfully. Describing the two problematic pods separately:

  1. First, the evicted pod
> kubectl describe pod csi-carina-provisioner-b54f4b965-pr7bc -n kube-system
Name:           csi-carina-provisioner-b54f4b965-pr7bc
Namespace:      kube-system
Priority:       0
...... # other fields omitted
Events:
  Type     Reason               Age                 From               Message
  ----     ------               ----                ----               -------
  Normal   Scheduled            72m                 default-scheduler  Successfully assigned kube-system/csi-carina-provisioner-b54f4b965-pr7bc to u20-m1
  Warning  FailedMount          70m                 kubelet            Unable to attach or mount volumes: unmounted volumes=[certs], unattached volumes=[carina-csi-controller-token-6fb75 config certs socket-dir]: timed out waiting for the condition
  Warning  FailedMount          65m                 kubelet            Unable to attach or mount volumes: unmounted volumes=[certs], unattached volumes=[certs socket-dir carina-csi-controller-token-6fb75 config]: timed out waiting for the condition
  Warning  FailedMount          64m (x12 over 72m)  kubelet            MountVolume.SetUp failed for volume "certs" : secret "mutatingwebhook" not found
  Warning  Evicted              64m                 kubelet            The node was low on resource: ephemeral-storage.
  Warning  ExceededGracePeriod  64m                 kubelet            Container runtime did not kill the pod within specified grace period.
  Warning  FailedMount          63m (x2 over 68m)   kubelet            Unable to attach or mount volumes: unmounted volumes=[certs], unattached volumes=[socket-dir carina-csi-controller-token-6fb75 config certs]: timed out waiting for the condition
  Warning  FailedMount          68s (x31 over 62m)  kubelet            MountVolume.SetUp failed for volume "certs" : object "kube-system"/"mutatingwebhook" not registered
  2. Then, the pod stuck in ContainerCreating
> kubectl describe po  csi-carina-provisioner-b54f4b965-m96vn -n kube-system
Name:           csi-carina-provisioner-b54f4b965-m96vn
Namespace:      kube-system
Priority:       0
...... # other fields omitted
Events:
  Type     Reason       Age                 From               Message
  ----     ------       ----                ----               -------
  Normal   Scheduled    56m                 default-scheduler  Successfully assigned kube-system/csi-carina-provisioner-b54f4b965-m96vn to u20-w1
  Warning  FailedMount  43m (x2 over 54m)   kubelet            Unable to attach or mount volumes: unmounted volumes=[certs], unattached volumes=[carina-csi-controller-token-6fb75 config certs socket-dir]: timed out waiting for the condition
  Warning  FailedMount  40m (x2 over 45m)   kubelet            Unable to attach or mount volumes: unmounted volumes=[certs], unattached volumes=[config certs socket-dir carina-csi-controller-token-6fb75]: timed out waiting for the condition
  Warning  FailedMount  20m (x7 over 52m)   kubelet            Unable to attach or mount volumes: unmounted volumes=[certs], unattached volumes=[socket-dir carina-csi-controller-token-6fb75 config certs]: timed out waiting for the condition
  Warning  FailedMount  18m (x3 over 47m)   kubelet            Unable to attach or mount volumes: unmounted volumes=[certs], unattached volumes=[certs socket-dir carina-csi-controller-token-6fb75 config]: timed out waiting for the condition
  Warning  FailedMount  85s (x35 over 56m)  kubelet            MountVolume.SetUp failed for volume "certs" : secret "mutatingwebhook" not found

Comparing the events of the two pods, they share a common error message:

MountVolume.SetUp failed for volume "certs" : secret "mutatingwebhook" not found

I'd like to get a successful deployment so I can try out carina. Please help, thank you.

You can cancel the creation of a thin-pool volume

Is your feature request related to a problem?/Why is this needed
This is a discussion of Carina's storage volume management approach.

Describe the solution you'd like in detail
In early Carina designs, volumes were planned to support snapshots, hence the thin-pool design, where each LV is wrapped by a thin-pool. In fact, the snapshot feature was abandoned in later Carina development.

Describe alternatives you've considered
In performance tests, the performance of a volume wrapped in a thin-pool is lower than that of a plain volume. Do we want to support removing the thin-pool? Note that the original volume management method remains fully compatible even without the thin-pool.

Additional context


Error when running make docker-build

What happened:
=> ERROR [builder 6/6] RUN cd /workspace/github.com/carina-io/carina/cmd/carina-controller && go build -ldflags="-X main.gitCommitID=git rev-parse HEAD" -gcflags '-N -l' -o /tmp/carina-controller . 1.5s

[builder 6/6] RUN cd /workspace/github.com/carina-io/carina/cmd/carina-controller && go build -ldflags="-X main.gitCommitID=git rev-parse HEAD" -gcflags '-N -l' -o /tmp/carina-controller .:
#14 0.990 go: inconsistent vendoring in /workspace/github.com/carina-io/carina:
#14 0.990 github.com/fsnotify/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
#14 0.990 github.com/gogo/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
#14 0.990 github.com/golang/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
#14 0.990 github.com/labstack/echo/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
#14 0.990 github.com/natefinch/[email protected]+incompatible: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
#14 0.990 github.com/onsi/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
#14 0.990 github.com/onsi/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
#14 0.990 github.com/prometheus/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
#14 0.990 github.com/spf13/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
#14 0.990 github.com/spf13/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
#14 0.990 github.com/stretchr/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
#14 0.990 go.uber.org/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
#14 0.990 golang.org/x/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
#14 0.990 golang.org/x/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
#14 0.990 golang.org/x/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
#14 0.990 google.golang.org/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
#14 0.990 google.golang.org/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
#14 0.990 k8s.io/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
#14 0.990 k8s.io/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
#14 0.990 k8s.io/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
#14 0.990 k8s.io/klog/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
#14 0.990 k8s.io/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
#14 0.990 k8s.io/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
#14 0.990 sigs.k8s.io/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
#14 0.990
#14 0.990 To ignore the vendor directory, use -mod=readonly or -mod=mod.
#14 0.990 To sync the vendor directory, run:
#14 0.990 go mod vendor


Dockerfile:14

12 | RUN echo Commit: git log --pretty='%s%b%B' -n 1
13 | RUN cd $WORKSPACE/cmd/carina-node && go build -ldflags="-X main.gitCommitID=git rev-parse HEAD" -gcflags '-N -l' -o /tmp/carina-node .
14 | >>> RUN cd $WORKSPACE/cmd/carina-controller && go build -ldflags="-X main.gitCommitID=git rev-parse HEAD" -gcflags '-N -l' -o /tmp/carina-controller .
15 |
16 | FROM registry.cn-hangzhou.aliyuncs.com/antmoveh/centos-mutilarch-lvm2:runtime-202112

error: failed to solve: process "/bin/sh -c cd $WORKSPACE/cmd/carina-controller && go build -ldflags="-X main.gitCommitID=git rev-parse HEAD" -gcflags '-N -l' -o /tmp/carina-controller ." did not complete successfully: exit code: 1
make: *** [Makefile:73: docker-build] Error 1
What you expected to happen:

How to reproduce it:

ENV GOMODCACHE=$WORKSPACE/vendor

Anything else we need to know?:

Environment:

  • CSI Driver version:
  • Kubernetes version (use kubectl version):
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

The failover feature is very nice, but there are some problems

Our current logic is to force-delete the pod first and then delete the PVC. This logic has two potential problems.

First problem: after the pod is deleted, the workload controller immediately creates a new pod, and this new pod enters the scheduling flow.
If the PVC has not been deleted yet, the PV affinity constraint keeps the new pod in Pending, and since the PVC is in use by that pod, it cannot be deleted successfully, so migration fails. Therefore, the new pod should not be affected by the previous PVC.

Second problem: after the PVC is deleted, we keep trying to create a new PVC, without considering scenarios like the PVC template of a statefulset (after the pod is deleted, the workload automatically recreates the PVC). Therefore, before creating a new PVC, we should first check whether a healthy PVC already exists.

For the first problem, there are two possible improvements:
1. First patch the PVC to remove the pvc-protection finalizer, then delete the PVC, then create the new PVC;
2. First delete the PVC, then delete the pod, after which the PVC will be cleaned up and reclaimed, and finally create the new PVC.

For the second problem, before creating a new PVC, first check whether a healthy PVC already exists.

Support not auto-discovering existing vgs

Is your feature request related to a problem?/Why is this needed

When a node already has existing vgs, support not auto-discovering disks; the existing vg capacity should be reported and PVC allocation completed via configuration. For existing vgs, the disk layout may differ from node to node, and there may be no single regex that matches all of them. So we would like the automatic disk-discovery matching to be something that can be turned off.
Describe the solution you'd like in detail

Could a disk-scan switch be added, so that scanning can be disabled when vgs already exist?
Describe alternatives you've considered

Additional context

PV is not mounted into the container

Using k8s version 1.16; after the controller deployment reported errors, the csi-provisioner sidecar was downgraded to v1.6.1.
carina-scheduler is not used; the default scheduler with a node selector is used to run the carina test demo.
Problem: after the deployment is created successfully and I enter the container, /var/lib/www/html is not mounted.

The deploy file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: carina-deployment
  namespace: carina
  labels:
    app: web-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-server
  template:
    metadata:
      annotations:
        cni: macvlan
      labels:
        app: web-server
    spec:
      nodeSelector:
        kubernetes.io/hostname: 172.23.36.5
      containers:
        - name: web-server
          image: nginx:latest
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: mypvc
              mountPath: /var/lib/www/html
      volumes:
        - name: mypvc
          persistentVolumeClaim:
            claimName: csi-carina-pvc
            readOnly: false

pv, lv and sc in the cluster:
[screenshot]

kubelet log: kubelet.log
carina-node log: carina-node.log

Commands run inside the carina-node container:

root@shylf-t-k8s-node-18:~ # docker exec -it 0e8b504934d8 bash
[root@shylf-t-k8s-node-18 /]# df -h
Filesystem                                                   Size  Used Avail Use% Mounted on
overlay                                                      893G   26G  867G   3% /
udev                                                          63G     0   63G   0% /dev
shm                                                           64M     0   64M   0% /dev/shm
/dev/sdb                                                     893G   26G  867G   3% /csi
tmpfs                                                         13G  1.4G   12G  11% /run/mount
/dev/sda1                                                     47G  7.5G   40G  16% /var/log/carina
tmpfs                                                         63G     0   63G   0% /sys/fs/cgroup
tmpfs                                                         63G   12K   63G   1% /var/lib/kubelet/pods/c6a9e39c-8f82-4738-9629-f4a662fd88bc/volumes/kubernetes.io~secret/default-token-zstfk
tmpfs                                                         63G   12K   63G   1% /var/lib/kubelet/pods/dfde4bea-cd81-4ec0-ae1b-3d0960fb6cc1/volumes/kubernetes.io~secret/default-token-zstfk
tmpfs                                                         63G   12K   63G   1% /var/lib/kubelet/pods/86944153-3c9b-11ec-8561-049fca30d189/volumes/kubernetes.io~secret/default-token-gfsdw
tmpfs                                                         63G   12K   63G   1% /var/lib/kubelet/pods/4f66c584-3e17-11ec-8561-049fca30d189/volumes/kubernetes.io~secret/default-token-dj45d
tmpfs                                                         63G   12K   63G   1% /var/lib/kubelet/pods/89e4ef0e-58b0-4d09-81aa-21019b58f8b4/volumes/kubernetes.io~secret/volcano-controllers-token-5k4kn
tmpfs                                                         63G   12K   63G   1% /var/lib/kubelet/pods/4aed7639-44a3-4db2-9d43-153fba35028a/volumes/kubernetes.io~secret/kruise-daemon-token-khwgc
tmpfs                                                         63G   12K   63G   1% /var/lib/kubelet/pods/4d638e6a-9fa8-4ec7-a010-69798c89bcc4/volumes/kubernetes.io~secret/kubecost-kube-state-metrics-token-czcv2
tmpfs                                                         63G   12K   63G   1% /var/lib/kubelet/pods/98b87ba2-cf4e-45b0-8962-65a4707a5791/volumes/kubernetes.io~secret/kubecost-prometheus-node-exporter-token-4xgxv
tmpfs                                                         63G   12K   63G   1% /var/lib/kubelet/pods/ddc94974-7a4e-4d0d-ae7f-9d6af689dfcd/volumes/kubernetes.io~secret/default-token-zstfk
tmpfs                                                         63G   12K   63G   1% /var/lib/kubelet/pods/8f78b4fb-4092-4630-8b93-ca815561607f/volumes/kubernetes.io~secret/default-token-p54ss
tmpfs                                                         63G     0   63G   0% /var/lib/kubelet/pods/72f19bce-76f6-400e-8f3e-48fe56c28c78/volumes/kubernetes.io~empty-dir/socket-dir
tmpfs                                                         63G  8.0K   63G   1% /var/lib/kubelet/pods/72f19bce-76f6-400e-8f3e-48fe56c28c78/volumes/kubernetes.io~secret/certs
tmpfs                                                         63G   12K   63G   1% /var/lib/kubelet/pods/72f19bce-76f6-400e-8f3e-48fe56c28c78/volumes/kubernetes.io~secret/carina-csi-controller-token-lcrp9
tmpfs                                                         63G   12K   63G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                                                         63G   12K   63G   1% /var/lib/kubelet/pods/9c9efec6-fdcc-4a18-aff5-dfb8979564be/volumes/kubernetes.io~secret/default-token-6lz94
/dev/carina/volume-pvc-c01289bc-5188-436b-90fc-6d32cab20fc1  6.0G   33M  6.0G   1% /data/docker/kubelet/pods/9c9efec6-fdcc-4a18-aff5-dfb8979564be/volumes/kubernetes.io~csi/pvc-c01289bc-5188-436b-90fc-6d32cab20fc1/mount
tmpfs                                                         63G   12K   63G   1% /var/lib/kubelet/pods/d5ddd88d-8e11-4211-8d0f-c1982787d1d9/volumes/kubernetes.io~secret/default-token-zstfk
[root@shylf-t-k8s-node-18 /]# lvs
  LV                                              VG            Attr       LSize  Pool                                          Origin Data%  Meta%  Move Log Cpy%Sync Convert
  mylv                                            carina-vg-hdd -wi-a----- 10.00g
  thin-pvc-c01289bc-5188-436b-90fc-6d32cab20fc1   carina-vg-hdd twi-aotz--  6.00g                                                      99.47  47.85
  volume-pvc-c01289bc-5188-436b-90fc-6d32cab20fc1 carina-vg-hdd Vwi-aotz--  6.00g thin-pvc-c01289bc-5188-436b-90fc-6d32cab20fc1        99.47
[root@shylf-t-k8s-node-18 /]# pvs
  PV                  VG            Fmt  Attr PSize   PFree
  /dev/mapper/loop0p1 carina-vg-hdd lvm2 a--  <19.53g <19.52g
  /dev/mapper/loop1p1 carina-vg-hdd lvm2 a--   24.41g   8.40g
  /dev/mapper/loop2p1 carina-vg-hdd lvm2 a--   29.29g  29.29g
[root@shylf-t-k8s-node-18 /]# vgs
  VG            #PV #LV #SN Attr   VSize  VFree
  carina-vg-hdd   3   3   0 wz--n- 73.23g 57.21g

Inside the nginx container:

root@shylf-t-k8s-node-18:~ # docker exec -it b3a711b9349c bash
root@carina-deployment-bc8959776-vln75:/# df -h
Filesystem      Size  Used Avail Use% Mounted on
overlay         893G   26G  867G   3% /
tmpfs            64M     0   64M   0% /dev
tmpfs            63G     0   63G   0% /sys/fs/cgroup
/dev/sdb        893G   26G  867G   3% /etc/hosts
shm              64M     0   64M   0% /dev/shm
tmpfs            63G   12K   63G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs            63G     0   63G   0% /proc/acpi
tmpfs            63G     0   63G   0% /sys/firmware
root@carina-deployment-bc8959776-vln75:/#

On the host, df -h:

root@shylf-t-k8s-node-18:~ # df -h
Filesystem      Size  Used Avail Use% Mounted on
udev             63G     0   63G   0% /dev
tmpfs            13G  1.4G   12G  11% /run
/dev/sda1        47G  7.5G   40G  16% /
tmpfs            63G  460K   63G   1% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            63G     0   63G   0% /sys/fs/cgroup
/dev/sda3       165G   32G  133G  20% /data
/dev/sdb        893G   26G  867G   3% /data/docker
tmpfs            63G   12K   63G   1% /data/docker/kubelet/pods/c6a9e39c-8f82-4738-9629-f4a662fd88bc/volumes/kubernetes.io~secret/default-token-zstfk
tmpfs            63G   12K   63G   1% /data/docker/kubelet/pods/dfde4bea-cd81-4ec0-ae1b-3d0960fb6cc1/volumes/kubernetes.io~secret/default-token-zstfk
tmpfs            63G   12K   63G   1% /data/docker/kubelet/pods/86944153-3c9b-11ec-8561-049fca30d189/volumes/kubernetes.io~secret/default-token-gfsdw
tmpfs            63G   12K   63G   1% /data/docker/kubelet/pods/4f66c584-3e17-11ec-8561-049fca30d189/volumes/kubernetes.io~secret/default-token-dj45d
tmpfs            63G   12K   63G   1% /data/docker/kubelet/pods/89e4ef0e-58b0-4d09-81aa-21019b58f8b4/volumes/kubernetes.io~secret/volcano-controllers-token-5k4kn
tmpfs            63G   12K   63G   1% /data/docker/kubelet/pods/4aed7639-44a3-4db2-9d43-153fba35028a/volumes/kubernetes.io~secret/kruise-daemon-token-khwgc
tmpfs            63G   12K   63G   1% /data/docker/kubelet/pods/4d638e6a-9fa8-4ec7-a010-69798c89bcc4/volumes/kubernetes.io~secret/kubecost-kube-state-metrics-token-czcv2
tmpfs            63G   12K   63G   1% /data/docker/kubelet/pods/98b87ba2-cf4e-45b0-8962-65a4707a5791/volumes/kubernetes.io~secret/kubecost-prometheus-node-exporter-token-4xgxv
tmpfs            63G   12K   63G   1% /data/docker/kubelet/pods/ddc94974-7a4e-4d0d-ae7f-9d6af689dfcd/volumes/kubernetes.io~secret/default-token-zstfk
tmpfs            63G   12K   63G   1% /data/docker/kubelet/pods/8f78b4fb-4092-4630-8b93-ca815561607f/volumes/kubernetes.io~secret/default-token-p54ss
tmpfs            63G     0   63G   0% /data/docker/kubelet/pods/72f19bce-76f6-400e-8f3e-48fe56c28c78/volumes/kubernetes.io~empty-dir/socket-dir
tmpfs            63G  8.0K   63G   1% /data/docker/kubelet/pods/72f19bce-76f6-400e-8f3e-48fe56c28c78/volumes/kubernetes.io~secret/certs
tmpfs            63G   12K   63G   1% /data/docker/kubelet/pods/72f19bce-76f6-400e-8f3e-48fe56c28c78/volumes/kubernetes.io~secret/carina-csi-controller-token-lcrp9
overlay         893G   26G  867G   3% /data/docker/overlay2/a216d4228c9c3c045c6e4855906444b74ff39a9ee23ec0fb9dd1882aacf2ebf0/merged
overlay         893G   26G  867G   3% /data/docker/overlay2/526b8df3824b74887b7450dfab234ebf109c1bb7e500f866987ce9039269d3d0/merged
shm              64M     0   64M   0% /data/docker/containers/caa29edd3a5b8e138e37bcf91a95adead757dbf460deb3f7b745a7dfc0c93de7/mounts/shm
shm              64M     0   64M   0% /data/docker/containers/064c3b334433a68749cd477d1db14cbeb6104dbade3286ebcfc3ea633701233c/mounts/shm
overlay         893G   26G  867G   3% /data/docker/overlay2/71e132c2b079223cbc62f4a88de131809c4b7f1611fbcc7719abc4bd46654c87/merged
shm              64M     0   64M   0% /data/docker/containers/531d44a6048b2ce7e1d2f6a61604ecdecdb906f38ef90e47594665029a3583a7/mounts/shm
overlay         893G   26G  867G   3% /data/docker/overlay2/19653af8958402eefff0a01b1f8a8c676bfefc9617207e6fe71eba3bda5d1d46/merged
shm              64M     0   64M   0% /data/docker/containers/a86dd2b0d1e169680ec3cef8ba3a357b1ea2766d39a920290e9fdc3a6fca865e/mounts/shm
overlay         893G   26G  867G   3% /data/docker/overlay2/544bda05f222b55d1126e5f51c1c7559f8db819ab02ea4eb633635d408353e84/merged
overlay         893G   26G  867G   3% /data/docker/overlay2/8cc47db16916c53d3324ad4b8fd251036808602fbe837353c5f70e71efa4d2f4/merged
shm              64M     0   64M   0% /data/docker/containers/1542e464b9ffa9488478962415ec61589aef02d02f7ceee381837c943772a4ef/mounts/shm
overlay         893G   26G  867G   3% /data/docker/overlay2/667083a79276e053ab38db18d459596ebe89aea07bf72897e8cd3d9154f2cb0d/merged
overlay         893G   26G  867G   3% /data/docker/overlay2/541d373f498fd0aae9732569dc9ceb3d5edbf395da34153ce31daca5a6637814/merged
shm              64M     0   64M   0% /data/docker/containers/617dfa1afde0d1ca3a0dfe17ea96a27ec0ab8ee2536be344a0f31d5d17a76ae3/mounts/shm
overlay         893G   26G  867G   3% /data/docker/overlay2/16fae5669dcb9f44aee19b42a340acacede6fdb41f610f178f71785a0bab1d6d/merged
shm              64M     0   64M   0% /data/docker/containers/4d7b8c7cb079752cd1c2cfcf5ac3d55997696273fc957e286481b923add98b69/mounts/shm
overlay         893G   26G  867G   3% /data/docker/overlay2/6e4bd0e7003ffc089171f35c347c7e35f5b39e3c81c48740e09caf2f838f6e0b/merged
shm              64M     0   64M   0% /data/docker/containers/385987b7f5071e0119c4e1cd67cff21a48898be2252e2fe063102ec10cee42fc/mounts/shm
overlay         893G   26G  867G   3% /data/docker/overlay2/703de9e6fa8465ab8cc79c7aac019c9e8cb5bf031352b483d92c4061f6afe64b/merged
shm              64M     0   64M   0% /data/docker/containers/78c97664414f17c3d2a4b3b3192681793da4fb47e45f4e192761d30a710ac78d/mounts/shm
overlay         893G   26G  867G   3% /data/docker/overlay2/7a2f7ac15692e724c2da318427ddacc11badd6dee13bc58eac51aa349ac0c1da/merged
overlay         893G   26G  867G   3% /data/docker/overlay2/69d4eaf83bc1c0fde95f0fbfdaaf6293b8166ffec78a86b0f287ef3bd9793b47/merged
shm              64M     0   64M   0% /data/docker/containers/c328c93dfcedad046375a6d5c7ae61c159b4a1ccbfabd6cf84ede72fc3af5b80/mounts/shm
overlay         893G   26G  867G   3% /data/docker/overlay2/2b66361e05449c67b58666579f7bc763012ed1722599cfcc853adeb91b6eeffe/merged
overlay         893G   26G  867G   3% /data/docker/overlay2/98a9216bf26b3f6fb4e01205e69c6a61fa3946c0e0d4a2ee3cd0166e66921bb5/merged
overlay         893G   26G  867G   3% /data/docker/overlay2/55848bcb70e3f7d69e033ff0279848a1dde960a64e37d738d9dbe7899d6c34e2/merged
overlay         893G   26G  867G   3% /data/docker/overlay2/42e90451be6a7ec4dc9345764e8079d3beee8b205402e90d7db09fa02a260f34/merged
overlay         893G   26G  867G   3% /data/docker/overlay2/fb7a968851ca0f2832fbc7d515c5676ffeb52ba8b63d874c46ef29d43f763d82/merged
overlay         893G   26G  867G   3% /data/docker/overlay2/f404fdff1634045c92e58ea95536fbd7427b295881e4a969c94af608e734aa15/merged
overlay         893G   26G  867G   3% /data/docker/overlay2/3e80823a44e5d21f1138528f30a5e7df33af63e8f6b35706a7ae392fecc59db6/merged
tmpfs            13G     0   13G   0% /run/user/0
tmpfs            63G   12K   63G   1% /data/docker/kubelet/pods/5cd34f3e-8f5d-402e-ac45-5129ccc89dea/volumes/kubernetes.io~secret/carina-csi-node-token-mr2fk
overlay         893G   26G  867G   3% /data/docker/overlay2/d7e31342404d08d5fd4676d41ec7aaaf3d9ee5d8f98c1376cad972613c93a0ac/merged
shm              64M     0   64M   0% /data/docker/containers/939cdcc03f8e7b986dbe981eaa895de4d25adc0021a5e81cd144b9438adb85f3/mounts/shm
overlay         893G   26G  867G   3% /data/docker/overlay2/506e0d518ad983e10a29a2aed73707bdea0f40f70c85408fe5a326ed1e87220b/merged
overlay         893G   26G  867G   3% /data/docker/overlay2/3b5df922c0ce360e132b56c70407fe3c49b991c6bf277af05a06a3533ee985a5/merged
overlay         893G   26G  867G   3% /data/docker/overlay2/5125665eab4d1ed3b046f766588d83576c20e36dd32984520b5a0f852e407d3f/merged
tmpfs            63G   12K   63G   1% /data/docker/kubelet/pods/9c9efec6-fdcc-4a18-aff5-dfb8979564be/volumes/kubernetes.io~secret/default-token-6lz94
overlay         893G   26G  867G   3% /data/docker/overlay2/02245acd4ae110d14e805b69ce6fb589d391f9faee669a7659224a6c74c9b30d/merged
shm              64M     0   64M   0% /data/docker/containers/d6299df3d906b1495e81dc09ba54ea05cac467e4b5f87ae2f8edc8e09b31fe65/mounts/shm
overlay         893G   26G  867G   3% /data/docker/overlay2/467f745acd8f320de388690fa330bebf9601570cc199326bde64ba2dd16f0b52/merged
tmpfs            63G   12K   63G   1% /data/docker/kubelet/pods/5525ffff-228f-403c-8eb3-9fa3764f6779/volumes/kubernetes.io~secret/default-token-zstfk
overlay         893G   26G  867G   3% /data/docker/overlay2/fce3315104b4a463a8eeba2c57d418e59d82425bdf935dc44c7af9fd4dc7a017/merged
shm              64M     0   64M   0% /data/docker/containers/ab3ed3e62ad99a7bb4b62312757e7c527f3385e30bb270c80048d164c205a967/mounts/shm
root@shylf-t-k8s-node-18:~ # fdisk -l
Disk /dev/sdb: 893.1 GiB, 958999298048 bytes, 1873045504 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sda: 222.6 GiB, 238999830528 bytes, 466796544 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x0f794366

Device     Boot     Start       End   Sectors   Size Id Type
/dev/sda1  *         2048  97656831  97654784  46.6G 83 Linux
/dev/sda2        97656832 121094143  23437312  11.2G 82 Linux swap / Solaris
/dev/sda3       121094144 466794495 345700352 164.9G 83 Linux


Disk /dev/loop0: 19.5 GiB, 20971520000 bytes, 40960000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop1: 24.4 GiB, 26214400000 bytes, 51200000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop2: 29.3 GiB, 31457280000 bytes, 61440000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/carina--vg--hdd-volume--pvc--c01289bc--5188--436b--90fc--6d32cab20fc1: 6 GiB, 6442450944 bytes, 12582912 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes


Disk /dev/mapper/carina--vg--hdd-mylv: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

About device registration

Is your feature request related to a problem?/Why is this needed
In early carina releases, disks were registered as devices on the Node object, but this was removed after carina v0.10.0 and replaced by the NodeStorageResource CRD.

Describe the solution you'd like in detail
Some community users reported that carina-scheduler cannot be used because their clusters already run multiple schedulers. It would therefore be helpful to keep the ability to register disk capacity on the nodes themselves, so that the default scheduler can still make storage-aware scheduling decisions.

Describe alternatives you've considered

There are two options: ① let device registration coexist with the NodeStorageResource CRD; ② start a separate webhook-based scheduling plugin project.

Additional context
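
As additional context for option ①: a minimal sketch, assuming a hypothetical extended-resource name, of how per-node disk capacity could be published on the Node object so that the default scheduler can account for it through ordinary resource requests. This is an illustration, not carina's actual registration code.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// diskResource is a hypothetical extended-resource name; the key carina
// would actually register may differ.
const diskResource = "carina.storage.io/carina-vg-hdd"

// registerDiskCapacity patches node.status.capacity with an extended resource
// (counted here in GiB) so the default scheduler can filter and score nodes by
// available local disk without needing carina-scheduler.
func registerDiskCapacity(ctx context.Context, cs kubernetes.Interface, nodeName string, gib int64) error {
	patch := []byte(fmt.Sprintf(`{"status":{"capacity":{"%s":"%d"}}}`, diskResource, gib))
	_, err := cs.CoreV1().Nodes().Patch(ctx, nodeName,
		types.MergePatchType, patch, metav1.PatchOptions{}, "status")
	return err
}

Pods would then request this resource under spec.containers[].resources.requests, which the default scheduler already understands.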

Questions about IO throttling

I noticed that the io-throttling code never writes the /sys/fs/cgroup/blkio/tasks file. In my tests on cgroup v1, direct-I/O throttling only takes effect after that file is populated. What was the reasoning behind not setting this file in the project?

const (
	// pod annotation: KubernetesCustomized/BlkIOThrottleReadBPS
	KubernetesCustomized    = "kubernetes.customized"
	BlkIOThrottleReadBPS    = "blkio.throttle.read_bps_device"
	BlkIOThrottleReadIOPS   = "blkio.throttle.read_iops_device"
	BlkIOThrottleWriteBPS   = "blkio.throttle.write_bps_device"
	BlkIOThrottleWriteIOPS  = "blkio.throttle.write_iops_device"
	BlkIOCGroupPath         = "/sys/fs/cgroup/blkio/"
)
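
For reference, a minimal sketch (assuming a hypothetical pod cgroup path and device numbers, not carina's actual code) of how a cgroup v1 blkio limit is applied: write a "major:minor value" entry into the throttle file, then add the target PID to the cgroup's tasks file so that its direct I/O is actually accounted to, and throttled by, that cgroup.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// setBlkioThrottle writes a cgroup v1 blkio limit such as "8:16 10485760"
// (major:minor bytes-per-second) into the given throttle file, then adds the
// target process to the cgroup's tasks file. Without the second write, a
// process that is not a member of the cgroup is not throttled.
func setBlkioThrottle(cgroupDir, throttleFile, devLimit string, pid int) error {
	if err := os.WriteFile(filepath.Join(cgroupDir, throttleFile), []byte(devLimit), 0644); err != nil {
		return err
	}
	// Writing to "cgroup.procs" instead of "tasks" would move the whole
	// thread group rather than a single thread.
	return os.WriteFile(filepath.Join(cgroupDir, "tasks"), []byte(fmt.Sprintf("%d\n", pid)), 0644)
}

For example, setBlkioThrottle("/sys/fs/cgroup/blkio/kubepods/pod<uid>", BlkIOThrottleReadBPS, "8:16 10485760", pid) would cap the process's read bandwidth on device 8:16 at 10 MiB/s (the path and device numbers are placeholders).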

Exception thrown when kubelet terminates the CSI node container

What happened:
{"level":"info","ts":1659103499.6545281,"msg":"Stopping and waiting for non leader election runnables"}
{"level":"info","ts":1659103529.6545928,"msg":"Stopping and waiting for leader election runnables"}
{"level":"info","ts":1659103529.6546292,"msg":"Stopping and waiting for caches"}
{"level":"info","ts":1659103529.654654,"msg":"Stopping and waiting for webhooks"}
{"level":"info","ts":1659103529.6546695,"msg":"Wait completed, proceeding to shutdown the manager"}
{"level":"error","ts":1659103529.654631,"logger":"setup","msg":"problem running manager","error":"failed waiting for all runnables to end within grace period of 30s: context deadline exceeded","stacktrace":"github.com/carina-io/carina/cmd/carina-node/run.subMain\n\t/workspace/github.com/carina-io/carina/cmd/carina-node/run/run.go:163\ngithub.com/carina-io/carina/cmd/carina-node/run.glob..func1\n\t/workspace/github.com/carina-io/carina/cmd/carina-node/run/root.go:48\ngithub.com/spf13/cobra.(*Command).execute\n\t/workspace/github.com/carina-io/carina/vendor/github.com/spf13/cobra/command.go:856\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/workspace/github.com/carina-io/carina/vendor/github.com/spf13/cobra/command.go:974\ngithub.com/spf13/cobra.(*Command).Execute\n\t/workspace/github.com/carina-io/carina/vendor/github.com/spf13/cobra/command.go:902\ngithub.com/carina-io/carina/cmd/carina-node/run.Execute\n\t/workspace/github.com/carina-io/carina/cmd/carina-node/run/root.go:55\nmain.main\n\t/workspace/github.com/carina-io/carina/cmd/carina-node/main.go:29\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:255"}
{"level":"info","ts":1659103529.654709,"logger":"controller.nodestorageresource","msg":"Shutdown signal received, waiting for all workers to finish","reconciler group":"carina.storage.io","reconciler kind":"NodeStorageResource"}
panic: close of closed channel

goroutine 1 [running]:
github.com/carina-io/carina/cmd/carina-node/run.subMain()
/workspace/github.com/carina-io/carina/cmd/carina-node/run/run.go:165 +0xb88
github.com/carina-io/carina/cmd/carina-node/run.glob..func1(0x28b9440, {0x1920d0d, 0x3, 0x3})
/workspace/github.com/carina-io/carina/cmd/carina-node/run/root.go:48 +0x1e
github.com/spf13/cobra.(*Command).execute(0x28b9440, {0xc000138050, 0x3, 0x3})
/workspace/github.com/carina-io/carina/vendor/github.com/spf13/cobra/command.go:856 +0x60e
github.com/spf13/cobra.(*Command).ExecuteC(0x28b9440)
/workspace/github.com/carina-io/carina/vendor/github.com/spf13/cobra/command.go:974 +0x3bc
github.com/spf13/cobra.(*Command).Execute(...)
/workspace/github.com/carina-io/carina/vendor/github.com/spf13/cobra/command.go:902
github.com/carina-io/carina/cmd/carina-node/run.Execute()
/workspace/github.com/carina-io/carina/cmd/carina-node/run/root.go:55 +0x25
main.main()
/workspace/github.com/carina-io/carina/cmd/carina-node/main.go:29 +0x1c

What you expected to happen:
No exception or panic during shutdown.

How to reproduce it:
kubectl delete csinode-pod -n carina
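
A minimal sketch of the failure mode, using hypothetical names (this is not carina's actual shutdown code): two shutdown paths close the same channel, and the second close panics; guarding the close with sync.Once is one common way to make shutdown idempotent.

package main

import "sync"

type shutdown struct {
	done chan struct{}
	once sync.Once
}

// stop closes the done channel exactly once. Without the sync.Once guard,
// calling stop from both the signal handler and the manager's error path
// would reproduce "panic: close of closed channel".
func (s *shutdown) stop() {
	s.once.Do(func() { close(s.done) })
}

func main() {
	s := &shutdown{done: make(chan struct{})}
	s.stop()
	s.stop() // second call is a no-op instead of a panic
}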

Add English docs

  • Default to the English README
  • Distinguish between Chinese and English docs

carina pulls in 3 dependencies with known vulnerabilities

What happened:
carina pulls in three direct dependencies with known vulnerabilities:

Component                      Current version       Minimum fixed version   Dependency type   Vulnerabilities
github.com/dgrijalva/jwt-go    v3.2.0+incompatible   4.0.0-preview1          direct            1
github.com/satori/go.uuid      v1.2.0                                        direct            1
github.com/miekg/dns           v1.0.14               1.1.25                  direct            1

Environment:
version v0.10.0
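
A hedged go.mod sketch of the corresponding upgrades, based only on the minimum fixed versions listed above (no fixed version is given for github.com/satori/go.uuid, so it is left at its current version); the module path is carina's, but the Go version line and the exact pins are illustrative.

module github.com/carina-io/carina

go 1.17 // illustrative; use the project's actual Go version

require (
	// jwt-go v4 lives under a new major-version module path, so imports must
	// change from github.com/dgrijalva/jwt-go to github.com/dgrijalva/jwt-go/v4.
	github.com/dgrijalva/jwt-go/v4 v4.0.0-preview1 // was v3.2.0+incompatible
	github.com/miekg/dns v1.1.25                   // was v1.0.14
	github.com/satori/go.uuid v1.2.0               // no fixed version listed in the report
)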
