openebs / zfs-localpv

Dynamically provision stateful, persistent, node-local volumes and filesystems for Kubernetes, integrated with a backend ZFS data storage stack.

Home Page: https://openebs.io

License: Apache License 2.0

Makefile 1.86% Shell 5.01% Dockerfile 1.08% Go 88.44% Mustache 0.92% Jinja 2.69%
zfs zfsonlinux kubernetes kubernetes-storage csi-driver openebs openebs-storage kubernetes-local-storage kubernetes-local-pv csi

zfs-localpv's Introduction

Welcome to OpenEBS

OpenEBS Welcome Banner

OpenEBS is a modern Block-Mode storage platform, a Hyper-Converged software Storage System and virtual NVMe-oF SAN (vSAN) Fabric that natively integrates into the core of Kubernetes.

Try our Slack channel
If you have questions about using OpenEBS, please use the CNCF Kubernetes OpenEBS Slack channel; it is open for anyone to ask a question.

Important

OpenEBS provides...

  • Stateful, persistent, dynamically provisioned storage volumes for Kubernetes
  • High Performance NVMe-oF & NVMe/RDMA storage transport optimized for All-Flash Solid State storage media
  • Block devices, LVM, ZFS, ext2/ext3/ext4, XFS, BTRFS...and more
  • 100% Cloud-Native K8s declarative storage platform
  • A cluster-wide vSAN block-mode fabric that provides containers/Pods with HA resilient access to storage across the entire cluster.
  • Node-local K8s PVs and n-way Replicated K8s PVs
  • Deployable On-premise & in-cloud: (AWS EC2/EKS, Google GCP/GKE, Azure VM/AKS, Oracle OCI, IBM/RedHat OpenShift, Civo Cloud, Hetzner Cloud... and more)
  • Enterprise Grade data management capabilities such as snapshots, clones, replicated volumes, DiskGroups, Volume Groups, Aggregates, RAID

  • Replicated PV: replicated data volumes (in a cluster-wide vSAN block-mode fabric)
      • Replicated PV Mayastor: Mayastor for High Availability deployments, distributing & replicating volumes across the cluster. Status: Stable, deployable in PROD. In OSS ver: v4.0.1
  • Local PV: non-replicated node-local data volumes (Local PV has multiple variants, see below). In OSS ver: v4.0.1
      • Local PV Hostpath: for integration with a local node hostpath (e.g. /mnt/fs1). Status: Stable, deployable in PROD. In OSS ver: v4.0.1
      • Local PV ZFS: for integration with local ZFS storage deployments. Status: Stable, deployable in PROD. In OSS ver: v4.0.1
      • Local PV LVM2: Local PV LVM for integration with local LVM2 storage deployments. Status: Stable, deployable in PROD. In OSS ver: v4.0.1
      • Local PV Rawfile: for integration with a loop-mounted raw device-file filesystem (release: v0.70). Status: Stable, deployable in PROD, undergoing evaluation & integration. In OSS ver: v4.0.1

Replicated PV Mayastor is optimized for NVMe and SSD Flash storage media, and integrates ultra-modern, cutting-edge, high-performance storage technologies at its core...

☑️   It uses the High performance SPDK storage stack - (SPDK is an open-source NVMe project initiated by INTEL)
☑️   The hyper modern IO_Uring Linux Kernel Async polling-mode I/O Interface - (fastest kernel I/O mode possible)
☑️   Native abilities for RDMA and Zero-Copy I/O
☑️   NVMe-oF TCP Block storage Hyper-converged data fabric
☑️   Block layer volume replication
☑️   Logical volumes and Diskpool based data management
☑️   a Native high performance Blobstore
☑️   Native Block layer Thin provisioning
☑️   Native Block layer Snapshots and Clones

Get in touch with our team.

Vishnu Attur :octocat: @avishnu Admin, Maintainer
Abhinandan Purkait 😎 @Abhinandan-Purkait Maintainer
Niladri Halder 🚀 @niladrih Maintainer
Ed Robinson 🐶 @edrob999   CNCF Primary Liaison
Special Maintainer
Tiago Castro @tiagolobocastro   Admin, Maintainer
David Brace @orville-wright     Admin, Maintainer

Activity dashboard


Current status

Release Support Twitter/X Contrib License status CI Status
Releases Slack channel #openebs Twitter PRs Welcome FOSSA Status CII Best Practices

Read this in 🇩🇪 🇷🇺 🇹🇷 🇺🇦 🇨🇳 🇫🇷 🇧🇷 🇪🇸 🇵🇱 🇰🇷 other languages.

Deployment

  • In-cloud: (AWS EC2/EKS, Google GCP/GKE, Azure VM/AKS, Oracle OCI, IBM/RedHat OpenShift, Civo Cloud, Hetzner Cloud... and more)
  • On-Premise: Bare Metal, virtualized hypervisor infra using VMware ESXi, KVM/QEMU (K8s KubeVirt), Proxmox
  • Deployed as native K8s elements: Deployments, Containers, Services, StatefulSets, CRDs, Sidecars, Jobs and Binaries, all on K8s worker nodes.
  • Runs 100% in K8s userspace, so it's highly portable and runs across many OSes & platforms.

Roadmap (as of June 2024)


Roadmap diagram

QUICKSTART : Installation

NOTE: Depending on which of the 5 storage engines you choose to deploy, there are prerequisites that must be met. See the detailed quickstart docs...


  1. Set up the helm repository.
# helm repo add openebs https://openebs.github.io/openebs
# helm repo update

2a. Install the Full OpenEBS helm chart with default values.

  • This installs ALL OpenEBS Storage Engines* in the openebs namespace and chart name as openebs:
    Local PV Hostpath, Local PV LVM, Local PV ZFS, Replicated Mayastor
# helm install openebs --namespace openebs openebs/openebs --create-namespace

2b. To install the OpenEBS helm chart without the Replicated Mayastor storage engine, use the following command:

# helm install openebs --namespace openebs openebs/openebs --set engines.replicated.mayastor.enabled=false --create-namespace
  3. To view the chart
# helm ls -n openebs

Output:
NAME     NAMESPACE   REVISION  UPDATED                                   STATUS     CHART           APP VERSION
openebs  openebs     1         2024-03-25 09:13:00.903321318 +0000 UTC   deployed   openebs-4.0.1   4.0.1
  4. Verify installation
    • List the pods in the namespace
    • Verify StorageClasses
# kubectl get pods -n openebs

Example Output:
NAME                                              READY   STATUS    RESTARTS   AGE
openebs-agent-core-674f784df5-7szbm               2/2     Running   0          11m
openebs-agent-ha-node-nnkmv                       1/1     Running   0          11m
openebs-agent-ha-node-pvcrr                       1/1     Running   0          11m
openebs-agent-ha-node-rqkkk                       1/1     Running   0          11m
openebs-api-rest-79556897c8-b824j                 1/1     Running   0          11m
openebs-csi-controller-b5c47d49-5t5zd             6/6     Running   0          11m
openebs-csi-node-flq49                            2/2     Running   0          11m
openebs-csi-node-k8d7h                            2/2     Running   0          11m
openebs-csi-node-v7jfh                            2/2     Running   0          11m
openebs-etcd-0                                    1/1     Running   0          11m
openebs-etcd-1                                    1/1     Running   0          11m
openebs-etcd-2                                    1/1     Running   0          11m
openebs-localpv-provisioner-6ddf7c7978-jsstg      1/1     Running   0          3m9s
openebs-lvm-localpv-controller-7b6d6b4665-wfw64   5/5     Running   0          3m9s
openebs-lvm-localpv-node-62lnq                    2/2     Running   0          3m9s
openebs-lvm-localpv-node-lhndx                    2/2     Running   0          3m9s
openebs-lvm-localpv-node-tlcqv                    2/2     Running   0          3m9s
openebs-zfs-localpv-controller-f78f7467c-k7ldb    5/5     Running   0          3m9s
...

# kubectl get sc

This lists the StorageClasses created by the chart.

For more details, please refer to OpenEBS Documentation.

OpenEBS is a CNCF project and DataCore, Inc. is a CNCF Silver member. DataCore supports CNCF extensively and has funded OpenEBS's participation in every KubeCon event since 2020. Our project team is managed under the CNCF Storage Landscape and we contribute to the CNCF CSI and TAG Storage project initiatives. We proudly support CNCF Cloud Native Community Groups initiatives.

For project updates, subscribe to OpenEBS Announcements.
For interacting with other OpenEBS users, subscribe to OpenEBS Users.


Container Storage Interface group Storage Technical Advisory Group     Cloud Native Community Groups

Commercial Offerings

Commercially supported deployments of OpenEBS are available via key companies. (Some provide services, funding, technology, infra, and resources to the OpenEBS project.)

(OpenEBS OSS is a CNCF project. CNCF does not endorse any specific company.)

zfs-localpv's People

Contributors

abhinandan-purkait, ajeetrai7, akhilerm, asquare14, avishnu, dependabot[bot], fossabot, hrudaya21, icedroid, jnels124, kmova, niladrih, nsathyaseelan, orville-wright, pando85, pawanpraka1, prateekpandey14, praveengt, sbidoul, shubham14bajpai, sinhaashish, soniasingla, sonicaj, surajssd, t3hmrman, tathougies, trunet, vaniisgh, vharsh, w3aman


zfs-localpv's Issues

driver registration doesn't work on microk8s snap

Registration is sensibly targeted to /var/lib/kubelet, but the directory actually used by microk8s snap is /var/snap/microk8s/common/var/lib/kubelet. I resolved this by symlinking the actual location to the expected location.
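
A minimal sketch of that workaround, assuming the default microk8s snap layout described above; verify both paths on your node before creating the link:

# Point the path the driver expects at the kubelet directory microk8s actually uses
sudo ln -s /var/snap/microk8s/common/var/lib/kubelet /var/lib/kubelet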

Pod creation problems due to slow mounting/attaching

We have 28 nodes in total (3 masters, 25 workers)
Each node has 48 cores, 252GB ram

~228 zfs volumes (305 including old path volumes)
~212 volumeattachments

    I0409 17:26:53.814862 1 operation_generator.go:359] AttachVolume.Attach succeeded for volume "pvc-f029b12f-92d6-44fa-aa28-33a64f420b34" (UniqueName: "kubernetes.io/csi/zfs.csi.openebs.io^pvc-f029b12f-92d6-44fa-aa28-33a64f420b34") from node "halfling-mexico"

Volumes provisioned by the zfs provider all seem to take more than an hour to complete mounting. We haven't seen any throttling logs from the controller manager.

Happy to provide any additional information.

readme needs to be updated

What steps did you take and what happened:

I tried to apply the zfs-operator file on k8s version 1.15, but the yaml failed to apply, giving this error:

namespace/openebs created
customresourcedefinition.apiextensions.k8s.io/zfsvolumes.zfs.openebs.io created
customresourcedefinition.apiextensions.k8s.io/zfssnapshots.zfs.openebs.io created
error: error validating "https://raw.githubusercontent.com/openebs/zfs-localpv/master/deploy/zfs-operator.yaml": error validating data: ValidationError(CSIDriver.spec): unknown field "volumeLifecycleModes" in io.k8s.api.storage.v1beta1.CSIDriverSpec; if you choose to ignore these errors, turn validation off with --validate=false

What did you expect to happen:

As the readme says, k8s 1.14+ is supported, but it looks like the volumeLifecycleModes field only works on k8s 1.16+.
And
OS support only mentions Ubuntu 18.04, but I could see that CentOS 7/8 support is now available.

So we either have to fix the above error or update the readme accordingly,
say,
k8s: 1.16+ and OS: ubuntu 18.04, 16.04, centos 7/8
OR
fix this issue and k8s: 1.14+ and OS: ubuntu 18.04, 16.04, centos 7/8

Snapshot data is different after restoring the zfs-localpv backup

What happened:

I provisioned a zfs-localpv volume and a busybox application is consuming it. I dumped some data and after that took a volume snapshot. So suppose I took snapshot S1 of data D1.
Then more data was dumped, say D2. Now the volume has D1+D2 data in total.
After that, a backup was created. So it should consist of the total data D1+D2 and one volume snapshot S1 of the D1 data.

When the backup is restored in a different namespace, the application mount point has the total data, and the vol-snapshot S1 is also restored successfully in that namespace.
But when we try to clone that snapshot, the clone volume has the total D1+D2 data, whereas the clone should only contain the data present when the snapshot was taken, i.e. D1. It should not have the data dumped after the snapshot was taken.

It looks like, while taking the backup, the snapshot was backed up with all data up to the backup start time, instead of only the data for which the snapshot was created.

add support to restore in a cluster with different nodes

Right now there is a limitation: we can only restore into a cluster that has nodes with the same names. We can add a ConfigMap with node mappings and use it at restore time to map the nodes.

apiVersion: v1
kind: ConfigMap
metadata:
  # any name can be used; Velero uses the labels (below)
  # to identify it rather than the name
  name: change-pvc-node-selector-config
  # must be in the velero namespace
  namespace: velero
  # the below labels should be used verbatim in your
  # ConfigMap.
  labels:
    # this value-less label identifies the ConfigMap as
    # config for a plugin (i.e. the built-in restore item action plugin)
    velero.io/plugin-config: ""
    # this label identifies the name and kind of plugin
    # that this ConfigMap is for.
    velero.io/change-pvc-node: RestoreItemAction
data:
  # add 1+ key-value pairs here, where the key is the old
  # node name and the value is the new node name.
  <old-node-name>: <new-node-name>

Refactor ZFS-LocalPV projects to use go-module

Description

Go has included support for versioned modules and users are encouraged to migrate to modules from other dependency management systems. It is a relatively easier system for package management and would be nice to have.

Context

Here is the wiki with the details on Go Modules
an official go blog post
and a simple medium article to understand a bit better.
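
A rough sketch of the migration steps, assuming the module path matches the repository import path (not a confirmed migration plan for this project):

# Initialize a go.mod at the repo root and prune/resolve dependencies
go mod init github.com/openebs/zfs-localpv
go mod tidy
# Optionally keep a vendor/ directory if the build still expects one
go mod vendor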

quota is not set for the cloned volume in case of zfs file system

What happened:
Deploy an application with a PVC of 2Gi. After exec'ing into the application, if we do df -h we see the correct claim size on the mount point.

Filesystem                                          Size  Used Avail Use% Mounted on
overlay                                              98G  9.3G   84G  10% /
tmpfs                                                64M     0   64M   0% /dev
tmpfs                                               3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sda1                                            98G  9.3G   84G  10% /etc/hosts
shm                                                  64M     0   64M   0% /dev/shm
test-pool/pvc-6c79f1f9-dc91-408e-b0e1-aa9120dae6b3  2.0G  215M  1.8G  11% /var/lib/mysql
tmpfs                                               3.9G   12K  3.9G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                                               3.9G     0  3.9G   0% /proc/acpi
tmpfs                                               3.9G     0  3.9G   0% /proc/scsi
tmpfs                                               3.9G     0  3.9G   0% /sys/firmware

But when we cloned the volume and used it, exec'ing into the application shows df -h with a different size that is not the clone volume size. In my case it shows the zpool capacity.

root@percona1-7c776f8558-6cskw:/# df -h
Filesystem                                          Size  Used Avail Use% Mounted on
overlay                                              98G  9.3G   84G  10% /
tmpfs                                                64M     0   64M   0% /dev
tmpfs                                               3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sda1                                            98G  9.3G   84G  10% /etc/hosts
shm                                                  64M     0   64M   0% /dev/shm
test-pool/pvc-fe8c7583-9896-4ae8-9e59-0393f0858279  9.7G  215M  9.4G   3% /var/lib/mysql
tmpfs                                               3.9G   12K  3.9G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                                               3.9G     0  3.9G   0% /proc/acpi
tmpfs                                               3.9G     0  3.9G   0% /proc/scsi
tmpfs                                               3.9G     0  3.9G   0% /sys/firmware

Note that:
The above scenario occurs only with the zfs filesystem. We didn't see any size mismatch with ext4/xfs filesystems.
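
Until the provisioner sets it, a hedged manual workaround is to apply the quota on the cloned dataset yourself; the dataset name below is taken from the df output above, and 2G is the original claim size:

# Set the quota on the cloned dataset so df reports the claim size instead of pool capacity
zfs set quota=2G test-pool/pvc-fe8c7583-9896-4ae8-9e59-0393f0858279
zfs get quota test-pool/pvc-fe8c7583-9896-4ae8-9e59-0393f0858279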

volume is not mounting for xfs file-system on CentOS 7 cluster

What steps did you take and what happened:
[A clear and concise description of what the bug is, and what commands you ran.]

Volume is not getting mounted in case of xfs-filesystem on CentOS based cluster, giving this error:

Events:
  Type     Reason       Age                    From                             Message
  ----     ------       ----                   ----                             -------
  Normal   Scheduled    6m48s                  default-scheduler                Successfully assigned default/app-busybox-84869957c7-j4hpc to d2iq-node1.mayalabs.io
  Warning  FailedMount  4m45s                  kubelet, d2iq-node1.mayalabs.io  Unable to mount volumes for pod "app-busybox-84869957c7-j4hpc_default(be9b5315-edd9-40fd-bb64-eeb0f1a18740)": timeout expired waiting for volumes to attach or mount for pod "default"/"app-busybox-84869957c7-j4hpc". list of unmounted volumes=[data-vol]. list of unattached volumes=[data-vol default-token-nf4rn]
  Warning  FailedMount  4m37s (x9 over 6m48s)  kubelet, d2iq-node1.mayalabs.io  MountVolume.SetUp failed for volume "pvc-b8914c43-2a53-49aa-b43f-f6211f029aba" : rpc error: code = Internal desc = rpc error: code = Internal desc = not able to format and mount the zvol

Logs from the node agent pod shows this:

I0703 11:40:21.581453       1 mount_linux.go:454] `fsck` error fsck from util-linux 2.34
fsck.ext2: Bad magic number in super-block while trying to open /dev/zd0
/dev/zd0: 
The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
 or
    e2fsck -b 32768 <device>

and

E0703 11:42:32.350511       1 mount.go:21] zfspv: failed to mount volume /dev/zvol/test/pvc-b8914c43-2a53-49aa-b43f-f6211f029aba [xfs] to /var/lib/kubelet/pods/be9b5315-edd9-40fd-bb64-eeb0f1a18740/volumes/kubernetes.io~csi/pvc-b8914c43-2a53-49aa-b43f-f6211f029aba/mount, error mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t xfs -o defaults /dev/zvol/test/pvc-b8914c43-2a53-49aa-b43f-f6211f029aba /var/lib/kubelet/pods/be9b5315-edd9-40fd-bb64-eeb0f1a18740/volumes/kubernetes.io~csi/pvc-b8914c43-2a53-49aa-b43f-f6211f029aba/mount
Output: mount: /var/lib/kubelet/pods/be9b5315-edd9-40fd-bb64-eeb0f1a18740/volumes/kubernetes.io~csi/pvc-b8914c43-2a53-49aa-b43f-f6211f029aba/mount: wrong fs type, bad option, bad superblock on /dev/zd0, missing codepage or helper program, or other error.

Environment:

  • ZFS-LocalPV version: ci image
  • OS (e.g. from /etc/os-release):
[root@d2iq-master ~]# cat /etc/os-release 
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
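
A couple of hedged diagnostic commands to run on the affected node; they only inspect the zvol and the xfs userland tooling, and assume the /dev/zd0 device from the logs above:

# Check whether the zvol already carries a filesystem signature
blkid /dev/zd0
# Confirm xfs userland tools are present on the node (needed to format xfs)
rpm -q xfsprogs || yum install -y xfsprogs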

Support for deploying to CentOS

Description

The zfs-operator manifest is incompatible with CentOS. After deployment, some hostPath references to libraries fail.

Warning  FailedMount  20m (x5 over 20m)    kubelet, localhost.localdomain  MountVolume.SetUp failed for volume "libzpool" : hostPath type check failed: /lib/libzpool.so.2.0.0 is not a file
  Warning  FailedMount  20m (x5 over 20m)    kubelet, localhost.localdomain  MountVolume.SetUp failed for volume "libnvpair" : hostPath type check failed: /lib/libnvpair.so.1.0.1 is not a file

Context

I use CentOS/RHEL in production not Ubuntu.

Possible Solution

This manifest with modified libraries was sent to me via Slack and is deploying successfully.
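
A hedged way to find the correct library paths on a CentOS/RHEL node before editing the hostPath entries in the operator manifest (library names and versions vary by distro and ZFS release):

# Locate the actual libzpool/libnvpair files shipped by the distro's ZFS packages
ls -l /lib/libzpool.so* /lib64/libzpool.so* /usr/lib64/libzpool.so* 2>/dev/null
ls -l /lib/libnvpair.so* /lib64/libnvpair.so* /usr/lib64/libnvpair.so* 2>/dev/null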

reduce the log noise for ZFS-LocalPV

Describe the problem/challenge you have

ZFS-LocalPV logs every grpc request. The volume stats log gets triggered every minute and pollutes the log.

Describe the solution you'd like

We can remove the unnecessary logs to reduce the noise. One log that is not needed is the GetVolumeStats grpc log, which can be removed.

This is the place where all grpc calls are intercepted:

https://github.com/openebs/zfs-localpv/blob/master/pkg/driver/grpc.go#L49

Here we are printing everything, which is not needed; we get this log every minute:

time="2020-06-19T11:30:28Z" level=info msg="GRPC call: /csi.v1.Node/NodeGetCapabilities"
time="2020-06-19T11:30:28Z" level=info msg="GRPC request: {}"
time="2020-06-19T11:30:28Z" level=info msg="GRPC response: {\"capabilities\":[{\"Type\":{\"Rpc\":{\"type\":2}}},{\"Type\":{\"Rpc\":{\"type\":3}}}]}"
time="2020-06-19T11:30:28Z" level=info msg="GRPC call: /csi.v1.Node/NodeGetVolumeStats"
time="2020-06-19T11:30:28Z" level=info msg="GRPC request: {\"volume_id\":\"pvc-794118b5-143b-49a1-99e8-4c2b80a74776\",\"volume_path\":\"/var/lib/kubelet/pods/3afb1e57-47db-4f22-948c-273fe5c53633/volumes/kubernetes.io~csi/pvc-794118b5-143b-49a1-99e8-4c2b80a74776/mount\"}"
time="2020-06-19T11:30:28Z" level=info msg="GRPC response: {\"usage\":[{\"available\":4294836224,\"total\":4294967296,\"unit\":1,\"used\":131072},{\"available\":8388560,\"total\":8388566,\"unit\":2,\"used\":6}]}"

moving ZFSPV API version to v1

ZFSPV is using v1alpha1 apiversion for the CRDs. Now since we are moving it to beta, we should change the api version to v1 for zfsvolume and zfssnapshot CRDs.

Auto provisioning with k3s

I am trying to use k3s to deploy zfs-localpv, but it seems like every time I try to deploy https://github.com/openebs/zfs-localpv/blob/master/deploy/sample/fio.yaml I have to manually create the dataset on the zpool. I'm not sure if this is due to k3s removing most of the extra bits k8s has in it or not.

I also tried changing the provisioner to zfs-localpv vs zfs.csi.openebs.io and that didn't change anything. If this should work then I can try to debug further with my configuration.

Pod restarts on a loaded setup seem to cause data loss.

I have on the order of 200 ZFS-LocalPV persistent volumes. After restarting the application pods, some of the pods (in my case 4) lost the data that I had dumped at the start.
When I retried the whole scenario I didn't hit this issue at the point where I restarted the pods with parent volumes. I have snapshots for the volumes. When I cloned the volumes and restarted all application pods (consuming parent volumes + cloned volumes), some of the pods using cloned volumes (in my case only one) hit the data-loss issue.

After digging into the setup I found this:
43212 time="2020-04-21T17:01:06Z" level=error msg="zfspv: failed to remove mount path Error: unlinkat /var/lib/kubelet/pods/2f9fe907-53b9-46c0-9b9b-d6bed9467836/volumes/ kubernetes.io~csi/pvc-8120d443-c6a7-4f2a-9240-05deb3e05850/mount: device or resource busy"

It looks like we unmounted the volume while a system umount was still in progress (which we were not aware of), and we proceeded to delete the mount path, which deleted the files there.

Note: if any further info is required, let me know!

clone creation is failing on CentOS when fstype is xfs

The zfspv clone is failing on a CentOS-based cluster when the fstype is xfs,
giving an error like this:

time="2020-07-15T09:10:51Z" level=error msg="zfspv: failed to mount volume /dev/zvol/zfs-test-pool/pvc-05e22f2a-9901-40ba-817d-82a43a922713 [xfs] to /var/lib/kubelet/pods/341b2fca-8430-45d1-9009-375a81588210/volumes/kubernetes.io~csi/pvc-05e22f2a-9901-40ba-817d-82a43a922713/mount, error mount failed: exit status 32\nMounting command: mount\nMounting arguments: -t xfs -o defaults /dev/zvol/zfs-test-pool/pvc-05e22f2a-9901-40ba-817d-82a43a922713 /var/lib/kubelet/pods/341b2fca-8430-45d1-9009-375a81588210/volumes/kubernetes.io~csi/pvc-05e22f2a-9901-40ba-817d-82a43a922713/mount\nOutput: mount: /var/lib/kubelet/pods/341b2fca-8430-45d1-9009-375a81588210/volumes/kubernetes.io~csi/pvc-05e22f2a-9901-40ba-817d-82a43a922713/mount: wrong fs type, bad option, bad superblock on /dev/zd16, missing codepage or helper program, or other error.\n\n"

and from the dmesg log it says :

[ 6421.004650] XFS (zd16): Filesystem has duplicate UUID 3af6518b-8f14-4585-b513-949053e63a4c - can't mount
[ 6421.603229] XFS (zd16): Filesystem has duplicate UUID 3af6518b-8f14-4585-b513-949053e63a4c - can't mount
[ 6422.705286] XFS (zd16): Filesystem has duplicate UUID 3af6518b-8f14-4585-b513-949053e63a4c - can't mount
[ 6424.812780] XFS (zd16): Filesystem has duplicate UUID 3af6518b-8f14-4585-b513-949053e63a4c - can't mount
[ 6428.937560] XFS (zd16): Filesystem has duplicate UUID 3af6518b-8f14-4585-b513-949053e63a4c - can't mount
[ 6437.072385] XFS (zd16): Filesystem has duplicate UUID 3af6518b-8f14-4585-b513-949053e63a4c - can't mount
[ 6453.143135] XFS (zd16): Filesystem has duplicate UUID 3af6518b-8f14-4585-b513-949053e63a4c - can't mount
[ 6485.287202] XFS (zd16): Filesystem has duplicate UUID 3af6518b-8f14-4585-b513-949053e63a4c - can't mount

zfs version: 0.8.0
konvoy cluster on centos 7
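
Two hedged workarounds: regenerate the UUID on the cloned zvol with xfs_admin, or mount with the nouuid option (a nouuid StorageClass mountOption example appears in a later issue below); the device name is the zd16 from the dmesg output above, and /mnt/test is a placeholder mount point:

# Option 1: give the cloned filesystem a fresh UUID so it can be mounted normally
xfs_admin -U generate /dev/zd16
# Option 2: mount while ignoring the duplicate UUID
mount -t xfs -o nouuid /dev/zd16 /mnt/test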

Add golint to check to travis

Describe the problem/challenge you have
Add a golint check to travis to make sure that contributions are compliant with linting standards.

Describe the solution you'd like

  • fix existing go-lint warnings
  • add make golint to the travis.yaml

do not attempt to delete the ZFSVolume CR if there is a snapshot holding it.

ZFSPV allows deleting the volume even if a snapshot exists for it. In this case the node agent keeps trying to delete the volume and fails to do so, since there is a snapshot for the volume, but the PVC and PV get deleted.

Here we can fail the ZFSVolume deletion if there is a snapshot on it. The PVC will still be deleted, and in the background the CSI provisioner will keep trying to delete the volume; we keep failing that request as long as there is a snapshot on it.

Before attempting the delete here :

err = zfs.DeleteVolume(volumeID)

We should check for snapshots and, if any exist, fail the deletion.
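
The manual equivalent of the proposed check, as a hedged sketch; the pool and dataset names are placeholders:

# Refuse the delete if any snapshot still exists for the dataset
if zfs list -H -t snapshot -o name -r <pool>/<pvc-dataset> | grep -q .; then
  echo "snapshot exists, refusing to delete"
else
  zfs destroy <pool>/<pvc-dataset>
fi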

OpenEBS node installation failing with registration error

Hi,

I followed the following article https://openebs.io/blog/openebs-dynamic-volume-provisioning-on-zfs/ to install zfs-localpv in my K8s cluster.

It keeps failing with the following error.

csi-node-driver-registrar I0429 16:06:11.322451       1 main.go:110] Version: v1.2.0-0-g6ef000ae
csi-node-driver-registrar I0429 16:06:11.322531       1 main.go:120] Attempting to open a gRPC connection with: "/plugin/csi.sock"
csi-node-driver-registrar I0429 16:06:11.322545       1 connection.go:151] Connecting to unix:///plugin/csi.sock
csi-node-driver-registrar I0429 16:06:11.322870       1 main.go:127] Calling CSI driver to discover driver name
csi-node-driver-registrar I0429 16:06:11.323255       1 connection.go:180] GRPC call: /csi.v1.Identity/GetPluginInfo
csi-node-driver-registrar I0429 16:06:11.323287       1 connection.go:181] GRPC request: {}
csi-node-driver-registrar I0429 16:06:11.326587       1 connection.go:183] GRPC response: {"name":"zfs.csi.openebs.io","vendor_version":"master-02bc587:04-28-2020"}
csi-node-driver-registrar I0429 16:06:11.327524       1 connection.go:184] GRPC error: <nil>
csi-node-driver-registrar I0429 16:06:11.327544       1 main.go:137] CSI driver name: "zfs.csi.openebs.io"
csi-node-driver-registrar I0429 16:06:11.327615       1 node_register.go:58] Starting Registration Server at: /registration/zfs.csi.openebs.io-reg.sock
csi-node-driver-registrar I0429 16:06:11.327798       1 node_register.go:67] Registration Server started at: /registration/zfs.csi.openebs.io-reg.sock
csi-node-driver-registrar I0429 16:06:11.471563       1 main.go:77] Received GetInfo call: &InfoRequest{}
csi-node-driver-registrar I0429 16:06:12.471906       1 main.go:77] Received GetInfo call: &InfoRequest{}
csi-node-driver-registrar I0429 16:06:12.821947       1 main.go:87] Received NotifyRegistrationStatus call: &RegistrationStatus{PluginRegistered:false,Error:RegisterPlugin error -- plugin registration failed with err: error updating Node object with CSI driver node info: error updating node: timed out waiting for the condition; caused by: detected topology value collision: driver reported "kubernetes.io/hostname":"ip-10-99-90-189.ap-south-1.compute.internal" but existing label is "kubernetes.io/hostname":"ip-10-99-90-189",}
csi-node-driver-registrar E0429 16:06:12.822033       1 main.go:89] Registration process failed with error: RegisterPlugin error -- plugin registration failed with err: error updating Node object with CSI driver node info: error updating node: timed out waiting for the condition; caused by: detected topology value collision: driver reported "kubernetes.io/hostname":"ip-10-99-90-189.ap-south-1.compute.internal" but existing label is "kubernetes.io/hostname":"ip-10-99-90-189", restarting registration container.
csi-node-driver-registrar stream closed

I haven't modified anything. Using the vanilla config

pvc is getting bound even if the zpool name is given wrong

What steps did you take and what happened:
[A clear and concise description of what the bug is, and what commands you ran.]

A storage class was created with a wrong zpool name. Now, if we apply a PVC yaml with this storage class, the PVC gets Bound status.
The kubectl get zv -n openebs output also shows the zpool name, which is the wrong one. I don't have any zpool with that name.

What did you expect to happen:
The controller should verify whether the volume was really created. If not, the PVC should not be bound, since the pool in which the ZFS dataset would be created doesn't exist.

Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]
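
A hedged way to confirm the mismatch by hand on the node the ZFSVolume was scheduled to (the pool name is the one reported by kubectl get zv):

# Compare the pool name recorded on the ZFSVolume CR with the pools that actually exist on the node
kubectl get zv -n openebs
zpool list <poolname-from-zv>
zfs list -r <poolname-from-zv>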

Support R[OW]X for dataset

Describe the problem/challenge you have
[A description of the current limitation/problem/challenge that you are experiencing.]
I cannot mount a ZFS dataset into multiple pods, so I cannot run multiple game servers using Agones: I want to re-use the same volume, e.g. to run multiple instances of a game server with the same set of plugins like Retake, TTT and such.

My current workaround is to generate multiple snapshots and let each future instance take a snapshot as a cloned PVC and stick with it from then on.

Describe the solution you'd like
[A clear and concise description of what you want to happen.]
Allow ZFS dataset to be R[OW]X, and also state this exception in the document.

Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]
The reason to allow datasets only is that a dataset is not an exclusive block device. Obviously, ZVOLs are still forbidden.

I know there could be read-write conflicts, but then how have people run things off NFS so nicely all these years? They have obviously designed their applications for conflict avoidance.

Similarly, we should state that this is dangerous and requires application changes to accommodate it.

move to klog from logrus

Describe the problem/challenge you have

Currently the project uses logrus as the logging package.

Describe the solution you'd like

Start using klog and migrate older logs to klog.

Volume is not able to mount/attach in case of "xfs" file system due to duplicate UUID

What happened:
I provisioned a volume with fstype "xfs" in the storageclass. The application was successfully deployed. When I tried to clone the volume, the PVC gets bound but the application is stuck in ContainerCreating state.
It gives the error below:

  Type     Reason                  Age                   From                     Message
  Warning  FailedScheduling        <unknown>             default-scheduler        Failed to bind volumes: pv "pvc-aeadf930-2f7f-4e66-94d3-bd4366e0550b" node affinity doesn't match node "e2e1-node3": No matching NodeSelectorTerms
  Normal   Scheduled               <unknown>             default-scheduler        Successfully assigned aman/percona1-7c776f8558-l2vxk to e2e1-node1
  Warning  FailedScheduling        <unknown>             default-scheduler        AssumePod failed: pod 300233c3-1de6-42a4-87d2-82bba6c284a3 is in the cache, so can't be assumed
  Normal   SuccessfulAttachVolume  5m17s                 attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-aeadf930-2f7f-4e66-94d3-bd4366e0550b"
  Warning  FailedMount             3m14s                 kubelet, e2e1-node1      Unable to attach or mount volumes: unmounted volumes=[data-vol], unattached volumes=[data-vol default-token-tjqfq]: timed out waiting for the condition
  Warning  FailedMount             62s (x10 over 5m13s)  kubelet, e2e1-node1      MountVolume.SetUp failed for volume "pvc-aeadf930-2f7f-4e66-94d3-bd4366e0550b" : kubernetes.io/csi: mounter.SetupAt failed: rpc error: code = Internal desc = rpc error: code = Internal desc = not able to format and mount the zvol
  Warning  FailedMount             58s                   kubelet, e2e1-node1      Unable to attach or mount volumes: unmounted volumes=[data-vol], unattached volumes=[default-token-tjqfq data-vol]: timed out waiting for the condition

The storage class yaml used is below:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zfs-sc
allowVolumeExpansion: true
parameters:
  volblocksize: "4k"
  compression: "off"
  dedup: "off"
  fstype: "xfs"
  poolname: "test-pool"
provisioner: zfs.csi.openebs.io
volumeBindingMode: WaitForFirstConsumer

Because of the duplicate UUID, the volume mount fails for the xfs filesystem.
To avoid this, the nouuid mountOption can be used, something like this (it worked for me):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv1
allowVolumeExpansion: true
parameters:
  fstype: "xfs"
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io
volumeBindingMode: WaitForFirstConsumer
mountOptions:
  - nouuid

Mismatch between CRs and ZFS volumes on node

What steps did you take and what happened:
Currently running 0.6.1, noticed that the on disk ZFS results didn't match with the results returned from kubectl get zfsvolumes -A

What did you expect to happen:
1:1 matching

The output of the following commands will help us better understand what's going on:
zfs list -t all

local/pv/pvc-086ec32f-cce0-4009-8948-1ab6d4ccb839                                             7.59G   592G     7.59G  /var/lib/kubelet/pods/c6fa7984-2760-47f7-a105-3f2aa079db3e/volumes/kubernetes.io~csi/pvc-086ec32f-cce0-4009-8948-1ab6d4ccb839/mount
local/pv/pvc-370d7921-6a33-4336-ab85-2ddd789c576f                                             4.84M  10.0G     4.84M  /var/lib/kubelet/pods/64222e8a-8690-450b-8bc5-58868d8aa6bd/volumes/kubernetes.io~csi/pvc-370d7921-6a33-4336-ab85-2ddd789c576f/mount
local/pv/pvc-e1275578-6abd-4401-8402-9eb1298cfdb7                                             11.4G  88.6G     11.4G  /var/lib/kubelet/pods/e70a4494-14e7-481a-8d0c-424d20d1d876/volumes/kubernetes.io~csi/pvc-e1275578-6abd-4401-8402-9eb1298cfdb7/mount
local/pv/pvc-e3f70edf-9479-4e0b-bc21-0b00ee426307                                              204K  1024M      204K  /var/lib/kubelet/pods/7cf7f87d-fc90-4c21-a1c8-e7a605cf16bf/volumes/kubernetes.io~csi/pvc-e3f70edf-9479-4e0b-bc21-0b00ee426307/mount
❯ kubectl get zfsvolumes.zfs.openebs.io -A | rg textile
openebs     pvc-04230116-1719-4fc9-87e9-6bcbbb1440ac   local/pv   halfling-textile   8589934592                                 zfs                         112d
openebs     pvc-086ec32f-cce0-4009-8948-1ab6d4ccb839   local/pv   halfling-textile   644245094400                               zfs                         11d
openebs     pvc-370d7921-6a33-4336-ab85-2ddd789c576f   local/pv   halfling-textile   10737418240                                zfs                         43d
openebs     pvc-66cb5b8b-f446-414d-b451-8ddc66a1cdfe   local/pv   halfling-textile   8589934592                                 zfs                         112d
openebs     pvc-e1275578-6abd-4401-8402-9eb1298cfdb7   local/pv   halfling-textile   107374182400                               zfs                         43d
openebs     pvc-e3f70edf-9479-4e0b-bc21-0b00ee426307   local/pv   halfling-textile   1073741824                                 zfs                         29d

ZFSVolumes.yaml: https://gist.github.com/spencergilbert/c4008694b7b19f224a3d7c3df535286d

openebs-zfs-plugin logs:
https://gist.github.com/spencergilbert/21a5b84bbdc08962edbfd18093ff5ba0

Environment:

  • ZFS-LocalPV version: 0.6.1
  • Kubernetes version (use kubectl version): 1.18.6
  • Kubernetes installer & version: rancher 2.4.5
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release): ubuntu 18.04

add events to ZFSVolume CR with success and failure messages.

Describe the problem/challenge you have

When volume creation fails, we have to go to the Node agent and check the logs to find the failure reason.

Describe the solution you'd like

It would be good to add events to the ZFSVolume CR: a success event, and a failure event with the reason.

Code Block

This is the place where the volume gets created; we need to add events to the ZV object. The same needs to be done for snapshots as well.

err = zfs.CreateVolume(zv)

err = zfs.CreateSnapshot(snap)

CreateZFSVolume does not respect fsType=zfs -> dataset

What steps did you take and what happened:
With storageclass

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zfs-dataset
allowVolumeExpansion: true
parameters:
  fsType: "zfs"
provisioner: zfs.csi.openebs.io

the created datasets end up with type volume.

What did you expect to happen:

With fsType set, I'd like to have a dataset.

Anything else you would like to add:
The parameter req.GetParameters()["fstype"] in func CreateZFSVolume(req *csi.CreateVolumeRequest) (string, error) is inconsistent with the CRD. Either it is fstype in the CRD or it is req.GetParameters()["fsType"].
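
A hedged StorageClass sketch that requests a dataset using the lowercase fstype key (the spelling used by the other examples on this page) together with an explicit poolname; the pool name is a placeholder:

# Assumes a pool named "zfspv-pool" exists on the nodes; fstype "zfs" should yield a dataset, not a zvol
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zfs-dataset
allowVolumeExpansion: true
parameters:
  fstype: "zfs"
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io
EOF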

BTRFS support in fsType

Describe the problem/challenge you have
I'm deploying Concourse CI with zfs-localpv and for optimal performance, I would like to use btrfs zvols (alternatives aren't really viable - overlay does not work on top of ZFS, naive is very slow and btrfs loopback device leaks /dev/loop* devices)

Describe the solution you'd like
Adding tools for creating btrfs formatted zvols

Anything else you would like to add:

Environment:
(irrelevant)

Invest on ZFS + NFS combination

As the title says: we could use ZFS and NFS together for a great experience.
Conceptual code:

# # Creates a ZFS dataset that automatically shares a NFS mount on /rpool/openebs/zfs-root
# zfs create -o sharenfs="@10.42.0.0/16,no_root_squash" rpool/openebs/zfs-root
# # Upsert any CIDRs required on the sharenfs options.

Use ZFS dataset instead of ZPOOL

Describe the problem/challenge you have

Hello. I have been exploring the storage solutions available for k8s using ZFS and came across this. It appears that in order to use zfs-localpv, we require a dedicated zpool on all nodes. Our use case is basically a single-node cluster where we are using k3s. It would be great if I could have it confined to a zfs dataset instead of specifying a zpool.

I am not sure if we already have this functionality, in that case can you please point me to the docs ?

Describe the solution you'd like

It would be great if we are able to specify a zfs dataset instead of zpool and then all the magic done by zfs-localpv happens under that zfs dataset. Right now it uses a root dataset where it creates children and manages them.
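
A hedged sketch of what this could look like: if poolname were allowed to be a dataset path rather than a top-level pool (an assumption here, not confirmed by this issue), the StorageClass would simply point at the parent dataset:

# Assumes the parent dataset rpool/k8s already exists; all PVs would be created as children under it
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zfs-under-dataset
parameters:
  fstype: "zfs"
  poolname: "rpool/k8s"
provisioner: zfs.csi.openebs.io
volumeBindingMode: WaitForFirstConsumer
EOF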

Environment:

  • ZFS-LocalPV version
  • Kubernetes version (use kubectl version):
  • Kubernetes installer & version:
    K3s
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):

ZFSVolume resource remains stale in the system after deleting the PVC

What steps did you take and what happened:

  • Deleting a Pending PVC does not clean up the ZFSVolume resource, which remains in Pending state.
devuser@rack2:~$ kubectl get pvc --all-namespaces
No resources found.
devuser@rack2:~$ kubectl get zv -n openebs
NAME                                       ZPOOL   NODE        SIZE         STATUS    FILESYSTEM   AGE
pvc-c35668aa-5415-4efc-9d4f-403d262947b8   aman    k8s-node1   4294967296   Pending   zfs          12m
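
Until the cleanup is automatic, a hedged manual cleanup sketch for the stale resource shown above; confirm nothing still references it first:

# Remove the leftover ZFSVolume CR once the PVC is gone
kubectl delete zv pvc-c35668aa-5415-4efc-9d4f-403d262947b8 -n openebs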

Volumes should not be provisioned on nodes, such as the master node, that are marked NoSchedule

what happened:

  • A zpool was created on all of the worker nodes, where application pods can be scheduled.
  • When the volume was provisioned, the PV was scheduled on the master node, which is generally marked as a NoSchedule node. Because the PV was created on the master node, the application pod remains in Pending state as it doesn't get its volume.
  • FYI, the allowedTopologies field in the storage class spec (a list of nodes where a zpool is present) was intentionally not used, as all the worker nodes have zpools.

what you expected to happen

  • PVs should not be created on master nodes. For that, the ZFS provisioner should be aware of scheduling info such as NoSchedule taints (a workaround is sketched below).
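
A hedged workaround sketch while the provisioner is not taint-aware: restrict the StorageClass to the worker nodes explicitly via allowedTopologies (the pool and node names are placeholders):

cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zfspv-workers-only
parameters:
  fstype: "zfs"
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: kubernetes.io/hostname
    values:
    - worker-node-1
    - worker-node-2
EOF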

Current quay.io/openebs/zfs-driver:ci image is not provisioning on CentOS 8

What steps did you take and what happened:
cleaned up all openebs config in the cluster, removed all zfs-based PVCs, and reinstalled using the README

What did you expect to happen:
expected to be able to run 'zfs' inside openebs-zfs-plugin container as per previous install

The output of the following commands will help us better understand what's going on:

zfs: error while loading shared libraries: libtirpc.so.3: cannot open shared object file: No such file or directory

Anything else you would like to add:
I have tried with the yaml from "Support for deploying to CentOS #119" but this still failed due to changing libs in Centos8
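
A hedged check on the CentOS 8 node: the error above means libtirpc.so.3 cannot be found through the mounted library paths, so confirm which libtirpc the host actually ships before adjusting the manifest:

# See which libtirpc versions exist on the host; CentOS 8 provides libtirpc.so.3 via the libtirpc package
ls -l /usr/lib64/libtirpc.so* 2>/dev/null
rpm -q libtirpc || dnf install -y libtirpc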

Environment:

  • ZFS-LocalPV version
    v0.9.2

  • Kubernetes version (use kubectl version):
    Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-20T12:43:34Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

  • Kubernetes installer & version:
    as above

  • Cloud provider or hardware configuration:

bare metal,
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-node-01 Ready 104d v1.18.3 10.1.1.221 CentOS Linux 8 (Core) 4.18.0-147.8.1.el8_1.x86_64 docker://19.3.12

vm-host.globelock.home Ready master 104d v1.18.3 10.1.1.220 CentOS Linux 8 (Core) 4.18.0-193.14.2.el8_2.x86_64 docker://19.3.12

  • OS (e.g. from /etc/os-release):
    NAME="CentOS Linux"
    VERSION="8 (Core)"
    ID="centos"
    ID_LIKE="rhel fedora"
    VERSION_ID="8"
    PLATFORM_ID="platform:el8"
    PRETTY_NAME="CentOS Linux 8 (Core)"
    ANSI_COLOR="0;31"
    CPE_NAME="cpe:/o:centos:centos:8"
    HOME_URL="https://www.centos.org/"
    BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-8"
CENTOS_MANTISBT_PROJECT_VERSION="8"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="8"

mount point does not exist

openebs-zfs-node openebs-zfs-plugin says:

time="2020-05-13T15:35:14Z" level=info msg="GRPC request: {"target_path":"/var/snap/microk8s/common/var/lib/kubelet/pods/e133fcad-a7dd-459d-8bad-c555faf1170d/volumes/kubernetes.iocsi/pvc-b3f312c4-0f1d-4348-8196-dcb19bec206c/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"openebs.io/poolname":"lg-data","storage.kubernetes.io/csiProvisionerIdentity":"1589384112805-8081-zfs.csi.openebs.io"},"volume_id":"pvc-b3f312c4-0f1d-4348-8196-dcb19bec206c"}"
E0513 15:35:14.275371 1 mount_linux.go:147] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t ext4 -o defaults /dev/zvol/lg-data/pvc-b3f312c4-0f1d-4348-8196-dcb19bec206c /var/snap/microk8s/common/var/lib/kubelet/pods/e133fcad-a7dd-459d-8bad-c555faf1170d/volumes/kubernetes.io~csi/pvc-b3f312c4-0f1d-4348-8196-dcb19bec206c/mount
Output: mount: /var/snap/microk8s/common/var/lib/kubelet/pods/e133fcad-a7dd-459d-8bad-c555faf1170d/volumes/kubernetes.io~csi/pvc-b3f312c4-0f1d-4348-8196-dcb19bec206c/mount: mount point does not exist.
E0513 15:35:14.497336 1 mount_linux.go:147] Mount failed: exit status 32

Environment:

  • ZFS-LocalPV dd059a2
  • Kubernetes 1.18.2
  • Kubernetes microk8s 1.18
  • OS Ubuntu 16.04.6 by way of multipass

Cannot create ZFS volume due to outdated glibc

could not create volume rpool/k3s/openebs/pvc-9c375c67-4858-481e-ace4-83820f7221df cmd [create -o quota=4294967296 -o recordsize=4k -o dedup=off -o compression=on -o mountpoint=none rpool/k3s/openebs/pvc-9c375c67-4858-481e-ace4-83820f7221df] error: zfs: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /lib/libzfs.so.2)\n"
E0321 13:03:40.406946       1 volume.go:252] error syncing

I'm running on the latest Proxmox 6 with pre-populated data

Deletion of a zfspv snapshot gets stuck when the cloned PVC is not in Bound state (i.e. Pending state)

Description

Created a clone from a snapshot of a zfspv volume. But this cloned PVC is in Pending state because the volumeBindingMode is set to WaitForFirstConsumer.
Now if we try to delete the snapshot while the clone PVC is Pending, the deletion task gets stuck. We are not able to delete the snapshot.

Expected Behavior

Snapshot deletion should not get stuck when the PVC is in Pending state.

Current Behavior

Snapshot deletion fails only when the clone PVC is in Pending state. When the clone PVC is Bound and we try to delete the snapshot, it works fine and we are able to delete the snapshot CR.

zfs-localpv installation

I get errors after zfs-localpv installation on ubuntu 18.04.3.

Nov  4 08:41:01 kuber1 containerd[1168]: time="2019-11-04T08:41:01.360357115Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/28b3094c2c5b29608258e41e50438880c06aec988e4457779014c1becd2c6bf4/shim.sock" debug=false pid=6083
Nov  4 08:41:02 kuber1 kubelet[1130]: I1104 08:41:02.186118    1130 operation_generator.go:193] parsed scheme: ""
Nov  4 08:41:02 kuber1 kubelet[1130]: I1104 08:41:02.186191    1130 operation_generator.go:193] scheme "" not registered, fallback to default scheme
Nov  4 08:41:02 kuber1 kubelet[1130]: I1104 08:41:02.186227    1130 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/lib/kubelet/plugins/zfs-localpv/csi.sock 0  <nil>}] <nil>}
Nov  4 08:41:02 kuber1 kubelet[1130]: I1104 08:41:02.186283    1130 clientconn.go:577] ClientConn switching balancer to "pick_first"
Nov  4 08:41:02 kuber1 kubelet[1130]: E1104 08:41:02.190140    1130 goroutinemap.go:150] Operation for "/var/lib/kubelet/plugins/zfs-localpv/csi.sock" failed. No retries permitted until 2019-11-04 08:41:02.69010915 +0000 UTC m=+419.701334758 (durationBeforeRetry 500ms). Error: "RegisterPlugin error -- failed to get plugin info using RPC GetInfo at socket /var/lib/kubelet/plugins/zfs-localpv/csi.sock, err: rpc error: code = Unimplemented desc = unknown service pluginregistration.Registration"
Nov  4 08:41:03 kuber1 kubelet[1130]: I1104 08:41:03.186524    1130 operation_generator.go:193] parsed scheme: ""
Nov  4 08:41:03 kuber1 kubelet[1130]: I1104 08:41:03.187000    1130 operation_generator.go:193] scheme "" not registered, fallback to default scheme
Nov  4 08:41:03 kuber1 kubelet[1130]: I1104 08:41:03.187715    1130 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/lib/kubelet/plugins_registry/zfs-localpv-reg.sock 0  <nil>}] <nil>}
Nov  4 08:41:03 kuber1 kubelet[1130]: I1104 08:41:03.186612    1130 operation_generator.go:193] parsed scheme: ""
Nov  4 08:41:03 kuber1 kubelet[1130]: I1104 08:41:03.187941    1130 clientconn.go:577] ClientConn switching balancer to "pick_first"
Nov  4 08:41:03 kuber1 kubelet[1130]: I1104 08:41:03.188155    1130 operation_generator.go:193] scheme "" not registered, fallback to default scheme
Nov  4 08:41:03 kuber1 kubelet[1130]: I1104 08:41:03.188476    1130 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/lib/kubelet/plugins/zfs-localpv/csi.sock 0  <nil>}] <nil>}
Nov  4 08:41:03 kuber1 kubelet[1130]: I1104 08:41:03.188612    1130 clientconn.go:577] ClientConn switching balancer to "pick_first"
Nov  4 08:41:03 kuber1 kubelet[1130]: I1104 08:41:03.189827    1130 csi_plugin.go:101] kubernetes.io/csi: Trying to validate a new CSI Driver with name: zfs-localpv endpoint: /var/lib/kubelet/plugins/zfs-localpv/csi.sock versions: 1.0.0, foundInDeprecatedDir: false
Nov  4 08:41:03 kuber1 kubelet[1130]: I1104 08:41:03.189889    1130 csi_plugin.go:120] kubernetes.io/csi: Register new plugin with name: zfs-localpv at endpoint: /var/lib/kubelet/plugins/zfs-localpv/csi.sock
Nov  4 08:41:03 kuber1 kubelet[1130]: I1104 08:41:03.190503    1130 clientconn.go:104] parsed scheme: ""
Nov  4 08:41:03 kuber1 kubelet[1130]: I1104 08:41:03.190555    1130 clientconn.go:104] scheme "" not registered, fallback to default scheme
Nov  4 08:41:03 kuber1 kubelet[1130]: I1104 08:41:03.190582    1130 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/lib/kubelet/plugins/zfs-localpv/csi.sock 0  <nil>}] <nil>}
Nov  4 08:41:03 kuber1 kubelet[1130]: I1104 08:41:03.190597    1130 clientconn.go:577] ClientConn switching balancer to "pick_first"
Nov  4 08:41:03 kuber1 kubelet[1130]: E1104 08:41:03.192114    1130 goroutinemap.go:150] Operation for "/var/lib/kubelet/plugins/zfs-localpv/csi.sock" failed. No retries permitted until 2019-11-04 08:41:04.192082873 +0000 UTC m=+421.203308506 (durationBeforeRetry 1s). Error: "RegisterPlugin error -- failed to get plugin info using RPC GetInfo at socket /var/lib/kubelet/plugins/zfs-localpv/csi.sock, err: rpc error: code = Unimplemented desc = unknown service pluginregistration.Registration"

After applying the sc, pvc, and fio.yaml, the pods and PVC stay in Pending state and I am not able to make them run.
