
mayastor-docs's Introduction

Welcome to OpenEBS

OpenEBS Welcome Banner

OpenEBS is a modern Block-Mode storage platform, a Hyper-Converged software Storage System, and a virtual NVMe-oF SAN (vSAN) Fabric that integrates natively into the core of Kubernetes.

Try our Slack channel
If you have questions about using OpenEBS, please use the CNCF Kubernetes OpenEBS Slack channel; it is open for anyone to ask a question.

Important

OpenEBS provides...

  • Stateful, persistent, dynamically provisioned storage volumes for Kubernetes
  • High Performance NVMe-oF & NVMe/RDMA storage transport optimized for All-Flash Solid State storage media
  • Block devices, LVM, ZFS, ext2/ext3/ext4, XFS, BTRFS...and more
  • 100% Cloud-Native K8s declarative storage platform
  • A cluster-wide vSAN block-mode fabric that provides containers/Pods with HA resilient access to storage across the entire cluster.
  • Node-local K8s PVs and n-way replicated K8s PVs
  • Deployable On-premise & in-cloud: (AWS EC2/EKS, Google GCP/GKE, Azure VM/AKS, Oracle OCI, IBM/RedHat OpenShift, Civo Cloud, Hetzner Cloud... and more)
  • Enterprise Grade data management capabilities such as snapshots, clones, replicated volumes, DiskGroups, Volume Groups, Aggregates, RAID

OpenEBS has 2 editions:

1. STANDARD ✔️ > Ready Player 1
2. LEGACY ⚠️ Game Over

Within STANDARD, you have a choice of 2 types of K8s storage services: Replicated PV and Local PV.


| Type | Storage Engine | Type of data services | Status | In OSS ver |
| --- | --- | --- | --- | --- |
| Replicated PV (replicated data volumes in a cluster-wide vSAN block-mode fabric) | Replicated PV Mayastor | High Availability deployments, distributing & replicating volumes across the cluster | Stable, deployable in PROD (Releases) | v4.0.1 |
| Local PV (non-replicated node-local data volumes; Local PV has multiple variants, see below) | Local PV HostPath | Integration with local node hostpath (e.g. /mnt/fs1) | Stable, deployable in PROD (Releases) | v4.0.1 |
| | Local PV ZFS | Integration with local ZFS storage deployments | Stable, deployable in PROD (Releases) | v4.0.1 |
| | Local PV LVM2 | Integration with local LVM2 storage deployments | Stable, deployable in PROD (Releases) | v4.0.1 |
| | Local PV Rawfile | Integration with loop-mounted raw device-file filesystems | Stable, deployable in PROD; undergoing evaluation & integration (release: v0.70) | v4.0.1 |

STANDARD is optimized for NVMe and SSD Flash storage media, and integrates modern, high-performance storage technologies at its core...

☑️   It uses the high-performance SPDK storage stack (SPDK is an open-source NVMe project initiated by Intel)
☑️   The modern io_uring Linux kernel async polling-mode I/O interface (the fastest kernel I/O mode available)
☑️   Native abilities for RDMA and Zero-Copy I/O
☑️   NVMe-oF TCP block storage hyper-converged data fabric
☑️   Block-layer volume replication
☑️   Logical volumes and Diskpool-based data management
☑️   A native high-performance Blobstore
☑️   Native block-layer thin provisioning
☑️   Native block-layer snapshots and clones

Get in touch with our team.

Vishnu Attur :octocat: @avishnu Admin, Maintainer
Abhinandan Purkait 😎 @Abhinandan-Purkait Maintainer
Niladri Halder 🚀 @niladrih Maintainer
Ed Robinson 🐶 @edrob999   CNCF Primary Liaison, Special Maintainer
Tiago Castro @tiagolobocastro   Admin, Maintainer
David Brace @orville-wright     Admin, Maintainer

Activity dashboard


Current status

| Release | Support | Twitter/X | Contrib | License status | CI status |
| --- | --- | --- | --- | --- | --- |
| Releases | Slack channel #openebs | Twitter | PRs Welcome | FOSSA Status | CII Best Practices |

Read this in 🇩🇪 🇷🇺 🇹🇷 🇺🇦 🇨🇳 🇫🇷 🇧🇷 🇪🇸 🇵🇱 🇰🇷 other languages.

Deployment

  • In-cloud: (AWS EC2/EKS, Google GCP/GKE, Azure VM/AKS, Oracle OCI, IBM/RedHat OpenShift, Civo Cloud, Hetzner Cloud... and more)
  • On-Premise: Bare Metal, virtualized hypervisor infra using VMware ESXi, KVM/QEMU (K8s KubeVirt), Proxmox
  • Deployed as native K8s elements: Deployments, Containers, Services, StatefulSets, CRDs, Sidecars, Jobs and Binaries, all on K8s worker nodes.
  • Runs 100% in K8s userspace, so it's highly portable and runs across many OSes & platforms.

Roadmap (as of June 2024)



QUICKSTART : Installation

NOTE: Depending on which of the 5 storage engines you choose to deploy, certain prerequisites must be met. See the detailed quickstart docs...


  1. Set up the helm repository.
# helm repo add openebs https://openebs.github.io/openebs
# helm repo update

2a. Install the Full OpenEBS helm chart with default values.

  • This installs ALL OpenEBS Storage Engines* in the openebs namespace, with the chart name openebs:
    Local PV Hostpath, Local PV LVM, Local PV ZFS, Replicated Mayastor
# helm install openebs --namespace openebs openebs/openebs --create-namespace

2b. To install OpenEBS without the Replicated Mayastor storage engine, use the following command:

# helm install openebs --namespace openebs openebs/openebs --set engines.replicated.mayastor.enabled=false --create-namespace
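Conversely, a minimal sketch for a Mayastor-focused install, assuming the umbrella chart also exposes engines.local.lvm.enabled and engines.local.zfs.enabled toggles that mirror the replicated flag above:

# helm install openebs --namespace openebs openebs/openebs --set engines.local.lvm.enabled=false --set engines.local.zfs.enabled=false --create-namespace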
  3. To view the chart:
# helm ls -n openebs

Output:
NAME     NAMESPACE   REVISION  UPDATED                                   STATUS     CHART           APP VERSION
openebs  openebs     1         2024-03-25 09:13:00.903321318 +0000 UTC   deployed   openebs-4.0.1   4.0.1
  4. Verify the installation:
    • List the pods in the namespace
    • Verify the StorageClasses
# kubectl get pods -n openebs

Example Output:
NAME                                              READY   STATUS    RESTARTS   AGE
openebs-agent-core-674f784df5-7szbm               2/2     Running   0          11m
openebs-agent-ha-node-nnkmv                       1/1     Running   0          11m
openebs-agent-ha-node-pvcrr                       1/1     Running   0          11m
openebs-agent-ha-node-rqkkk                       1/1     Running   0          11m
openebs-api-rest-79556897c8-b824j                 1/1     Running   0          11m
openebs-csi-controller-b5c47d49-5t5zd             6/6     Running   0          11m
openebs-csi-node-flq49                            2/2     Running   0          11m
openebs-csi-node-k8d7h                            2/2     Running   0          11m
openebs-csi-node-v7jfh                            2/2     Running   0          11m
openebs-etcd-0                                    1/1     Running   0          11m
openebs-etcd-1                                    1/1     Running   0          11m
openebs-etcd-2                                    1/1     Running   0          11m
openebs-localpv-provisioner-6ddf7c7978-jsstg      1/1     Running   0          3m9s
openebs-lvm-localpv-controller-7b6d6b4665-wfw64   5/5     Running   0          3m9s
openebs-lvm-localpv-node-62lnq                    2/2     Running   0          3m9s
openebs-lvm-localpv-node-lhndx                    2/2     Running   0          3m9s
openebs-lvm-localpv-node-tlcqv                    2/2     Running   0          3m9s
openebs-zfs-localpv-controller-f78f7467c-k7ldb    5/5     Running   0          3m9s
...
# kubectl get sc

Example Output (illustrative; the exact StorageClasses depend on which engines are enabled):
NAME                     PROVISIONER               RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION
openebs-hostpath         openebs.io/local          Delete          WaitForFirstConsumer   false
openebs-single-replica   io.openebs.csi-mayastor   Delete          Immediate              true
...
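As a quick smoke test, here is a minimal sketch of a PVC bound to the default openebs-hostpath class (the PVC name is illustrative, not from the docs):

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: openebs-test-pvc        # hypothetical name for this smoke test
spec:
  storageClassName: openebs-hostpath
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF

The claim should reach Bound once a consuming Pod is scheduled, since openebs-hostpath uses WaitForFirstConsumer binding.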

For more details, please refer to OpenEBS Documentation.

OpenEBS is a CNCF project and DataCore, Inc. is a CNCF Silver member. DataCore supports CNCF extensively and has funded OpenEBS's participation in every KubeCon event since 2020. Our project team is managed under the CNCF Storage Landscape, and we contribute to the CNCF CSI and TAG Storage project initiatives. We proudly support CNCF Cloud Native Community Groups initiatives.

For project updates, subscribe to OpenEBS Announcements.
To interact with other OpenEBS users, subscribe to OpenEBS Users.


Container Storage Interface group · Storage Technical Advisory Group · Cloud Native Community Groups

Commercial Offerings

Commercially supported deployments of OpenEBS are available via key companies. (Some provide services, funding, technology, infra, or resources to the OpenEBS project.)

(OpenEBS OSS is a CNCF project. CNCF does not endorse any specific company.)

mayastor-docs's People

Contributors

abhinandan-purkait, ajdatacore, anupriya0703, arne-rusek, avishnu, balaharish7, cmontemuino, datacore-gthomas, dyasny, geier, gila, glennbullingham, karanssj4, niladrih, payes, peuh, reitermarkus, tiagolobocastro, vikgaur, zimmertr


mayastor-docs's Issues

microk8s install instructions are incorrect

Currently, the following command is given:

helm install mayastor mayastor/mayastor -n mayastor --create-namespace --set values.csi.node.kubeletDir="/var/snap/microk8s/common/var/lib/kubelet"
But values.csi.node.kubeletDir is wrong; it should be csi.node.kubeletDir, i.e. the correct command is:

helm install mayastor mayastor/mayastor -n mayastor --create-namespace --set csi.node.kubeletDir="/var/snap/microk8s/common/var/lib/kubelet"

NVMe NQN addressing documentation is unclear

In the Mayastor docs on https://mayastor.gitbook.io/introduction/quickstart/configure-mayastor#what-is-a-mayastor-pool-msp we list the nvme addressing scheme.

The example given is: nvme://nqn.2014-08.com.vendor:nvme:nvm-subsystem-sn-d7843

The nvme tool reports in this syntax.

$ sudo nvme list
Node             SN                   Model                                    Namespace Usage                      Format           FW Rev  
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1     S462NF0MA01130H      Samsung SSD 970 PRO 1TB                  1         843.50  GB /   1.02  TB    512   B +  0 B   1B2QEXP7
/dev/nvme1n1     PHKS750500HT375AGN   INTEL SSDPED1K375GA                      1         375.08  GB / 375.08  GB    512   B +  0 B   E2010324

$ sudo nvme list-subsys
nvme-subsys0 - NQN=nqn.2014.08.org.nvmexpress:144d144dS462NF0MA01130H     Samsung SSD 970 PRO 1TB                 
\
 +- nvme0 pcie 0000:01:00.0 live 
nvme-subsys1 - NQN=nqn.2014.08.org.nvmexpress:80868086PHKS750500HT375AGN  INTEL SSDPED1K375GA                     
\
 +- nvme1 pcie 0000:04:00.0 live 

We should consider providing an example detailing how to build this syntax up from commonly available tools.
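For example, a hypothetical sketch of building the URI from nvme-cli output (the variable name and grep pattern are illustrative and assume GNU grep with PCRE support):

# Extract the first subsystem NQN reported by nvme-cli and wrap it in the
# nvme:// scheme shown in the docs (illustrative only).
SUBNQN=$(sudo nvme list-subsys | grep -oP 'NQN=\K\S+' | head -n 1)
echo "nvme://${SUBNQN}"
# e.g. nvme://nqn.2014.08.org.nvmexpress:144d144dS462NF0MA01130H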

documentation error

On "Deploy Mayastor" the very last command:

kubectl mayastor get nodes

should read

kubectl -n mayastor get nodes

Error creating mayastor storageclass due to ioTimeout option

I tried to create a Mayastor StorageClass using the following example:

https://mayastor.gitbook.io/introduction/quickstart/configure-mayastor#create-mayastor-storageclass-s

but I got:
Error from server (BadRequest): error when creating "STDIN": StorageClass in version "v1" cannot be handled as a StorageClass: v1.StorageClass.Parameters: ReadString: expects " or n, but found 6, error found in #10 byte of ...|Timeout":60,"protoco|..., bigger context ...|name":"mayastor-nvmf"},"parameters":{"ioTimeout":60,"protocol":"nvmf","repl":"1"},"provisioner":"io.|...

If I remove line 9, I can create it:

ioTimeout: 60
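The underlying cause is that StorageClass .parameters values must be strings, so numeric-looking values have to be quoted. A minimal sketch of the fix, reconstructing the class from the error message (the full io.openebs.csi-mayastor provisioner name is an assumption, since the message truncates it):

kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mayastor-nvmf
parameters:
  ioTimeout: "60"   # quoted: an unquoted 60 parses as an integer and is rejected
  protocol: nvmf
  repl: "1"
provisioner: io.openebs.csi-mayastor   # assumed full name; the error truncates at "io."
EOF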

Deploy page should remind users about labels

If users miss the bottom step of https://mayastor.gitbook.io/introduction/quickstart/preparing-the-cluster#label-mayastor-node-candidates before they start running https://mayastor.gitbook.io/introduction/quickstart/deploy-mayastor#create-mayastor-application-resources, they may see their kubectl -n mayastor get msp showing a blank under "state".

We can help mitigate this user frustration by giving them a breadcrumb reminding them about the labels beforehand.
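For instance, a hypothetical one-line breadcrumb, assuming the openebs.io/engine=mayastor label described on the preparing-the-cluster page (the node name is illustrative):

# Label every node that should run Mayastor before deploying:
kubectl label node my-worker-1 openebs.io/engine=mayastor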

Rest-deployment.yaml fails due to OOM Killed

Please update the resource settings on your official webpage, as the values being set are far too low, not only for rest-deployment but also for other components. With such resources the components tend to fail on large workloads, which gives people the false impression that Mayastor is incomplete.

Referring to: https://mayastor.gitbook.io/introduction/quickstart/deploy-mayastor
In site header: REST
Link: https://raw.githubusercontent.com/openebs/mayastor-control-plane/v1.0.4/deploy/rest-deployment.yaml

There you've got:

      containers:
        - name: rest
          resources:
            limits:
              cpu: 100m
              memory: 64Mi
            requests:
              cpu: 50m
              memory: 32Mi

In our case, the minimum required setup for the REST Deployment (Pod) to get out of the OOMKill loop was:

      containers:
        - name: rest
          resources:
            limits:
              cpu: 1500m
              memory: 2048Mi
            requests:
              cpu: 500m
              memory: 256Mi

And the above was only to get out of the Mayastor deployment stage.
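Until the manifests are updated, a hedged workaround is to patch the limits in place; the deployment name rest is assumed here to match the container name, so verify it first with kubectl -n mayastor get deploy:

kubectl -n mayastor patch deployment rest --type json -p '[
  {"op": "replace", "path": "/spec/template/spec/containers/0/resources/limits/memory", "value": "2048Mi"},
  {"op": "replace", "path": "/spec/template/spec/containers/0/resources/limits/cpu", "value": "1500m"}
]'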

Lab-Environment, Kubernetes is a bare metal kubespray deployment:

kubernetes: "v1.25.3"
nodes:
- name: node_group_1
  count: 4
  linux_distro: "Ubuntu 22.04"
  hardware:
    cpu_type: "Intel Xeon ICX"
    cpu_count: 1
    ram: "512GB"
    drives:
    - device: "nvme0"
      size: "349.32GB"
    - device "sda"
      size: "256GB"
- name: node_group_2
  count: 4
  linux_distro: "Ubuntu 22.04"
  hardware:
    cpu_type: "Intel Xeon ICX"
    cpu_count: 2
    ram: "1024GB"
    drives:
    - device: "nvme0"
      size: "1.46TB"
    - device: "nvme1"
      size: "1.46TB"
    - device: "nvme2"
      size: "1.46TB"
    - device: "nvme3"
      size: "1.46TB"
    - device: "sda"
      size: "447GB"

introduction/quickstart/deploy-mayastor: Missing troubleshooting info and baseline waiting times

https://mayastor.gitbook.io/introduction/quickstart/deploy-mayastor and further pages

In several locations the docs say we should wait for an action to complete. It would be good to have a baseline time specified, e.g.

  1. apply mayastor-daemonset.yaml
  2. this should take a few seconds/minutes/up to half an hour etc.

When the expected time is exceeded, I would like to try and troubleshoot the issue, but there is no explanation of how to do that: which logs to check and where, and what to look for in them.
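As a sketch of the kind of baseline troubleshooting the docs could add (the mayastor namespace and daemonset name are assumed from mayastor-daemonset.yaml):

# Check overall component state first, then drill into whichever piece is stuck.
kubectl -n mayastor get pods -o wide
# Scheduling and image-pull problems usually show up in describe output:
kubectl -n mayastor describe daemonset mayastor
# Crash loops and startup errors show up in the logs:
kubectl -n mayastor logs daemonset/mayastor --tail=100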

Mayastor with builtin etcd

Greetings,

This part of the docs put me, personally, in a very difficult debugging situation: https://mayastor.gitbook.io/introduction/quickstart/deploy-mayastor#etcd has no examples of a Mayastor deployment when etcd is already part of the k8s cluster, as is the case with Talos.dev.

When you mention that Mayastor needs etcd and that users are responsible for installing it, it would be ideal to also cover the case where etcd is already installed: how do we link it to Mayastor without running 2 etcd deployments? Following the examples in the docs, Talos OS deployments end up with 2 different etcd deployments.

Best would be if Mayastor could detect whether etcd is present, or offered a switch to deploy against an existing etcd or install one where etcd is lacking.

After deploying everything there was to deploy, and although our cluster has etcd out of the box, we get:

[2021-12-17T23:43:34.090193527+00:00 ERROR mayastor::persistent_store:persistent_store.rs:89] Failed to connect to etcd on endpoint mayastor-etcd:2379. Retrying...

The following commands will indeed deploy etcd, yet as mentioned above, we already run etcd.

I would like to know what we can use when we have etcd in place. By default this etcd is not known to Mayastor, and this isn't well documented, or maybe I missed it.

kubectl apply -f https://raw.githubusercontent.com/openebs/Mayastor/master/deploy/etcd/statefulset.yaml 
kubectl apply -f https://raw.githubusercontent.com/openebs/Mayastor/master/deploy/etcd/svc.yaml
kubectl apply -f https://raw.githubusercontent.com/openebs/Mayastor/master/deploy/etcd/svc-headless.yaml
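A hypothetical first step toward reusing an external etcd, assuming (as the mayastor-etcd:2379 default in the error suggests) the endpoint is wired in as a container argument whose exact flag varies by release:

# Find where the etcd endpoint is configured before repointing it at the
# cluster's existing etcd service (daemonset name assumed from the manifests):
kubectl -n mayastor get daemonset mayastor -o yaml | grep -i etcd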
