
cubefs-helm

Deploy Cubefs using Kubernetes and Helm

The cubefs-helm project helps deploy a Cubefs cluster orchestrated by Kubernetes.

(Diagram: Cubefs Components)

(Diagram: Cubefs Deployment)

Prerequisite

  • Kubernetes 1.14+
  • CSI spec version 1.1.0
  • Helm 3

Download cubefs-helm

git clone https://github.com/cubefs/cubefs-helm
cd cubefs-helm

Create configuration yaml file

Create a cubefs.yaml file and place it in a path of your choice; the commands below assume it lives in the home directory.

vim ~/cubefs.yaml 
# Select which component to install
component:
  master: true
  datanode: true
  metanode: true
  objectnode: true
  client: false
  provisioner: false
  monitor: false
  ingress: true

# Directories for data, logs, and other state; these directories
# will be mounted from the host into containers using hostPath
path:
  data: /var/lib/cubefs
  log: /var/log/cubefs

datanode:
  # Disks the datanode will use to store data
  # Format: disk_mount_point:reserved_space
  # disk_mount_point: the mount point of the disk on the machine
  # reserved_space: similar to the metanode reserved space; if the disk's
  # available space falls below this number, the disk becomes unwritable
  disks:
    - /data0:21474836480
    - /data1:21474836480

metanode:
  # Total memory metanode can use, recommended to be configured
  # as 80% of physical machine memory
  total_mem: "26843545600"

provisioner:
  kubelet_path: /var/lib/kubelet

Note that cubefs/values.yaml lists all the configuration parameters of Cubefs. The parameters path.data and path.log specify where server data and logs are stored, respectively.
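The large byte values in the config are easier to derive than to type by hand. A minimal shell sketch, assuming the 80% rule from the metanode comment and a 20 GiB per-disk reservation (the 32 GiB fallback is illustrative only):

```shell
# Per-disk reserved space: the example value 21474836480 is exactly 20 GiB.
reserved=$((20 * 1024 * 1024 * 1024))
echo "reserved=$reserved"            # prints reserved=21474836480

# metanode total_mem: 80% of physical memory, read from /proc/meminfo
# (MemTotal is reported in kB); fall back to an assumed 32 GiB if unavailable.
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo 2>/dev/null)
total_kb=${total_kb:-33554432}
total_mem=$(awk -v kb="$total_kb" 'BEGIN { printf "%.0f", kb * 1024 * 0.8 }')
echo "total_mem=$total_mem"
```

Paste the resulting numbers into the disks and total_mem fields above.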

Add labels to Kubernetes node

You should tag each Kubernetes node with the appropriate labels according to its role as a Cubefs server node and/or CSI node.

kubectl label node <nodename> component.cubefs.io/master=enabled
kubectl label node <nodename> component.cubefs.io/metanode=enabled
kubectl label node <nodename> component.cubefs.io/datanode=enabled
kubectl label node <nodename> component.cubefs.io/objectnode=enabled
kubectl label node <nodename> component.cubefs.io/csi=enabled
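When many nodes carry every role, the five commands can be generated in a loop. A sketch that only echoes the commands for review before piping them to sh (the node names are placeholders, not real nodes):

```shell
# Placeholder node names; substitute the output of `kubectl get nodes`.
nodes="node-1 node-2 node-3"
roles="master metanode datanode objectnode csi"
for node in $nodes; do
  for role in $roles; do
    # Echo instead of executing so the commands can be inspected first;
    # pipe the output to `sh` once it looks right.
    echo kubectl label node "$node" "component.cubefs.io/$role=enabled"
  done
done
```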

Deploy Cubefs cluster

helm upgrade --install cubefs -f ~/cubefs.yaml -n cubefs --create-namespace cubefs

The output of helm install lists the servers to be deployed.

Use the following command to check pod status, which may take a few minutes.

kubectl -n cubefs get pods
NAME                         READY   STATUS    RESTARTS   AGE
cfs-csi-controller-cfc7754b-ptvlq   3/3     Running   0          2m40s
cfs-csi-node-q262p                  2/2     Running   0          2m40s
cfs-csi-node-sgvtf                  2/2     Running   0          2m40s
client-55786c975d-vttcx             1/1     Running   0          2m40s
consul-787fdc9c7d-cvwgz             1/1     Running   0          2m40s
datanode-2rcmz                      1/1     Running   0          2m40s
datanode-7c9gv                      1/1     Running   0          2m40s
datanode-s2w8z                      1/1     Running   0          2m40s
grafana-6964fd5775-6z5lx            1/1     Running   0          2m40s
master-0                            1/1     Running   0          2m40s
master-1                            1/1     Running   0          2m34s
master-2                            1/1     Running   0          2m27s
metanode-bwr8f                      1/1     Running   0          2m40s
metanode-hdn5b                      1/1     Running   0          2m40s
metanode-w9snq                      1/1     Running   0          2m40s
objectnode-6598bd9c87-8kpvv         1/1     Running   0          2m40s
objectnode-6598bd9c87-ckwsh         1/1     Running   0          2m40s
objectnode-6598bd9c87-pj7fc         1/1     Running   0          2m40s
prometheus-6dcf97d7b-5v2xw          1/1     Running   0          2m40s

Check cluster status

helm status cubefs

Use Cubefs CSI as backend storage

After installing Cubefs with Helm, a StorageClass named cfs-sc has been created. To use Cubefs as backend storage, create a PVC whose storageClassName is cfs-sc.

An example pvc.yaml is shown below.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 5Gi
  storageClassName: cfs-sc

kubectl create -f pvc.yaml

An example deployment.yaml using the PVC is shown below.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cfs-csi-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cfs-csi-demo-pod
  template:
    metadata:
      labels:
        app: cfs-csi-demo-pod
    spec:
      nodeSelector:
        cubefs-csi-node: enabled
      containers:
        - name: cfs-csi-demo
          image: nginx:1.17.9
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 80
              name: "http-server"
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: mypvc
      volumes:
        - name: mypvc
          persistentVolumeClaim:
            claimName: cfs-pvc

kubectl create -f deployment.yaml

Config Monitoring System (optional)

Monitor daemons are started if the cluster is deployed with cubefs-helm. Cubefs uses Consul, Prometheus and Grafana to construct the monitoring system.

Accessing the monitor dashboard requires a Kubernetes Ingress Controller. In this example, Nginx Ingress is used. Download the default config yaml file and add hostNetwork: true in the spec section.

spec:
  # wait up to five minutes for the drain of connections
  terminationGracePeriodSeconds: 300
  serviceAccountName: nginx-ingress-serviceaccount
  hostNetwork: true
  nodeSelector:
    kubernetes.io/os: linux

Start the ingress controller

kubectl apply -f mandatory.yaml

Get the IP address of Nginx ingress controller.

kubectl get pods --all-namespaces -o wide | grep nginx-ingress-controller
ingress-nginx   nginx-ingress-controller-5bbd46cd86-q88sw    1/1     Running   0          115m   10.196.31.101   host-10-196-31-101   <none>           <none>

Get the host name of Grafana, which is also used as the domain name.

kubectl get ingress -n cubefs
NAME      HOSTS                  ADDRESS         PORTS   AGE
grafana   monitor.cubefs.com   10.106.207.55   80      24h

Add a local DNS entry in /etc/hosts so that requests can reach the ingress controller.

10.196.31.101 monitor.cubefs.com

At this point, the dashboard can be visited at http://monitor.cubefs.com.

Uninstall Cubefs

Uninstall the Cubefs cluster using Helm:

helm delete cubefs


cubefs-helm's Issues

namespaces "chubaofs" already exists

helm upgrade chubaofs ./chubaofs -f ../values.yaml -n chubaofs -i --create-namespace
Release "chubaofs" does not exist. Installing it now.
I1024 12:10:11.596325 436452 request.go:665] Waited for 1.089488588s due to client-side throttling, not priority and fairness, request: GET:https://api.control-cluster-raffa.demo.red-chesterfield.com:6443/apis/apps.open-cluster-management.io/v1?timeout=32s
Error: namespaces "chubaofs" already exists
rspazzol@rspazzol ~/git/openshift-enablement-exam/misc4.0/chubaufs/chubaofs-helm (master)* $ helm install chubaofs ./chubaofs -f ../values.yaml
I1024 12:11:08.457516 436677 request.go:665] Waited for 1.113251268s due to client-side throttling, not priority and fairness, request: GET:https://api.control-cluster-raffa.demo.red-chesterfield.com:6443/apis/search.acm.com/v1alpha1?timeout=32s
Error: INSTALLATION FAILED: create: failed to create: namespaces "chubaofs" not found

metanode fails to start after installing with Helm

# metanode configuration in values.yml

metanode:
  labels:
    node_selector_key: chubaofs-metanode
    node_selector_value: enabled
  log_level: error
  total_mem: "26843545600"
  port: 17210
  prof: 17220
  raft_heartbeat: 17230
  raft_replica: 17240
  exporter_port: 9510
  resources:
    enabled: true
    requests:
      memory: "4Gi"
      cpu: "2000m"
    limits:
      memory: "4Gi"
      cpu: "2000m"

# kubectl -n chubaofs get pods
(screenshot)

# kubectl -n chubaofs describe po metanode-gtwgk
(screenshot)

# kubectl -n chubaofs logs metanode-gtwgk
(screenshot)

wrong tag in values for docker images

The Helm chart's image tags for cfs-client and cfs-server are set to 3.2.0. However, on Docker Hub these images are tagged with a v prefix, i.e., v3.2.0.
This discrepancy causes Kubernetes to throw an ImagePullBackOff error, since it cannot find images tagged 3.2.0.
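Until the chart's default tags are fixed, a workaround is to override the tag at install time. A hedged sketch, assuming the chart exposes the tag under an image.tag key (the actual key path may differ by chart version; check cubefs/values.yaml):

```yaml
# tag-fix.yaml (hypothetical override file); the key path is an assumption.
image:
  tag: v3.2.0
```

Pass it as an extra values file, e.g. helm upgrade --install cubefs -f ~/cubefs.yaml -f tag-fix.yaml -n cubefs --create-namespace cubefs.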

Relax Kube version in charts

Currently, the Kubernetes version is hard-coded to 1.16.0 in the chart: https://github.com/chubaofs/chubaofs-helm/blob/dbfbd413c5deaae21de9ac983d242541334e0d38/chubaofs/Chart.yaml#L5

This leads to the following error when applying it to a 1.16.3 cluster:

$ helm install chubaofs chubaofs/chubaofs --version 1.4.1 -f chubaofs.yaml
Error: chart requires kubeVersion: 1.16.0 which is incompatible with Kubernetes v1.16.3

If the value of kubeVersion were changed to >=1.16.0, this should be fine.
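A minimal sketch of the relaxed constraint in the chart's Chart.yaml. Appending -0 is the usual Helm idiom so the range also matches cluster versions carrying pre-release suffixes (e.g. v1.19.3-gke.100), which a plain >=1.16.0 range rejects:

```yaml
# Chart.yaml (excerpt); ">=1.16.0-0" also matches versions with
# pre-release suffixes that ">=1.16.0" would exclude.
kubeVersion: ">=1.16.0-0"
```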

k3s helm install failed

Helm version version.BuildInfo{Version:"v3.12.3"
k3s version Server Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.3+k3s1"

Running helm lint catches this error:

helm lint ./cubefs

[ERROR] templates/: parse error at (cubefs/templates/_helpers.tpl:127): multiple definition of template "cubefs.kubernetes.version"

Error: 1 chart(s) linted, 1 chart(s) failed

cubefs/cubefs-server:2.4.0

(screenshot)

The cubefs image cannot be pulled, either locally or in the cluster, regardless of whether the image repository is public.

console stays in CrashLoopBackOff

Mounting /cfs/logs/ to a persistent directory does not produce any logs either.

console container startup log:
{
"role": "console",
"logDir": "/cfs/logs/",
"logLevel": "error",
"listen": "1602",
"master_instance": "master-service:17010",
"objectNodeDomain": "console.chubaofs.com",
"masterAddr": [
"master-0.master-service:17010",
"master-1.master-service:17010",
"master-2.master-service:17010",
"master-3.master-service:17010"
],
"monitor_addr": "http://prometheus-service:9090",
"dashboard_addr": "http://monitor.chubaofs.com",
"monitor_app": "cfs",
"monitor_cluster": "my-cluster"
}
start console
