kubestr's Introduction

Kubestr

What is it?

Kubestr is a collection of tools to discover, validate and evaluate your kubernetes storage options.

As adoption of Kubernetes grows, so do the persistent storage offerings available to users. The introduction of CSI (Container Storage Interface) has enabled storage providers to develop drivers with ease. In fact, there are around 100 different CSI drivers available today. Along with the existing in-tree providers, these options can make choosing the right storage difficult.

Kubestr can assist in the following ways-

  • Identify the various storage options present in a cluster.
  • Validate if the storage options are configured correctly.
  • Evaluate the storage using common benchmarking tools like FIO.


Resources

Video

Blogs

Using Kubestr

To install the tool -

  • Ensure that the kubernetes context is set and the cluster is accessible through your terminal. (Does kubectl work?)
  • Download the latest release here.
  • Unpack the tool and make it executable: chmod +x kubestr (a sample session follows below).
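For example, on Linux the whole flow might look like this (the release asset name below is an assumption; pick the file that matches your OS and architecture from the releases page):

# download and unpack a release (asset name is an assumption)
curl -LO https://github.com/kastenhq/kubestr/releases/latest/download/kubestr_linux_amd64.tar.gz
tar -xzf kubestr_linux_amd64.tar.gz
chmod +x kubestr
./kubestr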

To discover available storage options -

  • Run ./kubestr

To run an FIO test -

  • Run ./kubestr fio -s <storage class>
  • Additional options like --size and --fiofile can be specified (see the sketch after this list).
  • For more information visit our fio page.
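As a sketch, a custom job file can be written locally and handed to kubestr (the job options below are illustrative standard fio settings, not kubestr's built-in defaults; flag value formats may differ, so check the fio page):

# illustrative custom fio job passed via --fiofile
cat > custom-job.fio <<'EOF'
[global]
ioengine=libaio
direct=1
bs=4k
iodepth=64
size=2G

[randread]
rw=randread
EOF

./kubestr fio -s <storage class> --size 20Gi --fiofile custom-job.fio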

To check a CSI driver's snapshot and restore capabilities -

  • Run ./kubestr csicheck -s <storage class> -v <volume snapshot class>
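For example, with a Ceph RBD setup (the VolumeSnapshotClass name here is only an example):

./kubestr csicheck -s rook-ceph-block -v csi-rbdplugin-snapclass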

To check if a StorageClass supports a block mount -

  • Run ./kubestr blockmount -s <storage class>

Roadmap

  • In the future, we plan to allow users to post their FIO results and compare them with others'.


kubestr's Issues

Can't output results of FIO test for rook-ceph-block storage

Info:
kubestr version - any
Workstation where kubestr runs - macOS Big Sur 11.2.3 / Debian 10 (buster)
Shell - bash/zsh

Scenario:

$ ./kubestr

Kubernetes Version Check:
Valid kubernetes version (v1.16.15) - OK

RBAC Check:
Kubernetes RBAC is enabled - OK

Aggregated Layer Check:
The Kubernetes Aggregated Layer is enabled - OK

Available Storage Provisioners:

rook-ceph.rbd.csi.ceph.com:
Cluster is not CSI snapshot capable. Requires VolumeSnapshotDataSource feature gate.
This is a CSI driver!
(The following info may not be up to date. Please check with the provider for more information.)
Provider: Ceph RBD
Website: https://github.com/ceph/ceph-csi
Description: A Container Storage Interface (CSI) Driver for Ceph RBD
Additional Features: Raw Block, Snapshot, Expansion, Topology, Cloning

Storage Classes:
  * rook-ceph-block

To perform a FIO test, run-
  ./kubestr fio -s <storage class>

This provisioner supports snapshots, however no Volume Snaphsot Classes were found.

$ ./kubestr fio -s rook-ceph-block
PVC created kubestr-fio-pvc-xvbxj
Pod created kubestr-fio-pod-vkzb8
Running FIO test (default-fio) on StorageClass (rook-ceph-block) with a PVC of Size (100Gi)
Elapsed time- 50.430796167s
FIO test results:
Failed while running FIO test.: Unable to parse fio output into json.: unexpected end of JSON input - Error

Kubectl plugin

By simply renaming the binary and storing it in PATH, we can use Kubestr as a kubectl plugin. We should make it a simple download for multiple OSes using curl/wget, plus a move into PATH, so this works across all OS platforms. Once complete, we could either document it in this repo or create a separate plugin repo, and then look at the process for adding it to krew.
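A minimal sketch of that idea, relying on kubectl's standard plugin discovery (any executable named kubectl-<name> on PATH becomes a plugin):

# rename the binary and place it on PATH so kubectl picks it up as a plugin
chmod +x kubestr
sudo mv kubestr /usr/local/bin/kubectl-kubestr

# kubectl now dispatches the subcommand to the renamed binary
kubectl kubestr fio -s <storage class>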

Add a version command

There doesn't seem to be a good way to figure out what version is installed.

Please add a version command or -v flag so I can easily tell when kubestr needs to be updated.

Thanks for the cool project!

panic: runtime error: invalid memory address or nil pointer dereference

Hi,

./kubestr



**************************************
  _  ___   _ ___ ___ ___ _____ ___
  | |/ / | | | _ ) __/ __|_   _| _ \
  | ' <| |_| | _ \ _|\__ \ | | |   /
  |_|\_\\___/|___/___|___/ |_| |_|_\

Explore your kubernetes storage options
**************************************


Kubernetes Version Check:
Valid kubernetes version (v1.20.2) - OK

RBAC Check:
Kubernetes RBAC is enabled - OK

Aggregated Layer Check:
The Kubernetes Aggregated Layer is enabled - OK

W0402 10:53:35.051737 73756 warnings.go:70] storage.k8s.io/v1beta1 CSIDriver is deprecated in v1.19+, unavailable in v1.22+; use storage.k8s.io/v1 CSIDriver
Available Storage Provisioners:

rook-ceph.cephfs.csi.ceph.com:
Can't find the CSI snapshot group api version.
This is a CSI driver!
(The following info may not be up to date. Please check with the provider for more information.)
Provider: CephFS
Website: https://github.com/ceph/ceph-csi
Description: A Container Storage Interface (CSI) Driver for CephFS
Additional Features: Expansion, Snapshot, Clone

Storage Classes:
  * rook-cephfs

To perform a FIO test, run-
  ./kubestr fio -s <storage class>

This provisioner supports snapshots, however no Volume Snaphsot Classes were found.

./kubestr fio -s rook-cephfs
PVC created kubestr-fio-pvc-klnpv
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x14056a3]

goroutine 1 [running]:
github.com/kastenhq/kubestr/pkg/fio.(*FIOrunner).RunFioHelper.func3(0xc000707b58, 0x0, 0xc000282e40)
/github/workspace/pkg/fio/fio.go:138 +0x33
github.com/kastenhq/kubestr/pkg/fio.(*FIOrunner).RunFioHelper(0xc000707b58, 0x19d5f40, 0xc0001b05a0, 0xc000282e40, 0x0, 0x199a6e0, 0xc0006180a0)
/github/workspace/pkg/fio/fio.go:141 +0x9f0
github.com/kastenhq/kubestr/pkg/fio.(*FIOrunner).RunFio(0xc000031b58, 0x19d5f40, 0xc0001b05a0, 0xc000282e40, 0x2560360, 0x7f40eee877d0, 0x0)
/github/workspace/pkg/fio/fio.go:91 +0x14b
github.com/kastenhq/kubestr/cmd.Fio(0x19d5f40, 0xc0001b05a0, 0x0, 0x0, 0x7ffd1d078dd4, 0xb, 0x17829ca, 0x5, 0x1784087, 0x7, ...)
/github/workspace/cmd/rootCmd.go:160 +0x1ac
github.com/kastenhq/kubestr/cmd.glob..func2(0x2546840, 0xc000130ea0, 0x0, 0x2)
/github/workspace/cmd/rootCmd.go:59 +0x146
github.com/spf13/cobra.(*Command).execute(0x2546840, 0xc000130e80, 0x2, 0x2, 0x2546840, 0xc000130e80)
/go/pkg/mod/github.com/spf13/[email protected]/command.go:854 +0x29d
github.com/spf13/cobra.(*Command).ExecuteC(0x25465a0, 0x0, 0x184000, 0xc000182058)
/go/pkg/mod/github.com/spf13/[email protected]/command.go:958 +0x349
github.com/spf13/cobra.(*Command).Execute(...)
/go/pkg/mod/github.com/spf13/[email protected]/command.go:895
github.com/kastenhq/kubestr/cmd.Execute(...)
/github/workspace/cmd/rootCmd.go:105
main.Execute(...)
/github/workspace/main.go:29
main.main()
/github/workspace/main.go:24 +0x2f

rename the jobs

Hi,

I find it a bit confusing that the jobs are named like read_iops and read_bw. Aren't those the same thing just with different block sizes? Wouldn't it be better to have the jobs named like randread_4K, randread_128K, randwrite_4K, randwrite_128K instead?

The results report both IOPS and bandwidth (and perhaps in the future latency as well) at 4K and 128K. You would likely get slightly higher bandwidth running a 1M block size. I think it's important to highlight that you are not measuring bandwidth only at 128K, even though a larger block size usually yields higher bandwidth.

fio latency not conveyed within json output for default job

First off- cool project. You make it easy to benchmark. Thank you! Planning to continue digging in but wanted to relay a pain point during initial exploration.

In my usage I'm finding the default fio job does not produce latency numbers. In contrast, I notice latency is included when running axboe/fio/examples/latency-profile.fio. I'm starting to think this kubestr behavior is expected and that there are likely FIO complexities (axboe/fio/HOWTO.rst) I haven't yet grasped.

kubestr.default.json.txt attached, created with kubestr v0.4.36 against Azure AKS using:

kubestr fio --storageclass default --output json --outfile kubestr.default.json

As a cluster admin, seeing latency from the default job would grant meaningful insight.
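The default job's global options (shown elsewhere on this page as ioengine=libaio verify=0 direct=1 gtod_reduce=1) include gtod_reduce=1, which tells fio to cut down on time-keeping and drop latency statistics, so that is the most likely explanation. A possible workaround, assuming --fiofile accepts a local job file, is a custom job that keeps latency tracking enabled (job contents below are illustrative):

# illustrative job file with gtod_reduce turned off so fio records latency
cat > latency-job.fio <<'EOF'
[global]
ioengine=libaio
direct=1
gtod_reduce=0
bs=4k
iodepth=64
size=2G

[randread_latency]
rw=randread
EOF

kubestr fio --storageclass default --fiofile latency-job.fio --output json --outfile kubestr.latency.json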

Add support for private container registries

Hi,

would it be possible to add support for pulling the kubestr-fio image from a custom registry URL?

I think the variable that would need to be overridden via a command-line parameter is DefaultPodImage, is that correct?

csicheck create PVC over 10Gi

Hello,
I have a K8s cluster set up on Tencent TKE, which only accepts PVC sizes of 10Gi or larger. I would like to test the snapshot function of the CBS-CSI storage class using kubestr. When I try to run kubestr with the csicheck parameter I get an error, and the logs show the following:

failed to provision volume with StorageClass "cbs-csi": rpc error: code = InvalidArgument desc = disk size is invalid. Must in [10, 32000]

Is there an option to have kubestr create a PVC of 10Gi or larger?

Fio client/server mode support

Does kubestr plan on supporting fio in client/server mode (many server pods running fio and sending results to a single client)?

KUBECONFIG ignored when running in-cluster

When kubestr is run inside a k8s pod, explicitly specifying KUBECONFIG has no effect

kubectl behavior

/ # kubectl get pods
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:monito-rss:default" cannot list resource "pods" in API group "" in the namespace "monito-rss"
/ # KUBECONFIG=/tmp/k kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
redis-0                           1/1     Running   0          4h44m

kubestr behavior

/ # kubestr 

**************************************
  _  ___   _ ___ ___ ___ _____ ___
  | |/ / | | | _ ) __/ __|_   _| _ \
  | ' <| |_| | _ \ _|\__ \ | | |   /
  |_|\_\\___/|___/___|___/ |_| |_|_\

Explore your Kubernetes storage options
**************************************
Kubernetes Version Check:
  Valid kubernetes version (v1.20.2+k3s1)  -  OK

RBAC Check:
  Kubernetes RBAC is enabled  -  OK

Aggregated Layer Check:
  The Kubernetes Aggregated Layer is enabled  -  OK

Error listing provisioners: storageclasses.storage.k8s.io is forbidden: User "system:serviceaccount:monito-rss:default" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
Error: Error listing provisioners: storageclasses.storage.k8s.io is forbidden: User "system:serviceaccount:monito-rss:default" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
/ # KUBECONFIG=/tmp/k kubestr

**************************************
  _  ___   _ ___ ___ ___ _____ ___
  | |/ / | | | _ ) __/ __|_   _| _ \
  | ' <| |_| | _ \ _|\__ \ | | |   /
  |_|\_\\___/|___/___|___/ |_| |_|_\

Explore your Kubernetes storage options
**************************************
Kubernetes Version Check:
  Valid kubernetes version (v1.20.2+k3s1)  -  OK

RBAC Check:
  Kubernetes RBAC is enabled  -  OK

Aggregated Layer Check:
  The Kubernetes Aggregated Layer is enabled  -  OK

Error listing provisioners: storageclasses.storage.k8s.io is forbidden: User "system:serviceaccount:monito-rss:default" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
Error: Error listing provisioners: storageclasses.storage.k8s.io is forbidden: User "system:serviceaccount:monito-rss:default" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
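Until KUBECONFIG is honored in-cluster, one workaround for the permission errors above is to grant the pod's service account the missing RBAC; a minimal sketch for the StorageClass listing step (the service account and namespace are taken from the error message; running actual tests will need further permissions, e.g. to create PVCs and pods):

kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubestr-storageclass-reader
rules:
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubestr-storageclass-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubestr-storageclass-reader
subjects:
- kind: ServiceAccount
  name: default
  namespace: monito-rss
EOF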

Specify multiple fio tests to be run sequentially

Hi there,

First of all, thanks for your work on this tool. I've written my own script for benchmarking storage on Kubernetes: ksb.

My results are very different from kubestr's, so I tried to find where the difference between our tests comes from.

With the kubestr CLI, I got between 734 and 849 IOPS across 3 different runs; here is the raw result of the best one:

> ./kubestr fio -s rook-ceph-block
PVC created kubestr-fio-pvc-g8gwx
Pod created kubestr-fio-pod-7tspk
Running FIO test (default-fio) on StorageClass (rook-ceph-block) with a PVC of Size (100Gi)
Elapsed time- 1m23.539474031s
FIO test results:

FIO version - fio-3.20
Global options - ioengine=libaio verify=0 direct=1 gtod_reduce=1

JobName: read_iops
  blocksize=4K filesize=2G iodepth=64 rw=randread
read:
  IOPS=849.953735 BW(KiB/s)=3416
  iops: min=600 max=1053 avg=854.266663
  bw(KiB/s): min=2400 max=4215 avg=3418.100098

As we can see in the kubestr output, it used fio-3.20 with the following parameters:

  • ioengine=libaio
  • verify=0
  • direct=1
  • gtod_reduce=1
  • blocksize=4K
  • filesize=2G
  • iodepth=64
  • rw=randread

My read IOPS benchmark uses exactly the same parameters, but adds --time_based --ramp_time=2s --runtime=15s. Results of 3 different runs are between 10.6k and 12.9k IOPS. Here is the raw result of one run:

read_iops: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.25
Starting 1 process
read_iops: Laying out IO file (1 file / 2048MiB)
Jobs: 1 (f=0): [f(1)][100.0%][r=33.2MiB/s][r=8486 IOPS][eta 00m:00s]
read_iops: (groupid=0, jobs=1): err= 0: pid=29: Wed Apr 21 10:46:22 2021
  read: IOPS=12.9k, BW=50.4MiB/s (52.9MB/s)(757MiB/15012msec)
   bw (  KiB/s): min=21904, max=98400, per=100.00%, avg=52198.45, stdev=19676.11, samples=29
   iops        : min= 5476, max=24600, avg=13049.48, stdev=4919.07, samples=29
  cpu          : usr=2.58%, sys=6.54%, ctx=203273, majf=0, minf=58
  IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=193816,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=50.4MiB/s (52.9MB/s), 50.4MiB/s-50.4MiB/s (52.9MB/s-52.9MB/s), io=757MiB (794MB), run=15012-15012msec

Disk stats (read/write):
  rbd6: ios=212839/23, merge=0/3, ticks=769574/599, in_queue=567176, util=96.57%

Both Docker images are based on Alpine (ghcr.io/kastenhq/kubestr:latest and infrabuilder/iobench), so the difference does not come from the base OS. My image ships with fio-3.25, while yours contains fio-3.20. To eliminate the image-difference hypothesis, I started a pod using your image (ghcr.io/kastenhq/kubestr:latest), with a PVC from the exact same storageClass mounted on /root, and overrode the entrypoint with /bin/sh so I could exec commands in it. Here is the result of a manually launched fio command:

# fio --randrepeat=0 --verify=0 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=read_iops --filename=/root/fiotest --bs=4K --iodepth=64 --size=2G  --readwrite=randread
read_iops: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.20
Starting 1 process
read_iops: Laying out IO file (1 file / 2048MiB)
Jobs: 1 (f=1): [r(1)][100.0%][r=60.2MiB/s][r=15.4k IOPS][eta 00m:00s]
read_iops: (groupid=0, jobs=1): err= 0: pid=35: Wed Apr 21 10:25:15 2021
  read: IOPS=10.8k, BW=42.4MiB/s (44.4MB/s)(2048MiB/48355msec)
   bw (  KiB/s): min=12665, max=99328, per=99.89%, avg=43319.69, stdev=21317.01, samples=96
   iops        : min= 3166, max=24832, avg=10829.68, stdev=5329.23, samples=96
  cpu          : usr=1.70%, sys=5.80%, ctx=555542, majf=0, minf=71
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=524288,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=42.4MiB/s (44.4MB/s), 42.4MiB/s-42.4MiB/s (44.4MB/s-44.4MB/s), io=2048MiB (2147MB), run=48355-48355msec

Disk stats (read/write):
  rbd5: ios=519717/145, merge=0/27, ticks=2344134/3451, in_queue=1808204, util=95.04%

Here the result is very similar to the one from my benchmark: 10.8k IOPS (I've run it multiple times; results are between 9729 and 10.8k IOPS).

So to sum up:

  • kubestr default: 734 to 849 IOPS (stddev unknown)
  • ksb default: 10.6k to 12.9k IOPS (stddev 4919)
  • kubestr image with manual fio command: 9729 to 10.8k IOPS (stddev 5329)

In both my benchmark and the manual run in your image, the stddev is very high (about 5k) because this is a cluster in use and not an isolated lab dedicated to benchmarking, but even with this deviation there is still a large gap between the kubestr results and the raw fio results. I may test in an isolated lab, as I do for CNI benchmarks, but for now I lack the time :)

Can you explain why the kubestr and manual fio results differ so much, even with the same image?

Thanks.
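One way to compare like with like without rebuilding kubestr is to pass the extra options through a custom job file via --fiofile; a sketch that mirrors the manual run described above (kubestr's built-in defaults may differ):

cat > read-iops-timed.fio <<'EOF'
[global]
ioengine=libaio
verify=0
direct=1
gtod_reduce=1
bs=4K
iodepth=64
size=2G
time_based=1
ramp_time=2s
runtime=15s

[read_iops]
rw=randread
EOF

./kubestr fio -s rook-ceph-block --fiofile read-iops-timed.fio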

Update Readme

I think the Readme under docs could use some more wording around "what the script does", along with alternative methods of deploying the project locally.

Support for ARM64 devices

Hello,

I have a K3s cluster set up on Odroid HC4 devices, which use an ARM64 processor. I would like to benchmark the storage classes I have there using kubestr. When trying to run kubestr I get an error, and the logs in the container show that the binary was most probably compiled for amd64:

$ kubectl logs kubestr-fio-pod-s2hhl
standard_init_linux.go:228: exec user process caused: exec format error

Is there an option to create and publish a kubestr Docker image for ARM64?

kubestr making a 1Gi PVC, not respecting the storage request

Hey,

Started kubestr with -z 45, which should have created a 45GB PVC, but it made a 1Gi PVC instead.

k get pvc 
NAME                    STATUS        VOLUME                                                                   CAPACITY   ACCESS MODES   STORAGECLASS            AGE
kubestr-fio-pvc-dfjm8   Terminating   ovh-managed-kubernetes-9x77sb-pvc-7a9bc017-82dc-4580-9b7a-9921c649f633   1Gi        RWO            csi-cinder-high-speed   49s
./kubestr fio -s csi-cinder-high-speed -z 45
PVC created kubestr-fio-pvc-dfjm8
Pod created kubestr-fio-pod-stk9l
Running FIO test (default-fio) on StorageClass (csi-cinder-high-speed) with a PVC of Size (45)
Elapsed time- 2.2568953s
FIO test results:
  Failed while running FIO test.: Error running command:([fio --directory /dataset /etc/fio-config/default-fio --output-format=json]), stderr:(fio: pid=0, err=28/file:filesetup.c:240, func=write, error=No space left on device
fio: pid=0, err=28/file:filesetup.c:240, func=write, error=No space left on device
fio: io_u error on file /dataset/job2.0.0: No space left on device: write offset=1588449280, buflen=4096
[...]
fio: io_u error on file /dataset/job2.0.0: No space left on device: write offset=1073922048, buflen=4096
fio: io_u error on file /dataset/job4.0.0: No space left on device: write offset=1351221248, buflen=131072): Failed to exec command in pod: command terminated with exit code 4  -  Error
Error: Failed while running FIO test.: Error running command:([fio --directory /dataset /etc/fio-config/default-fio --output-format=json]), stderr:(fio: pid=0, err=28/file:filesetup.c:240, func=write, error=No space left on device
fio: pid=0, err=28/file:filesetup.c:240, func=write, error=No space left on device
fio: io_u error on file /dataset/job2.0.0: No space left on device: write offset=1588449280, buflen=4096
fio: io_u error on file /dataset/job2.0.0: No space left on device: write offset=1073922048, buflen=4096
[...]

fio: io_u error on file /dataset/job4.0.0: No space left on device: write offset=1021313024, buflen=131072
fio: io_u error on file /dataset/job4.0.0: No space left on device: write offset=1351221248, buflen=131072): Failed to exec command in pod: command terminated with exit code 4

Using:
Kubestr v0.4.36 (https://github.com/kastenhq/kubestr/releases/tag/v0.4.36)
Kubernetes 1.24.3

Am I doing something wrong?
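One guess, not verified against the kubestr source: Kubernetes parses a resource quantity without a unit as plain bytes, so -z 45 may have requested a 45-byte PVC that the provisioner then rounded up to its 1Gi minimum. Passing an explicit unit may behave as intended (the accepted value format is an assumption):

./kubestr fio -s csi-cinder-high-speed -z 45Gi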

registered csi drivers

Cool tool! I'm the author of https://github.com/democratic-csi/democratic-csi and plan to use this for some testing. When running the binary, the drivers it detects for me show up like this:

kubestr-v0.4.16

**************************************
  _  ___   _ ___ ___ ___ _____ ___
  | |/ / | | | _ ) __/ __|_   _| _ \
  | ' <| |_| | _ \ _|\__ \ | | |   /
  |_|\_\\___/|___/___|___/ |_| |_|_\

Explore your Kubernetes storage options
**************************************
Kubernetes Version Check:
  Valid kubernetes version (v1.19.4)  -  OK

RBAC Check:
  Kubernetes RBAC is enabled  -  OK

Aggregated Layer Check:
  The Kubernetes Aggregated Layer is enabled  -  OK

Available Storage Provisioners:

  org.democratic-csi.nfs:
    This might be a CSI Driver. But it is not publicly listed.

    Storage Classes:
      * zfs-nfs

    To perform a FIO test, run-
      ./kubestr fio -s <storage class>

  org.democratic-csi.nfs-client:
    This might be a CSI Driver. But it is not publicly listed.

    Storage Classes:
      * nfs-client

    To perform a FIO test, run-
      ./kubestr fio -s <storage class>

  org.democratic-csi.iscsi:
    This might be a CSI Driver. But it is not publicly listed.

    Storage Classes:
      * zfs-iscsi

    To perform a FIO test, run-
      ./kubestr fio -s <storage class>

What does it take to be 'publicly listed' and what additional information is shown if it is? In the case of democratic-csi it's up to the user to decide what the driver names are (although generally they'll follow the syntax above) as it can be deployed any number of times with slightly different configurations.

Validate Port availability early on

In the current implementation we don't check the port until after we set up the cloned PVC and app. This should be done at the beginning to catch errors early.

Kubernetes toleration for FIO

Kubestr currently does not allow scheduling FIO benchmark pods on nodes with taints.

I've come up with several ways this could be achieved:

  • Ignore all taints (changes current behaviour)
  • Have a flag to ignore all taints
  • Have a flag for toleration expressions (but only implement an expression for universal toleration for now)

For the last option, I suggest a syntax based on kubectl's taint expression:

key[=value][:effect]|:effect

However, it needs a way to specify a universal toleration. I came up with these possibilities:

  • : – An empty effect in the syntax above would mean "any". This is in line with the fact that "" specifies any effect in the definition of Toleration.
  • :Any – This mimics an actual effect name (effect names use an uppercase starting letter and are case sensitive), but this also means it could clash with an actual "Any" effect (I find this unlikely to be added though).
  • :any – Using a lowercase starting letter to denote a special symbol
  • ::Any – Extra : to denote a special symbol

Or maybe there is an established solution that I'm not aware of.

I'm not sure which way would be preferred. To me, toleration :/toleration=: seems to be the cleanest and most future-proof solution.
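For reference, the kubectl taint expression the proposed syntax is modeled on looks like this (node and key names are examples):

# add a taint using key[=value]:effect
kubectl taint nodes node1 dedicated=benchmark:NoSchedule

# remove it again with a trailing '-'
kubectl taint nodes node1 dedicated=benchmark:NoSchedule-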

@ihcsim @smasset-orange

Rewrite LoadConfigMaps

At the beginning there wasn't a clear idea of how to handle input, so it's a bit convoluted.

kubestr should support volumeMode: Block

I could not find any information in the docs that would allow me to use volumeMode: Block and issue the benchmark against a devicePath instead of a mountPath.
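For context, a raw-block PVC and a pod consuming it via a devicePath look roughly like this (names, device path, and storage class are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-pvc-example
spec:
  accessModes: ["ReadWriteOnce"]
  volumeMode: Block
  storageClassName: <storage class>
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: block-consumer-example
spec:
  containers:
  - name: fio
    image: ghcr.io/kastenhq/kubestr:latest
    command: ["sh", "-c", "sleep 3600"]
    volumeDevices:
    - name: data
      devicePath: /dev/block-volume
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: block-pvc-example
EOF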

add latency metrics

Hi, Cool project, thanks!

I have a feature request, would it be possible to add latency metrics as well? Those are usually more important than bandwidth in some workloads like clustered databases or key values stores and such..

Sanity check input fio files

Input fio files may contain options that point to where the FIO test should run. Since these locations are controlled by the program itself, they should be filtered out of the file.
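The options in question are most likely fio's directory= and filename= settings, since kubestr itself points fio at the test PVC (it runs fio with --directory /dataset, as seen in other issues on this page). A rough sketch of stripping them from a user-supplied job file, assuming simple key=value lines:

grep -vE '^[[:space:]]*(directory|filename)[[:space:]]*=' user-job.fio > sanitized-job.fio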

[Readme] Add installation steps

Hi,

It would be useful and more user-friendly to explain how to install the tool on Linux, Mac, and Windows, e.g. through krew...
Thanks :-)

Failed test should result in non-zero exit code

Both syntax errors with the commands and runtime errors with the tests themselves always result in exit code 0. It's customary for command line tools to set a non-zero exit code when there's been an error.
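For example (the storage class name below is deliberately bogus), checking the exit status after a failed run:

./kubestr fio -s does-not-exist
echo $?    # reportedly prints 0 today; a non-zero value would be expected after a failure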

kubestr-csi-original-pod = ImagePullError

Running Kubestr after successful deployments yesterday; seeing ImagePullBackOffs today.

Normal BackOff 12s (x2 over 13s) kubelet Back-off pulling image "ghcr.io/kastenhq/kubestr:latest"
Warning Failed 12s (x2 over 13s) kubelet Error: ImagePullBackOff
Normal Pulling 2s (x2 over 14s) kubelet Pulling image "ghcr.io/kastenhq/kubestr:latest"
Warning Failed 1s (x2 over 14s) kubelet Failed to pull image "ghcr.io/kastenhq/kubestr:latest": rpc error: code = Unknown desc = error pulling image configuration: denied: unauthenticated: User cannot be authenticated with the token provided.
Warning Failed 1s (x2 over 14s) kubelet Error: ErrImagePull
