
helmit's Introduction

Helmit

Safety first!


Helmit is a Go framework and tool for end-to-end testing of Kubernetes applications. The Helmit Go API and the helmit command line tool work together to manage the deployment, testing, benchmarking, and verification of Helm-based applications running in Kubernetes.

Helmit can be used to:

  • Verify Helm charts and the resources they construct
  • Run end-to-end tests to verify a service/API
  • Run end-to-end benchmarks for Kubernetes applications
  • Scale benchmarks across multi-node Kubernetes clusters
  • Run randomized client simulations for Kubernetes applications (e.g. for formal verification)
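For orientation, the helmit CLI invocations used in the issue reports below follow this general shape (the binary paths and chart directories are specific to those reports, not fixed defaults):

```shell
# Run a test suite against charts in a local directory
helmit test ./cmd/onos-config-tests -c ../onos-helm-charts

# Run a named benchmark for a fixed number of requests
helmit bench ./cmd/kubernetes-benchmarks --context ./charts \
    --benchmark BenchmarkMapPut --requests 1000

# Run a randomized client simulation for a fixed duration
helmit sim ./examples/simulation/cmd --context examples/charts --duration 5m
```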

User Guide

Examples

Acknowledgements

Helmit is a project of the Open Networking Foundation.


helmit's People

Contributors

adibrastegarnia, kuujo, onos-builder, ray-milkey, seancondon, tomikazi


helmit's Issues

helmit cannot run consecutive suite tests

Just run the onos-config integration tests:

helmit test ./cmd/onos-config-tests -c ../onos-helm-charts

You will get this error after the cli tests finish and the gnmi tests start:

2020/08/07 21:34:12 CRD databases.cloud.atomix.io is already present. Skipping.
2020/08/07 21:34:12 creating 1 resource(s)
2020/08/07 21:34:12 CRD members.cloud.atomix.io is already present. Skipping.
2020/08/07 21:34:12 creating 1 resource(s)
2020/08/07 21:34:12 CRD partitions.cloud.atomix.io is already present. Skipping.
2020/08/07 21:34:12 creating 1 resource(s)
2020/08/07 21:34:12 CRD primitives.cloud.atomix.io is already present. Skipping.
2020/08/07 21:34:12 Clearing discovery cache
2020/08/07 21:34:12 beginning wait for 0 resources with timeout of 1m0s
    gnmi: test.go:65: test panicked: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: namespace: default, name: onos-config-atomix-kubernetes-controller, existing_kind: /v1, Kind=ServiceAccount, new_kind: /v1, Kind=ServiceAccount
        goroutine 11 [running]:
        runtime/debug.Stack(0xc00021bcd8, 0x1e3eb80, 0xc000fff8a0)
        	/usr/local/go/src/runtime/debug/stack.go:24 +0x9d
        github.com/onosproject/helmit/pkg/test.failTestOnPanic(0xc0002c2c60)
        	/Users/adibrastegarnia/go/pkg/mod/github.com/onosproject/[email protected]/pkg/test/test.go:65 +0x57
        panic(0x1e3eb80, 0xc000fff8a0)
        	/usr/local/go/src/runtime/panic.go:975 +0x3e3
        github.com/onosproject/helmit/pkg/test.RunTests(0xc0002c2c60, 0x1fa7120, 0x34d8580, 0x0, 0x0, 0x0)
        	/Users/adibrastegarnia/go/pkg/mod/github.com/onosproject/[email protected]/pkg/test/test.go:91 +0x5ad
        github.com/onosproject/helmit/pkg/test.(*Worker).runTests.func1(0xc0002c2c60)
        	/Users/adibrastegarnia/go/pkg/mod/github.com/onosproject/[email protected]/pkg/test/worker.go:77 +0x5c
        testing.tRunner(0xc0002c2c60, 0xc000293c20)
        	/usr/local/go/src/testing/testing.go:991 +0xdc
        created by testing.(*T).Run
        	/usr/local/go/src/testing/testing.go:1042 +0x357
--- FAIL: gnmi (6.77s)

helmit looks for a container that does not exist

This is similar to a problem we had before in onos-test. I am not sure whether @ray-milkey fixed it there, but it is something we should look into. I only tested with the benchmark command, but I think it can potentially happen with other commands as well.


helmit bench ./cmd/kubernetes-benchmarks --context ./charts --benchmark BenchmarkMapPut --requests 1000
‣ 2020-03-31T17:22:38-07:00 kube-test Setup namespace
‣ 2020-03-31T17:22:38-07:00 kube-test Set up RBAC
✓ 2020-03-31T17:22:38-07:00 kube-test Set up RBAC
‣ 2020-03-31T17:22:38-07:00 star-mole Starting job
‣ 2020-03-31T17:22:38-07:00 star-mole Start job
✓ 2020-03-31T17:22:38-07:00 star-mole Start job
‣ 2020-03-31T17:22:40-07:00 star-mole Copy binary star-mole
✓ 2020-03-31T17:22:41-07:00 star-mole Copy binary star-mole
‣ 2020-03-31T17:22:41-07:00 star-mole Run binary star-mole
‣ 2020-03-31T17:22:41-07:00 star-mole Copy Helm context
✓ 2020-03-31T17:22:42-07:00 star-mole Copy Helm context
‣ 2020-03-31T17:22:42-07:00 star-mole Run job
✓ 2020-03-31T17:22:42-07:00 star-mole Starting job
‣ 2020-04-01T00:22:42Z star-mole-map Setup namespace
‣ 2020-04-01T00:22:42Z star-mole-map Set up RBAC
✓ 2020-04-01T00:22:42Z star-mole-map Set up RBAC
‣ 2020-04-01T00:22:42Z worker-0 Starting job
‣ 2020-04-01T00:22:42Z worker-0 Start job
✓ 2020-04-01T00:22:42Z worker-0 Start job
‣ 2020-04-01T00:22:44Z worker-0 Copy binary star-mole
✓ 2020-04-01T00:22:45Z worker-0 Copy binary star-mole
‣ 2020-04-01T00:22:45Z worker-0 Run binary star-mole
‣ 2020-04-01T00:22:45Z worker-0 Copy Helm context
✓ 2020-04-01T00:22:45Z worker-0 Copy Helm context
‣ 2020-04-01T00:22:45Z worker-0 Run job
✓ 2020-04-01T00:22:46Z worker-0 Starting job
‣ 2020-04-01T00:22:46Z map/0 SetupSuite map
2020/04/01 00:22:47 creating 1 resource(s)
2020/04/01 00:22:47 CRD clusters.cloud.atomix.io is already present. Skipping.
2020/04/01 00:22:47 creating 1 resource(s)
2020/04/01 00:22:47 CRD databases.cloud.atomix.io is already present. Skipping.
2020/04/01 00:22:47 creating 1 resource(s)
2020/04/01 00:22:47 CRD partitions.cloud.atomix.io is already present. Skipping.
2020/04/01 00:22:47 Clearing discovery cache
2020/04/01 00:22:47 beginning wait for 0 resources with timeout of 1m0s
2020/04/01 00:22:49 creating 5 resource(s)
2020/04/01 00:22:49 beginning wait for 5 resources with timeout of 0s
2020/04/01 00:22:51 Deployment is not ready: star-mole-map/atomix-controller. 0 out of 1 expected pods are ready
2020/04/01 00:22:53 Deployment is not ready: star-mole-map/atomix-controller. 0 out of 1 expected pods are ready
2020/04/01 00:22:55 Deployment is not ready: star-mole-map/atomix-controller. 0 out of 1 expected pods are ready
2020/04/01 00:22:57 Deployment is not ready: star-mole-map/atomix-controller. 0 out of 1 expected pods are ready
2020/04/01 00:22:59 Deployment is not ready: star-mole-map/atomix-controller. 0 out of 1 expected pods are ready
2020/04/01 00:23:01 Deployment is not ready: star-mole-map/atomix-controller. 0 out of 1 expected pods are ready
2020/04/01 00:23:03 creating 1 resource(s)
2020/04/01 00:23:03 beginning wait for 1 resources with timeout of 0s
‣ 2020-04-01T00:23:05Z star-mole-map Run benchmark BenchmarkMapPut
✓ 2020-04-01T00:23:05Z map/0 SetupSuite map
‣ 2020-04-01T00:23:05Z map/0 SetupWorker map
✓ 2020-04-01T00:23:05Z map/0 SetupWorker map
‣ 2020-04-01T00:23:05Z map/0 SetupBenchmark BenchmarkMapPut
✓ 2020-04-01T00:23:24Z map/0 SetupBenchmark BenchmarkMapPut
‣ 2020-04-01T00:23:24Z map/0 RunBenchmark BenchmarkMapPut
✓ 2020-04-01T00:23:55Z map/0 RunBenchmark BenchmarkMapPut
✓ 2020-04-01T00:23:55Z star-mole-map Run benchmark BenchmarkMapPut
BENCHMARK         REQUESTS   DURATION    THROUGHPUT       MEAN LATENCY   MEDIAN LATENCY   75% LATENCY   95% LATENCY   99% LATENCY
BenchmarkMapPut   1000       1.508972s   662.702820/sec   1.481565ms     1.338284ms       1.545552ms    2.167099ms    2.167099ms
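As a sanity check on the table above, the reported throughput is simply the request count divided by the run duration. A minimal sketch using the BenchmarkMapPut row's values:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Values taken from the BenchmarkMapPut row above.
	requests := 1000
	duration := 1508972 * time.Microsecond // 1.508972s

	throughput := float64(requests) / duration.Seconds()
	fmt.Printf("%.6f/sec\n", throughput) // prints 662.702820/sec, matching the table
}
```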
‣ 2020-04-01T00:23:55Z star-mole-map Delete namespace star-mole-map
rpc error: code = Unknown desc = an error occurred when try to find container "809e7ac0c118216bac4be9385ee5223faa15cf53c9dc39528aed9519fdcf1ee1": does not exist
✓ 2020-04-01T00:24:24Z star-mole-map Delete namespace star-mole-map

not a valid chart repository or cannot be reached error

I am recording this error here so we can figure out later what is going on.

test.go:65: test panicked: looks like "https://charts.atomix.io" is not a valid chart repository or cannot be reached: Get "https://charts.atomix.io/index.yaml": read tcp 10.244.0.6:57262->185.199.110.153:443: read: connection reset by peer

        goroutine 23 [running]:
        runtime/debug.Stack(0xc000587cd8, 0x1e3fce0, 0xc0006f86e0)
        	/home/travis/.gimme/versions/go1.14.7.linux.amd64/src/runtime/debug/stack.go:24 +0x9d
        github.com/onosproject/helmit/pkg/test.failTestOnPanic(0xc0001b1320)
        	/home/travis/gopath/pkg/mod/github.com/onosproject/[email protected]/pkg/test/test.go:65 +0x57
        panic(0x1e3fce0, 0xc0006f86e0)
        	/home/travis/.gimme/versions/go1.14.7.linux.amd64/src/runtime/panic.go:975 +0x3e3
        github.com/onosproject/helmit/pkg/test.RunTests(0xc0001b1320, 0x1dad5e0, 0x34dd5a0, 0x0, 0x0, 0x0)
        	/home/travis/gopath/pkg/mod/github.com/onosproject/[email protected]/pkg/test/test.go:91 +0x5ad
        github.com/onosproject/helmit/pkg/test.(*Worker).runTests.func1(0xc0001b1320)
        	/home/travis/gopath/pkg/mod/github.com/onosproject/[email protected]/pkg/test/worker.go:77 +0x5c
        testing.tRunner(0xc0001b1320, 0xc00012ace0)
        	/home/travis/.gimme/versions/go1.14.7.linux.amd64/src/testing/testing.go:1039 +0xdc
        created by testing.(*T).Run
        	/home/travis/.gimme/versions/go1.14.7.linux.amd64/src/testing/testing.go:1090 +0x372

Unavailable desc = transport is closing issue

I have seen this problem frequently when we run tests, specifically in Travis. Most of the time it happens when we delete the benchmark job. There are some existing issues around this topic related to Helm.

‣ 2020-08-17T17:20:00Z flying-kit Finishing job
‣ 2020-08-17T17:20:00Z flying-kit Deleting job
✗ 2020-08-17T17:20:34Z flying-kit Deleting job
✗ 2020-08-17T17:20:34Z flying-kit Finishing job
Error: rpc error: code = Unavailable desc = transport is closing
rpc error: code = Unavailable desc = transport is closing

Explanation of the helmit bench and helmit sim commands

I am new to helmit and to running micro-onos tests with it. I can run several basic helmit commands without errors, but I need more help understanding the operations performed by the benchmark and simulation commands. For example, I can run the helmit bench and helmit sim commands as follows:

helmit bench ./examples/benchmark/cmd --suite atomix --benchmark BenchmarkMapPut --context examples/charts --duration 5m

helmit sim ./examples/simulation/cmd --context examples/charts --duration 5m --set atomix-raft.clusters=3 --set atomix-raft.partitions=9 --set atomix-raft.backend.replicas=3

The above commands just run forever without terminating, even though a duration has been set. I also tried to see the pods and services produced by the above commands using kubectl get pods and kubectl get svc, but nothing related to the helmit commands was displayed in the kubectl output.

I also want to know how to see the logs produced by the above helmit commands.

What is the difference between helmit bench and helmit sim?

Thanks in advance for the help.
