
e2e-framework's Introduction

E2E Framework


A Go framework for end-to-end testing of components running in Kubernetes clusters.

The primary goal of this project is to provide a go test(able) framework that uses the native Go testing API to define end-to-end test suites that can be used to test Kubernetes components. Some additional goals include:

  • Provide a sensible programmatic API to compose tests
  • Leverage Go's testing API to compose test suites
  • Expose packages that are easy to programmatically consume
  • Provide a collection of helper functions that abstract client-go functionality
  • Rely on built-in Go test features to easily select/filter tests to run during execution
  • And more

For more detail, see the design document.

Getting started

The Go package is designed to be integrated directly in your test. Simply update your project to pull the desired Go modules:

go get sigs.k8s.io/e2e-framework/pkg/env
go get sigs.k8s.io/e2e-framework/klient

Using the framework

The framework uses the built-in Go testing framework directly to define and run tests.

Setup TestMain

Use function TestMain to define package-wide testing steps and configure behavior. The following example uses pre-defined steps to create a KinD cluster before running any test in the package:

import (
    "os"
    "testing"

    "sigs.k8s.io/e2e-framework/pkg/env"
    "sigs.k8s.io/e2e-framework/pkg/envconf"
    "sigs.k8s.io/e2e-framework/pkg/envfuncs"
)

var (
    testenv env.Environment
)

func TestMain(m *testing.M) {
    testenv = env.New()
    kindClusterName := envconf.RandomName("my-cluster", 16)
    namespace := envconf.RandomName("myns", 16)

    // Use pre-defined environment funcs to create a kind cluster
    // and a test namespace prior to the test run
    testenv.Setup(
        envfuncs.CreateKindCluster(kindClusterName),
        envfuncs.CreateNamespace(namespace),
    )

    // Use pre-defined environment funcs to tear down the namespace
    // and the kind cluster after the tests complete
    testenv.Finish(
        envfuncs.DeleteNamespace(namespace),
        envfuncs.DestroyKindCluster(kindClusterName),
    )

    // launch package tests
    os.Exit(testenv.Run(m))
}

Define a test function

Use a Go test function to define features to be tested as shown below:

func TestKubernetes(t *testing.T) {
    f1 := features.New("count pod").
        WithLabel("type", "pod-count").
        Assess("pods from kube-system", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {
            var pods corev1.PodList
            err := cfg.Client().Resources("kube-system").List(context.TODO(), &pods)
            if err != nil {
                t.Fatal(err)
            }
            if len(pods.Items) == 0 {
                t.Fatal("no pods in namespace kube-system")
            }
            return ctx
        }).Feature()

    f2 := features.New("count namespaces").
        WithLabel("type", "ns-count").
        Assess("namespace exist", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {
            var nspaces corev1.NamespaceList
            err := cfg.Client().Resources().List(context.TODO(), &nspaces)
            if err != nil {
                t.Fatal(err)
            }
            if len(nspaces.Items) == 1 {
                t.Fatal("no other namespace")
            }
            return ctx
        }).Feature()

    // test feature
    testenv.Test(t, f1, f2)
}

Running the test

Use the Go testing tooling to run the tests in the package as shown below. The following runs all tests except those labeled type=ns-count:

go test ./package -args --skip-labels="type=ns-count"

Examples

See the ./examples directory for additional examples showing how to use the framework.

Community, discussion, contribution, and support

Learn how to engage with the Kubernetes community on the community page.

You can reach the maintainers of this project at:

Code of conduct

Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.

e2e-framework's People

Contributors

andrewsykim, cartermckinnon, cpanato, crandles, dependabot[bot], dmvolod, embano1, fracasula, fricounet, harshanarayana, johnschnake, k8s-ci-robot, komalsukhani, maruina, matrus2, maximilianbraun, mhofstetter, mitchmckenzie, nikhita, phisco, piotrkpc, pmalek, reetasingh, ronensc, ryankwilliams, shwethakumbla, tech-geek29, v0lkc, vladimirvivien, wzshiming


e2e-framework's Issues

Context values are not propagated through Before and Setup steps

In the Before and Setup test steps I tried to return a new context with a value based on the parent context. For example:

testenv.Setup(func(ctx context.Context) (context.Context, error) {
    metadata := getMetadata(..)
    ctx = context.WithValue(ctx, "some-metadata", metadata)
    return ctx, nil
})

When I then try to access the context value from the Assess step, the value is not available. I was wondering if context values are expected to be propagated throughout the various test steps.
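
For reference, the read side that fails could look like the following sketch (the "some-metadata" key comes from the snippet above; error handling elided):

func TestMetadata(t *testing.T) {
    f := features.New("metadata propagation").
        Assess("metadata is visible", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {
            // Expectation: the value stored by Setup is still on the context here.
            if ctx.Value("some-metadata") == nil {
                t.Fatal("metadata set in Setup was not propagated to Assess")
            }
            return ctx
        }).Feature()
    testenv.Test(t, f)
}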

Support namespace creation/deletion per test case

The upstream Kubernetes e2e tests follow the pattern of using a dedicated namespace per test case. This helps with resource cleanup and isolation. It would be great if the e2e framework provided this mechanism automatically or via opt-in. Perhaps a single testenv, or a feature of a given testenv, could provide an isolated namespace with a generated name.
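
For illustration, a minimal sketch of what the opt-in could look like with per-test hooks (imports elided; the hook signature and the matching cleanup are assumptions, not a committed design):

testenv.BeforeEachTest(func(ctx context.Context, cfg *envconf.Config, t *testing.T) (context.Context, error) {
    // Create a uniquely named namespace for the upcoming test case.
    ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: envconf.RandomName("test-ns", 16)}}
    if err := cfg.Client().Resources().Create(ctx, ns); err != nil {
        return ctx, err
    }
    // Stash the name so a matching AfterEachTest hook can delete the namespace.
    return context.WithValue(ctx, "namespace", ns.Name), nil
})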

Simplify how Environment is created

Now that the klient package has landed, the way an environment is created should be revisited. The Environment type should use a klient value to create and keep track of its internal *rest.Client value.

  • Update doc with new approach
  • Implement changes

Logging Infra

Opening this issue with reference to the discussion from #73 (comment)

In order to ease control of the logging format and level across the framework, we need a way to configure logging at the framework level that can be inherited by each component of the framework.

It would be great if the verbosity could be controlled while running tests, as klog does with the --v n flag.
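
For illustration, the desired invocation might look like this (hypothetical; such a flag does not exist in the framework yet):

go test ./... -args --v=4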

AfterXXX hooks should have access to the object in question

For example, AfterFeature currently provides a hook that knows about the env but not the feature itself.

Hooks like this are where, in the k/k framework, we would tie in things like custom logging solutions and even Sonobuoy progress updates.

I think the signatures should be modified to make more information available to the caller. Even in BeforeXXX it would be reasonable to want to do something that involves knowing the name of the test/feature or some assertion it is going to make.
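
One possible shape for the enriched hook signature (an illustration of the request, not an agreed design):

// Hypothetical: the hook also receives the feature under execution, so the
// caller can log its name or report progress to an external system.
type FeatureEnvFunc func(ctx context.Context, cfg *envconf.Config, feature features.Feature) (context.Context, error)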

Introduce `env.Environment.BeforeEachFeature` and `env.Environment.AfterEachFeature`

After #36, it would be nice to introduce pre- and post-operation hooks for feature tests, named BeforeEachFeature and AfterEachFeature. These lifecycle hooks would be executed in the order shown:

- env.Environment.Setup
- <TestFunction>
  - env.Environment.BeforeEachTest
  - env.Test(env.Environment.BeforeEachFeature <feature> env.Environment.AfterEachFeature)
  - env.Environment.AfterEachTest
- env.Environment.Finish

Feature Builder doesn't support adding Step by name

The feature builder exposes a nice set of APIs for building setup and teardown methods, but they auto-generate the name of the step being run:

fmt.Sprintf("%s-setup", b.feat.name)

However, this can be a bit of a problem when debugging. There is a really useful builder method called WithStep that lets you customize the name of the step; however, the Level argument it takes comes from pkg/internal/types/level.go:

// WithStep adds a new step that will be applied prior to feature test.
func (b *FeatureBuilder) WithStep(name string, level Level, fn Func) *FeatureBuilder {
	b.feat.steps = append(b.feat.steps, newStep(name, level, fn))
	return b
}

This means we can't really use or invoke it from anywhere outside the module, so the examples can't use it either, nor can anything that imports e2e-framework as a dependency. Does it make sense to move some bits from pkg/internal/types to a reusable package?
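
For illustration, if Level lived in a public package, callers could name their steps directly (the types package path, the LevelSetup constant, and the step functions here are all hypothetical):

feat := features.New("my feature").
    WithStep("load-fixtures", types.LevelSetup, loadFixtures).
    Assess("check", checkFn).
    Feature()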

Create a SECURITY_CONTACTS file.

As per the email sent to kubernetes-dev[1], please create a SECURITY_CONTACTS
file.

The template for the file can be found in the kubernetes-template repository[2].
A description for the file is in the steering-committee docs[3], you might need
to search that page for "Security Contacts".

Please feel free to ping me on the PR when you make it, otherwise I will see when
you close this issue. :)

Thanks so much, let me know if you have any questions.

(This issue was generated from a tool, apologies for any weirdness.)

[1] https://groups.google.com/forum/#!topic/kubernetes-dev/codeiIoQ6QE
[2] https://github.com/kubernetes/kubernetes-template-project/blob/master/SECURITY_CONTACTS
[3] https://github.com/kubernetes/community/blob/master/committee-steering/governance/sig-governance-template-short.md

Get Labels into examples

Let's add the Label functionality to the examples; I think labels aren't being used in the filters properly right now...

Upgrade controller-runtime to latest version

I noticed that the version of controller-runtime used is v0.9.0.

Is it possible to bump this to a later version of controller-runtime? It seems that v0.9.0 of controller-runtime uses an older version of spf13/cobra which has a few security vulnerabilities (which aren't actually ever executed).

Happy to open a PR! Thanks much!

More documentation

Proper documentation is needed to walk users/adopters through:

  • Getting started
  • Different test suite scenarios (starting with a kubeconfig, generating a kubeconfig, using kubetest2, etc.)
  • Testing built-in objects
  • Testing custom resources
  • Mixing with other frameworks
  • Etc.

Add design docs to repo

The original design doc that started this project is still a Google Doc. Since the design is evolving fast, the doc should be moved into this repo to keep pace with the changes.

k8s resource watcher implementation

Kubernetes object watchers are a great piece of functionality provided by k8s for getting efficient change notifications on resources.
The events supported by these watchers are:

  1. ADD
  2. MODIFY/UPDATE
  3. DELETE

The idea here is to make the developer's implementation easier. Without knowing the core resource type of the k8s objects, they just register their actions/functions for the respective watch events using the provision provided by this framework; to stay informed about when these events get triggered, they just use Watch(), which resides in the klient/k8s/resources package.

Proposal

The Watch function accepts an ObjectList argument. The ObjectList type is used to inject the resource type to which Watch should be applied.

klient/k8s/resources/resources.go

import (
    "context"
    "log"

    "k8s.io/apimachinery/pkg/watch"
    "sigs.k8s.io/controller-runtime/pkg/client"
)

// Watch creates a watch-capable client and starts a watch on the given
// resource list type.
func (r *Resources) Watch(ctx context.Context, object client.ObjectList, opts client.ListOptions) (watch.Interface, error) {
    cl, err := client.NewWithWatch(r.config, client.Options{}) // r.config: the rest.Config held by Resources
    if err != nil {
        log.Println("error while creating a watcher client:", err)
        return nil, err
    }

    watcher, err := cl.Watch(ctx, object, &opts)
    if err != nil {
        log.Println("error while starting the watch:", err)
        return nil, err
    }

    return watcher, nil
}

Watch() in resources.go returns the watcher (plus any error), which is then used to call InvokeEventHandler(). InvokeEventHandler accepts an EventHandlerFuncs value that carries the user-registered function set.

file : klient/k8s/resources/watch.go

// InvokeEventHandler triggers the registered methods based on the event
// received for a particular k8s resource.
func InvokeEventHandler(watcher watch.Interface, f EventHandlerFuncs) {
    for event := range watcher.ResultChan() {
        switch event.Type {
        case watch.Added:
            f.Add(event.Object)
        case watch.Modified:
            f.Update(event.Object)
        case watch.Deleted:
            f.Delete(event.Object)
        }
    }
}


// EventHandlerFuncs carries the user-registered callbacks for each event type.
type EventHandlerFuncs struct {
    AddFunc    func(obj interface{})
    UpdateFunc func(obj interface{})
    DeleteFunc func(obj interface{})
}

func (e EventHandlerFuncs) Add(obj interface{}) {
    if e.AddFunc != nil {
        e.AddFunc(obj)
    }
}

func (e EventHandlerFuncs) Update(newObj interface{}) {
    if e.UpdateFunc != nil {
        e.UpdateFunc(newObj)
    }
}

func (e EventHandlerFuncs) Delete(obj interface{}) {
    if e.DeleteFunc != nil {
        e.DeleteFunc(obj)
    }
}
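
For completeness, a hypothetical usage of the proposed API from inside a test, assuming a *Resources value res and the signatures sketched above:

var pods corev1.PodList
watcher, err := res.Watch(ctx, &pods, client.ListOptions{Namespace: "kube-system"})
if err != nil {
    t.Fatal(err)
}
defer watcher.Stop()

// Block and dispatch incoming events to the registered callbacks.
InvokeEventHandler(watcher, EventHandlerFuncs{
    AddFunc:    func(obj interface{}) { log.Println("pod added") },
    DeleteFunc: func(obj interface{}) { log.Println("pod deleted") },
})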

Support Helm chart

As a test writer, I should be able to deploy helm charts.

Design

The framework should provide a type that allows test writers to install helm charts

type Helm []struct {
    Name        string
    Namespace   string
    ReleaseName string
    Version     string
}

Type Helm should also include a method to install the helm chart

Example

func TestMain(m *testing.M){
	testenv = env.New()
	kindClusterName := envconf.RandomName("kind-with-config", 16)
	namespace := envconf.RandomName("kind-ns", 16)

	testenv.Setup(
		envfuncs.CreateKindClusterWithConfig(kindClusterName, "kindest/node:v1.22.2", "kind-config.yaml"),
		envfuncs.CreateNamespace(namespace),
	)

	testenv.Finish(
		envfuncs.DeleteNamespace(namespace),
		envfuncs.DestroyKindCluster(kindClusterName),
	)
	os.Exit(testenv.Run(m))
}

func TestHelmChart(t *testing.T) {
	helmInfo := Helm{{
		Name:        "nginx",
		Namespace:   "default",
		ReleaseName: "nginx-stable/nginx-ingress",
		Version:     "latest",
	}}

	tests := features.New("Setup Helm Chart").
		SetupFromHelm(helmInfo). // SetupFromHelm is the proposed method
		Feature()

	testenv.Test(t, tests)
}

BuildFromJSON function support

The JSON encoding/decoding implementation needs thinking through, so that it can support parsing any kind of k8s object structure.

Support skipping tests by labels

I would like to skip a set of feature tests based on the labels they have.

Currently the framework supports a --labels flag which allows running a subset of tests by label. A similar flag, like --skip-labels, might be the right approach.
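
For example, the existing flag and the proposed inverse, side by side:

go test ./... -args --labels="type=cluster_ip"       # run only matching tests (exists today)
go test ./... -args --skip-labels="type=cluster_ip"  # skip matching tests (proposed)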

Rename the master branch to main

This project is in its start phase, so it is good to rename the branch from master to main right away 😄

I can do that, but I don't have the admin permission to do so.

@spiffxp would you mind doing this? I think you have the right permissions

cc @vladimirvivien

/kind cleanup

Flags support seems broken when test binary generated

The framework is designed to work with or without flags. As such, flags are not parsed early, to avoid forcing test writers to deal with unnecessary CLI flag parsing. However, it seems that flag handling breaks when a test binary is generated:

go test -c -o test.test .

When the binary is executed, it produces an error:

./test.test --kubeconfig /Users/vivienv/.kube/config
 no configuration has been provided, try setting KUBERNETES_MASTER environment variable

Wait returns exception without tearing down the KinD cluster if time-out occurs

An error is raised from a goroutine while using wait.For(conditions.New(client.Resources()).ResourceMatch(&resultDeployment, func(object k8s.Object) bool {...})) when the time-out occurs.
Also, the Teardown in the feature step and the Finish in the main step didn't execute when this error happened.

The function I tried to run:

func TestDeployment(t *testing.T) {
	deploymentFeat := features.New("Test").
		Setup(func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {
			deployment := newDeployment()
			<Logic>
			return ctx
		}).
		Assess("Pods successfully deployed", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {
			client, err := cfg.NewClient()
			if err != nil {
				t.Error("Failed to create new client", err)
			}
			resultDeployment := appsv1.Deployment{
				ObjectMeta: metav1.ObjectMeta{Name: "deployment-test", Namespace: cfg.Namespace()},
			}

			if err = wait.For(conditions.New(client.Resources()).DeploymentConditionMatch(&resultDeployment, appsv1.DeploymentAvailable, corev1.ConditionTrue),
				wait.WithTimeout(time.Minute*2)); err != nil {
				t.Error("deployment not found", err)
			}

			if err := wait.For(conditions.New(client.Resources()).ResourceMatch(&resultDeployment, func(object k8s.Object) bool {
				<Logic>
				return true
			}), wait.WithTimeout(time.Minute*4)); err != nil {
				t.Error("error", err)
			}
			return context.WithValue(ctx, "deployment-test", &resultDeployment)
		}).
		Teardown(func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {
			client, err := cfg.NewClient()
			if err != nil {
				t.Error("failed to create new Client", err)
			}
			dep := ctx.Value("deployment-test").(*appsv1.Deployment)
			if err := client.Resources().Delete(ctx, dep); err != nil {
				t.Error("failed to delete deployment", err)
			}
			return ctx
		}).Feature()
	testenv.Test(t, deploymentFeat)
}

Error: Uploaded here

Type: Bug
Version: v0.5
KinD version: kind v0.11.1 go1.16.4 darwin/amd64

Integrate klient in Test Env type

When type Environment was introduced, package klient did not exist. Now that we have the helper package, the Environment type should be updated to use it directly instead.

Question about test package layout

The examples in the repository use only one package (main) to contain main_test and the Go files containing the tests.
What would be the suggested approach for using multiple packages?

from

suites
├── featureset_test.go
├── filter_test.go
├── hello_test.go
└── main_test.go

to

suites
├── main_test.go
├── hello_test.go
├── somepackage
│   ├── featureset_test.go
│   └── ...
└── other
    ├── filter_test.go
    └── ...
In godog this is explicit, using InitializeScenario

Add more examples

We should add more examples showing:

  • How to create/manage multiple API server client connections
  • How to use the klient package in stand-alone tests (no harness framework)
  • How to use the test framework with the Testify assertion framework
  • How to use the test framework with the Gomega assertion framework
  • How to use the klient package with another test harness framework like Ginkgo (maybe?)
  • How to do table-driven testing with the test harness framework
  • How to use the test framework with the kubetest2 framework (what can be done?)
  • And more


Change `env.Environment.BeforeTest` to `env.Environment.BeforeEachTest`

Currently, the env.Environment method that registers pre-test callbacks is named BeforeTest. That name could be more descriptive, to indicate the frequency at which the operations registered with the method are executed during a test run.

By changing the name to BeforeEachTest, test writers can clearly deduce when the callbacks are executed during framework execution, as shown in the following steps:

- env.Environment.Setup
- TestFunction
  - env.Environment.BeforeEachTest
  - env.Test(feature)
  - env.Environment.AfterEachTest
- env.Environment.Finish

Execute test features in parallel

Currently all tests are executed serially. This issue is to request that the framework also support parallel test execution.

This originated from #17

Feature testing

A feature is considered a unit of testable logic in the code. As such, the step functions that make up a feature should always be executed serially, as a unit. This guarantees predictable execution flow and predictable context propagation.

func TestFunc(t *testing.T) {
    f := feature.New("my feature").Assess("check1", ...).Assess("check2", ...).Feature()
    env.Test(f)
}

In the previous snippet, assessments "check1" and "check2" will be executed serially as part of the feature.

Concurrent feature testing

A test function with multiple features, however, should be able to exercise them concurrently, as shown below:

    f0 := feature.New("my feature").Assess("check1", ...).Assess("check2", ...).Feature()
    f1 := feature.New("my feature").Assess("check3", ...).Assess("check4", ...).Feature()
    f2 := feature.New("my feature").Assess("check5", ...).Assess("check6", ...).Feature()

    env.TestInParallel(f0, f1, f2)

Features f0, f1, f2 should be executed concurrently.

Note env.Test(f) and env.TestInParallel(f) should be equivalent.

Forcing concurrent execution

It is convenient to be able to force tests to execute concurrently. The code should support the ability to execute all tests in a package concurrently via an environment configuration.

This can be done programmatically as shown below:

func TestMain(m *testing.M) {
    testenv := env.NewWithConfig(envconf.New().WithParallelTestEnabled())
}

Or driven by the --parallel flag, by creating the environment configuration from CLI flags as shown:

func TestMain(m *testing.M) {
    cfg, _ := envconf.NewFromFlags() // error handling elided in this sketch
    testenv := env.NewWithConfig(cfg)
}

Then, tests are executed as

go test ./package -args --parallel

When configured for parallel testing, env.Test(f0, f1, f2) and env.TestInParallel(f0, f1, f2) are equivalent.

Dry run

Supply a dry-run flag so we can check which tests would be run given the current flags.
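
For illustration, an invocation might look like this (the flag name is hypothetical):

go test ./package -args --dry-run --labels="type=smoke"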

Avoid sharing *testing.T during feature execution context

Currently, when Environment.Test(...) is called from within a test function as shown below:

func TestSomething(t *testing.T) {
    f := feature.New().Assess(...).Feature()
    env.Test(t, f)
}

Only one instance of t is passed around from feature to feature. This can cause issues such as early termination of feature tests. A better approach is to create a new *testing.T for each feature; that way, if a feature fails, the rest of the features continue to execute (within the same Test function).

Support kubectl most used commands

As a test writer, I should be able to run kubectl commands, i.e. apply, exec, logs, ...

Design

The framework should provide a way to run YAML files to deploy different k8s types (deployments, etc.).

Example

KubectlApply(cfg.KubeconfigFile(), "namespace", "-f","./filepath.yaml")

What does this repo aim at?

  • A replacement of k8s.io/kubernetes/test/e2e/framework
  • An operator test framework
  • A generic e2e framework
  • Others

I'm looking for an e2e framework so I can avoid importing k8s.io/kubernetes in my project.

BTW: what is the status of this repo? Any future plans?

FeatureInfo issues

FeatureInfo was added so that we could use the feature info in the before/after hooks.

This was all well and good, but the examples happened not to exercise the before/after feature hooks; they only exercised the before/after test hooks at an integration level.

As a result, we didn't realize that the FeatureInfo type was put under the internal package. This means that when someone tries to actually use the BeforeEachFeature hooks, the type can only be referenced as an interface{}, since it isn't exported (it lives in the internal pkg).

However, moving it into the intuitive pkg/feature location causes an import cycle.

Lastly, while working with this I wanted the After hook to know about the result of the test. Did it run, pass, or get skipped? If this information can be added to the FeatureInfo object, that would be wonderful.

Expect permutations

It would be cool if we could do pytest-style xfails, or ExpectFails of some sort, i.e.:

  1. Write a test
  2. Be able to specify, at runtime, that you expect it to fail
  3. Pass the test suite even though some failures happened, because of the inputs to (2)

Introduce support for Waiting for given cluster conditions

As a test writer of components running on Kubernetes, it would be extremely useful to have the capability to wait for one or more cluster conditions before proceeding in the test.

A framework for Waiting

A new package klient/wait could be the starting point for an API to express conditions for waiting, by leveraging the wait package in API machinery ("k8s.io/apimachinery/pkg/util/wait"):

package wait

func For(cond func() (bool, error)){...}

So a test writer may write a condition as follows:

func TestSomething(t *testing.T) {
    wait.For(t, func() (bool, error) {
        var ns coreV1.Namespace
        cfg.client.List(ns)
        ...
    })
}

Pre-defined conditions

The framework could ship with a collection of pre-defined conditions that test writers can use:

func TestSomething(t *testing.T) {
   var pod coreV1.Pod
   wait.For(PodReadyCondition(pod))
}

Arguments not recognized

We are using E2E 0.0.4 on https://github.com/K8sbykeshed/k8s-service-lb-validator

I'm using go test ./... -args --skip-labels="type=cluster_ip" to skip tests with this label, but I receive the following error:

{"level":"info","ts":1637161494.262519,"caller":"matrix/manager.go:227","msg":"Server is ready","case":"81->80,TCP"}
flag provided but not defined: -skip-labels
Usage of /tmp/go-build1250012764/b001/k8s-service-lb-validator.test:
  -test.bench regexp
    	run only benchmarks matching regexp
  -test.benchmem
    	print memory allocations for benchmarks
  -test.benchtime d
    	run each benchmark for duration d (default 1s)
  -test.blockprofile file
    	write a goroutine blocking profile to file
  -test.blockprofilerate rate
    	set blocking profile rate (see runtime.SetBlockProfileRate) (default 1)
  -test.count n
    	run tests and benchmarks n times (default 1)
  -test.coverprofile file
    	write a coverage profile to file
  -test.cpu list

I'm wondering if something is bootstrapped incorrectly in our codebase.
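
One thing worth checking (a sketch, not a confirmed diagnosis): the framework registers and parses its flags only when the environment configuration is built from flags, so TestMain would need to look something like:

var testenv env.Environment

func TestMain(m *testing.M) {
    // NewFromFlags parses framework flags such as --labels/--skip-labels
    // from the arguments passed after -args.
    cfg, err := envconf.NewFromFlags()
    if err != nil {
        log.Fatalf("failed to build envconf from flags: %v", err)
    }
    testenv = env.NewWithConfig(cfg)
    os.Exit(testenv.Run(m))
}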

structured data output

@perithompson would like to dump YAML from table tests such as those in the https://github.com/K8sbykeshed/k8s-service-lb-validator framework. Could we do something like the following?

Taking a struct such as this, it would be nice to put it directly into the test output as a readable/exportable entity:

// Reachability packages the data for a cluster-wide connectivity probe
type Reachability struct {
	Expected []string
	Observed []string
	Pods     []*Pod
}

... pseudocode sketched with @vladimirvivien @jackielii


type MyYAML struct{}

// pseudocode: render each table result as a line of output (MyTable is hypothetical)
func (y *MyYAML) Write(tableOutput interface{}) string {
    s := ""
    for _, result := range *tableOutput.(*MyTable) {
        s += result + "\n"
    }
    return s
}

func myTest(t *testing.T) {
    e := env.NewWithConfig(envconf.New())
    feat := features.New("Hello Feature").
        WithLabel("type", "simple").
        Assess("test message", func(ctx context.Context, t *testing.T, _ *envconf.Config) context.Context {
            result := Hello("foo")
            if result != "Hello foo" {
                t.Error("unexpected message")
            }
            return ctx
        })

    // pseudocode: proposed hook for attaching a structured output writer
    e.WithOutputWriter(Test(t, feat.Feature()), MyYAML{})
}



Change `env.Environment.Test` to execute feature sets

Currently, method env.Environment.Test can only test one feature at a time:

func TestFunction(t *testing.T) {
    testenv := env.New()
    f := feature.New(...).Assess(...)
    testenv.Test(t, f.Feature())
}

As a test writer, I would like for env.Environment.Test to be able to test a feature set consisting of one or more features as shown:

func TestFunction(t *testing.T) {
    testenv := env.New()
    f0 := feature.New(...).Assess(...)
    f1 := feature.New(...).Assess(...).Assess(...)
    f2 := feature.New(...).Assess(...).Teardown(...)
    testenv.Test(t, f0.Feature(), f1.Feature(), f2.Feature())
}

support running all tests in parallel

The k/k framework uses Ginkgo, which has a parallel mode (-p) for tests that were not explicitly defined as "parallel". This allows running the whole suite in parallel with N workers.

While tests in this framework can use t.Parallel() to opt in to parallel execution, perhaps it would be possible to override that and run all tests in parallel, unless certain tests are "serial only".

The k/k framework does have the [Serial] tag for that, and users of the parallel mode can skip those tests with -skip.
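
For reference, today's per-test opt-in uses the standard library directly:

func TestSomething(t *testing.T) {
    t.Parallel() // runs alongside other parallel tests in the package
    // ...
}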

Table-driven tests

While creating example tests, I came to the realization that table-driven tests would be useful with the framework. While the framework provides a nice and structured way of encapsulating assessments, a table-driven version would be more useful when testing multiple cases of the same functionality.

As a test writer, I should be able to represent my assessments as a series of tests that are driven using Go's table-driven test convention.

Design

The framework should provide a type that allows test writers to capture assessments in a repeatable structure as follows:

type Table []struct {
	Name        string
	Description string
	Labels      Labels
	Assessment  Func
}

Type Table should also include a method that automatically gathers the assessment data and constructs the proper e2e feature tests, as shown below:

env.Test(t, table.Features()...) 

Example

func TestMain(m *testing.M){
	test.Setup(func(ctx context.Context, config *envconf.Config) (context.Context, error) {
		rnd := rand.New(rand.NewSource(time.Now().UnixNano()))
		return context.WithValue(context.WithValue(ctx, "limit", rand.Int31n(255)), "randsrc", rnd), nil
	})

	os.Exit(test.Run(m))
}

func TestTableDriven(t *testing.T) {
	tests := features.Table{
		{
			Name: "less than equal 64",
			Assessment: func(ctx context.Context, t *testing.T, config *envconf.Config) context.Context {
				rnd := ctx.Value("randsrc").(*rand.Rand)  // in real test, check asserted type
				lim := ctx.Value("limit").(int32) // check type assertion
				if rnd.Int31n(int32(lim)) > 64 {
					t.Error("limit should be less than 64")
				}
				return ctx
			},
		},
		{
			Name: "more than than equal 128",
			Assessment: func(ctx context.Context, t *testing.T, config *envconf.Config) context.Context {
				rnd := ctx.Value("randsrc").(*rand.Rand)  // in real test, check asserted type
				lim := ctx.Value("limit").(int32) // check type assertion
				if rnd.Int31n(int32(lim)) > 128 {
					t.Error("limit should be less than 128")
				}
				return ctx
			},
		},
	}

	test.Test(t, tests.Features()...)
}

Provide a `envconfig.Config` method to inject a client from kubeconfig file.

The example here reveals the multiple steps necessary to create a klient.Client from a kubeconfig file and then inject that client into the environment's configuration:

func(ctx context.Context, cfg *envconf.Config) (context.Context, error) {

It would be nice if the envconf.Config type provided a method to handle the injection of the client in one step similar to:

func TestMain(m *testing.M){
    testenv.Setup(
        func(ctx context.Context, cfg *envconf.Config) (context.Context, error) { 
            cfg.WithKubeconfigFile(kubeconf_file_path) // this would create a new klient.Client, then inject it in the cfg
        },
    )
    ...
}

New env should default to in-cluster config

When putting together a simple API-call example, I had to do the following:

c, err := envconf.NewWithKubeconfig("")
if err != nil {
    t.Fatalf("Failed to get in-cluster config: %v", err)
}
e := env.NewWithConfig(c)

I would have expected e := env.New() to use a default in-cluster config or something similar, but I got a panic.

If I did something wrong let me know, but otherwise I think a tweak should be made to avoid those four extra lines of in-cluster-config boilerplate.

Create example with per-test namespaces

Talked about in TGIK 170: each test often needs its own namespace, but it's not completely trivial where/how to define it so that each test gets a unique namespace and the namespaces are cleaned up appropriately.

Provide a flag to skip running the Finish function

In order to debug failed tests, there is sometimes a need to skip running the Finish function.

For example: create a cluster in the Setup function and delete it in the Finish function.
If any of the tests fail and the run logs are not sufficient to root-cause the issue, we may need to skip running the Finish function so that one can log in to the live problematic cluster and troubleshoot.

Let me know if any further details are required.
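
For illustration, the invocation might look like this (the flag name is hypothetical):

go test ./package -args --skip-teardown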

have a summary on Environment object

The default test logs are not very helpful: they print every test, and once you add your own logging it gets very annoying to understand what's going on. For example:

--- PASS: TestClusterIP (4.14s)
    --- PASS: TestClusterIP/Cluster_IP (0.11s)
        --- PASS: TestClusterIP/Cluster_IP/the_cluster_ip_should_be_reachable. (0.11s)
=== RUN   TestNodePort
=== RUN   TestNodePort/Node_Port
=== RUN   TestNodePort/Node_Port/the_host_should_reachable_on_node_port
--- PASS: TestNodePort (0.00s)
    --- PASS: TestNodePort/Node_Port (0.00s)
        --- PASS: TestNodePort/Node_Port/the_host_should_reachable_on_node_port (0.00s)
=== RUN   TestExternalService
=== RUN   TestExternalService/External_Service
=== RUN   TestExternalService/External_Service/the_external_DNS_should_be_reachable_via_local_service
--- PASS: TestExternalService (0.00s)
    --- PASS: TestExternalService/External_Service (0.00s)
        --- PASS: TestExternalService/External_Service/the_external_DNS_should_be_reachable_via_local_service (0.00s)

On the other hand, Ginkgo has a summary of the tests, something like which tests were run/failed/passed, so there is no need to check the console for the process exit status or grep around the log. An example from a k/k e2e run:

I0522 22:23:31.910] Ran 435 of 5765 Specs in 1549.428 seconds
I0522 22:23:31.910] SUCCESS! -- 435 Passed | 0 Failed | 0 Pending | 5330 Skipped
I0522 22:23:31.919] 
I0522 22:23:31.920] 
I0522 22:23:31.920] Ginkgo ran 1 suite in 25m55.922452206s
I0522 22:23:31.920] Test Suite Passed
I0522 22:23:31.929] Checking for custom logdump instances, if any

Perhaps this could be added to the configuration of the env? The testEnv struct could have another flag, like summary bool.
