
e2e's People

Contributors

bwplotka, clyang82, douglascamata, giedriuss, jessicalins, matej-g, michahoffmann, philipgough, rasek91, saswatamcode, squat, utukj, yeya24

e2e's Issues

Interrupting in standalone mode propagates to docker containers (?)

Repro:

  • make run-example
  • Ctrl+C

Logs (after interrupt):

^C14:48:03 Killing query-1
14:48:03 sidecar-2: level=info name=sidecar-2 ts=2021-07-24T11:48:03.676445174Z caller=main.go:167 msg="caught signal. Exiting." signal=interrupt
14:48:03 sidecar-2: level=warn name=sidecar-2 ts=2021-07-24T11:48:03.676527331Z caller=intrumentation.go:54 msg="changing probe status" status=not-ready reason=null
14:48:03 sidecar-2: level=info name=sidecar-2 ts=2021-07-24T11:48:03.676541775Z caller=http.go:74 service=http/server component=sidecar msg="internal server is shutting down" err=null
14:48:03 sidecar-1: level=info name=sidecar-1 ts=2021-07-24T11:48:03.676619483Z caller=main.go:167 msg="caught signal. Exiting." signal=interrupt
14:48:03 sidecar-1: level=warn name=sidecar-1 ts=2021-07-24T11:48:03.676682445Z caller=intrumentation.go:54 msg="changing probe status" status=not-ready reason=null
14:48:03 sidecar-1: level=info name=sidecar-1 ts=2021-07-24T11:48:03.676695729Z caller=http.go:74 service=http/server component=sidecar msg="internal server is shutting down" err=null
14:48:03 sidecar-2: level=info name=sidecar-2 ts=2021-07-24T11:48:03.677752224Z caller=http.go:93 service=http/server component=sidecar msg="internal server is shutdown gracefully" err=null
14:48:03 sidecar-2: level=info name=sidecar-2 ts=2021-07-24T11:48:03.677809395Z caller=intrumentation.go:66 msg="changing probe status" status=not-healthy reason=null
14:48:03 sidecar-2: level=warn name=sidecar-2 ts=2021-07-24T11:48:03.677847401Z caller=intrumentation.go:54 msg="changing probe status" status=not-ready reason=null
14:48:03 sidecar-2: level=info name=sidecar-2 ts=2021-07-24T11:48:03.677857689Z caller=grpc.go:130 service=gRPC/server component=sidecar msg="internal server is shutting down" err=null
14:48:03 sidecar-2: level=info name=sidecar-2 ts=2021-07-24T11:48:03.677875199Z caller=grpc.go:143 service=gRPC/server component=sidecar msg="gracefully stopping internal server"
14:48:03 sidecar-1: level=info name=sidecar-1 ts=2021-07-24T11:48:03.677912421Z caller=http.go:93 service=http/server component=sidecar msg="internal server is shutdown gracefully" err=null
14:48:03 sidecar-1: level=info name=sidecar-1 ts=2021-07-24T11:48:03.677972702Z caller=intrumentation.go:66 msg="changing probe status" status=not-healthy reason=null
14:48:03 sidecar-1: level=warn name=sidecar-1 ts=2021-07-24T11:48:03.67801026Z caller=intrumentation.go:54 msg="changing probe status" status=not-ready reason=null
14:48:03 sidecar-1: level=info name=sidecar-1 ts=2021-07-24T11:48:03.678022172Z caller=grpc.go:130 service=gRPC/server component=sidecar msg="internal server is shutting down" err=null
14:48:03 sidecar-1: level=info name=sidecar-1 ts=2021-07-24T11:48:03.678038023Z caller=grpc.go:143 service=gRPC/server component=sidecar msg="gracefully stopping internal server"
14:48:03 sidecar-1: level=info name=sidecar-1 ts=2021-07-24T11:48:03.678369251Z caller=grpc.go:156 service=gRPC/server component=sidecar msg="internal server is shutdown gracefully" err=null
14:48:03 sidecar-1: level=info name=sidecar-1 ts=2021-07-24T11:48:03.678437559Z caller=main.go:159 msg=exiting
14:48:03 sidecar-2: level=info name=sidecar-2 ts=2021-07-24T11:48:03.678758319Z caller=grpc.go:156 service=gRPC/server component=sidecar msg="internal server is shutdown gracefully" err=null
14:48:03 sidecar-2: level=info name=sidecar-2 ts=2021-07-24T11:48:03.678797963Z caller=main.go:159 msg=exiting
14:48:03 prometheus-1: level=warn ts=2021-07-24T11:48:03.695Z caller=main.go:653 msg="Received SIGTERM, exiting gracefully..."
14:48:03 prometheus-1: level=info ts=2021-07-24T11:48:03.696Z caller=main.go:676 msg="Stopping scrape discovery manager..."
14:48:03 prometheus-1: level=info ts=2021-07-24T11:48:03.696Z caller=main.go:690 msg="Stopping notify discovery manager..."
14:48:03 prometheus-1: level=info ts=2021-07-24T11:48:03.696Z caller=main.go:712 msg="Stopping scrape manager..."
14:48:03 prometheus-1: level=info ts=2021-07-24T11:48:03.696Z caller=main.go:686 msg="Notify discovery manager stopped"
14:48:03 prometheus-1: level=info ts=2021-07-24T11:48:03.696Z caller=main.go:672 msg="Scrape discovery manager stopped"
14:48:03 prometheus-1: level=info ts=2021-07-24T11:48:03.696Z caller=main.go:706 msg="Scrape manager stopped"
14:48:03 prometheus-1: level=info ts=2021-07-24T11:48:03.696Z caller=manager.go:934 component="rule manager" msg="Stopping rule manager..."
14:48:03 prometheus-1: level=info ts=2021-07-24T11:48:03.696Z caller=manager.go:944 component="rule manager" msg="Rule manager stopped"
14:48:03 query-1: level=info name=query-1 ts=2021-07-24T11:48:03.697417989Z caller=main.go:167 msg="caught signal. Exiting." signal=interrupt
14:48:03 query-1: level=warn name=query-1 ts=2021-07-24T11:48:03.697765153Z caller=intrumentation.go:54 msg="changing probe status" status=not-ready reason=null
14:48:03 query-1: level=info name=query-1 ts=2021-07-24T11:48:03.697813969Z caller=http.go:74 service=http/server component=query msg="internal server is shutting down" err=null
14:48:03 prometheus-2: level=warn ts=2021-07-24T11:48:03.697Z caller=main.go:653 msg="Received SIGTERM, exiting gracefully..."
14:48:03 prometheus-2: level=info ts=2021-07-24T11:48:03.697Z caller=main.go:676 msg="Stopping scrape discovery manager..."
14:48:03 prometheus-2: level=info ts=2021-07-24T11:48:03.697Z caller=main.go:690 msg="Stopping notify discovery manager..."
14:48:03 prometheus-2: level=info ts=2021-07-24T11:48:03.697Z caller=main.go:712 msg="Stopping scrape manager..."
14:48:03 prometheus-2: level=info ts=2021-07-24T11:48:03.697Z caller=main.go:672 msg="Scrape discovery manager stopped"
14:48:03 prometheus-2: level=info ts=2021-07-24T11:48:03.697Z caller=main.go:686 msg="Notify discovery manager stopped"
14:48:03 prometheus-2: level=info ts=2021-07-24T11:48:03.697Z caller=manager.go:934 component="rule manager" msg="Stopping rule manager..."
14:48:03 prometheus-2: level=info ts=2021-07-24T11:48:03.697Z caller=manager.go:944 component="rule manager" msg="Rule manager stopped"
14:48:03 prometheus-2: level=info ts=2021-07-24T11:48:03.697Z caller=main.go:706 msg="Scrape manager stopped"
14:48:03 query-1: level=info name=query-1 ts=2021-07-24T11:48:03.699077457Z caller=http.go:93 service=http/server component=query msg="internal server is shutdown gracefully" err=null
14:48:03 query-1: level=info name=query-1 ts=2021-07-24T11:48:03.699157713Z caller=intrumentation.go:66 msg="changing probe status" status=not-healthy reason=null
14:48:03 query-1: level=warn name=query-1 ts=2021-07-24T11:48:03.699192767Z caller=intrumentation.go:54 msg="changing probe status" status=not-ready reason=null
14:48:03 query-1: level=info name=query-1 ts=2021-07-24T11:48:03.699204094Z caller=grpc.go:130 service=gRPC/server component=query msg="internal server is shutting down" err=null
14:48:03 query-1: level=info name=query-1 ts=2021-07-24T11:48:03.699233377Z caller=grpc.go:143 service=gRPC/server component=query msg="gracefully stopping internal server"
14:48:03 query-1: level=info name=query-1 ts=2021-07-24T11:48:03.699338349Z caller=grpc.go:156 service=gRPC/server component=query msg="internal server is shutdown gracefully" err=null
14:48:03 query-1: level=info name=query-1 ts=2021-07-24T11:48:03.699371953Z caller=main.go:159 msg=exiting
14:48:03 prometheus-1: level=info ts=2021-07-24T11:48:03.703Z caller=notifier.go:601 component=notifier msg="Stopping notification manager..."
14:48:03 prometheus-1: level=info ts=2021-07-24T11:48:03.703Z caller=main.go:885 msg="Notifier manager stopped"
14:48:03 prometheus-1: level=info ts=2021-07-24T11:48:03.703Z caller=main.go:897 msg="See you next time!"
14:48:03 prometheus-2: level=info ts=2021-07-24T11:48:03.710Z caller=notifier.go:601 component=notifier msg="Stopping notification manager..."
14:48:03 prometheus-2: level=info ts=2021-07-24T11:48:03.710Z caller=main.go:885 msg="Notifier manager stopped"
14:48:03 prometheus-2: level=info ts=2021-07-24T11:48:03.711Z caller=main.go:897 msg="See you next time!"
14:48:04 Killing sidecar-2
14:48:04 Error response from daemon: Cannot kill container: e2e_example-sidecar-2: No such container: e2e_example-sidecar-2

14:48:04 Unable to kill service sidecar-2 : exit status 1
14:48:04 Killing prometheus-2
14:48:04 Error response from daemon: Cannot kill container: e2e_example-prometheus-2: No such container: e2e_example-prometheus-2

14:48:04 Unable to kill service prometheus-2 : exit status 1
14:48:04 Killing sidecar-1
14:48:04 Error response from daemon: Cannot kill container: e2e_example-sidecar-1: No such container: e2e_example-sidecar-1

14:48:04 Unable to kill service sidecar-1 : exit status 1
14:48:04 Killing prometheus-1
14:48:04 Error response from daemon: Cannot kill container: e2e_example-prometheus-1: No such container: e2e_example-prometheus-1

14:48:04 Unable to kill service prometheus-1 : exit status 1
2021/07/24 14:48:04 received signal interrupt
exit status 1
make: *** [Makefile:78: run-example] Interrupt
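One plausible explanation (not confirmed): Ctrl+C sends SIGINT to the whole foreground process group, so the docker CLI processes, and through them the containers, receive the interrupt at the same time as the test runner; by the time e2e tries to docker kill them, the containers are already gone, hence the "No such container" errors. A minimal sketch of one possible fix, assuming the environment starts containers via os/exec, is to put the child docker processes into their own process group so the terminal's signal is not delivered to them directly (Linux/macOS only; the helper name below is illustrative):

// Sketch only (imports: "os/exec", "syscall"): launch the docker CLI in a new process
// group, so a terminal Ctrl+C does not reach it directly and the framework stays in
// control of when containers are killed.
func startDetachedFromTerminal(args ...string) (*exec.Cmd, error) {
	cmd := exec.Command("docker", args...)
	cmd.SysProcAttr = &syscall.SysProcAttr{Setpgid: true} // start in a new process group
	if err := cmd.Start(); err != nil {
		return nil, err
	}
	return cmd, nil
}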

Remove `RunOnce`

Hm, so I created the RunOnce API, but I forgot that I had already managed to solve my use case without it in https://github.com/thanos-io/thanos/blob/main/test/e2e/compatibility_test.go#L62

It's as easy as creating a no-op container and doing execs...

// Start a no-op promql-compliance-tester container. See
// https://github.com/prometheus/compliance/tree/main/promql on how to build the local docker image.
compliance := e.Runnable("promql-compliance-tester").Init(e2e.StartOptions{
	Image:   "promql-compliance-tester:latest",
	Command: e2e.NewCommandWithoutEntrypoint("tail", "-f", "/dev/null"),
})
testutil.Ok(t, e2e.StartAndWaitReady(compliance))

// ...

// Later, run the tester as a one-off exec inside the already running container.
stdout, stderr, err := compliance.Exec(e2e.NewCommand("/promql-compliance-tester", "-config-file", filepath.Join(compliance.InternalDir(), "receive.yaml")))
t.Log(stdout, stderr)
testutil.Ok(t, err)

I think we should kill the RunOnce API to simplify things, and put the above into the examples? 🤔

cc @saswatamcode @philipgough @matej-g ?

BuildArgs should support repeating arguments

It is not uncommon for programs to support repeating arguments to provide multiple values, e.g. in the following format:
example -p "first argument" -p "second one" -p "third one"

It is currently not possible to use BuildArgs to build arguments in such a way, since it relies on a map[string]string, which does not allow repeated keys.
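For illustration only, a hypothetical helper (not part of the current API) that accepts multiple values per flag could look roughly like this:

// buildRepeatedArgs is a hypothetical sketch: every value of a flag emits the flag again.
func buildRepeatedArgs(opts map[string][]string) []string {
	var args []string
	for flag, values := range opts {
		for _, v := range values {
			args = append(args, flag, v)
		}
	}
	return args
}

// buildRepeatedArgs(map[string][]string{"-p": {"first argument", "second one", "third one"}})
// returns ["-p", "first argument", "-p", "second one", "-p", "third one"].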

Object does not exist error without pre pulling.

Sometimes I was getting weird "object does not exist" errors for new docker images. What always worked was:

  • Run e2e with the WithVerbose option.
  • Copy the docker run ... command for the problematic image.
  • Run it manually locally.

After that, all runs work 100% of the time.

Leaving this here as a known issue to debug (: I suspect some permissions issue on my machine? 🤔 Let's see if others can repro!
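If the root cause really is a missing local image, a workaround sketch (using os/exec inside the test; the image name below is only an example) is to pre-pull the image before constructing the environment:

// Sketch of a pre-pull step before e2e.NewDockerEnvironment(...); the image name is illustrative.
out, err := exec.Command("docker", "pull", "quay.io/prometheus/prometheus:v2.27.0").CombinedOutput()
if err != nil {
	t.Fatalf("pre-pull failed: %v: %s", err, out)
}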

Getting Dir & InternalDir mixed up - is there a better way?

Knowing when to use Dir and InternalDir is confusing, and getting them mixed up can lead to file permission issues when your containers start up.

For example, when trying to create a dir called test in the container:

if err := os.MkdirAll(filepath.Join(demo.InternalDir(), "test"), os.ModePerm); err != nil {
	return e2e.NewErrInstrumentedRunnable(name, errors.Wrap(err, "create test dir failed"))
}

leads to the following error when run:

   unexpected error: create logs dir failed: mkdir /shared: permission denied     

You receive that error message while the test is running and the containers have started up, so naturally you think the error is coming from within the container, when in actual fact it is failing because the test process can't create the /shared directory on your local machine.

Is there a better way of doing this, or of preventing this kind of confusing error message on the caller's side?
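For reference, a minimal sketch of the intended split, assuming the usual Runnable API: Dir() is the host-side path of the runnable's shared directory and should be used for os.* calls made by the test process, while InternalDir() is the same directory as mounted inside the container and should only appear in commands or configuration the container itself consumes (the --data-dir flag below is just an example):

// Create the directory on the host via Dir()...
if err := os.MkdirAll(filepath.Join(demo.Dir(), "test"), os.ModePerm); err != nil {
	return e2e.NewErrInstrumentedRunnable(name, errors.Wrap(err, "create test dir failed"))
}
// ...and use InternalDir() only for paths handed to the container, e.g. as a flag value
// that later goes into the runnable's start command.
args := e2e.BuildArgs(map[string]string{"--data-dir": filepath.Join(demo.InternalDir(), "test")})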

Matchers package cannot be used since it is internal

I would like to use the metrics option WithLabelMatchers; however, I am unable to construct the matcher, since the compiler complains about the package being internal.

Is this intentional for some reason or just an oversight?

Permissions of DockerEnvironment.SharedDir()

I had several hours of confusion and difficulty because, on my test machine, the Docker instances received a /shared directory (holding /shared/config etc.) with permissions rwxr-xr-x, but on a CircleCI machine running a PR the Docker instances saw permissions rwx------ for /shared.

(This affects test containers that don't run as root.)

It is unclear to me whether the problem is that I am using Docker on a Mac, that I am using Go 1.17, or that I have a different umask than the CircleCI machine. I tried setting my umask to 000 but was unable to get my builds to fail the same way as the CircleCI builds.
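A possible mitigation sketch (not a confirmed fix, since the root cause is still unclear): explicitly widen the permissions of the shared directory right after creating the environment, so the result no longer depends on the host's umask:

e, err := e2e.NewDockerEnvironment("example")
testutil.Ok(t, err)
// Make the shared directory world-readable/executable regardless of the host umask,
// so containers that do not run as root can still read the mounted configuration.
testutil.Ok(t, os.Chmod(e.SharedDir(), 0o777))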

monitoring: Add option to disable cadvisor

cAdvisor is important for getting container metrics. However, it requires various directories, which might differ between OSes, causing it to fail, for example when using WSL (google/cadvisor#2648 (comment)).

Without cAdvisor we can still get a lot of metrics from the runtimes running in containers (e.g. a Go app), so we could add an option to disable cAdvisor, unblocking users who can't get it running.
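To illustrate the shape of the request, a hypothetical functional option (not part of the current monitoring package) could look like:

// Hypothetical sketch of how such an option could be wired into the monitoring setup.
type opts struct {
	cadvisorDisabled bool
}

type Option func(*opts)

// WithoutCadvisor would skip starting the cadvisor container entirely, keeping only the
// metrics exposed by the instrumented runnables themselves (e.g. Go runtime metrics).
func WithoutCadvisor() Option {
	return func(o *opts) { o.cadvisorDisabled = true }
}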

Consider adding HTTPS readiness probe

On occasion, I use the framework to run services which listen only on an HTTPS port (so the HTTP probe won't work). In such cases I tend to do a simple command readiness check using curl --insecure ... https://<readiness-endpoint> or a similar command. However, this has overhead, since I 1) have to have a utility capable of probing available inside the container, and 2) need to craft my own command with arguments each time.

It could be beneficial to have an HTTPS readiness probe working on a similar principle (e.g. it could skip TLS verification, which should be fine for purely testing purposes).
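A sketch of what such a probe could check internally (written here as a standalone helper, not the framework's API): issue an HTTPS GET with certificate verification disabled and treat any 2xx as ready; the /-/ready path is only an example:

// httpsReady is a sketch only (imports: "crypto/tls", "fmt", "net/http", "time").
func httpsReady(hostPort string) error {
	client := &http.Client{
		Timeout:   time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://" + hostPort + "/-/ready")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 300 {
		return fmt.Errorf("readiness: unexpected status %d", resp.StatusCode)
	}
	return nil
}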

idea: Declarative K8s API as the API for docker env.

Just an idea, but it would be amazing to describe a service like e2e.Runnable or an instrumented e2e.Runnable in a declarative, mutable state, ideally something that speaks a common language like the K8s APIs, and then have the docker engine support an important subset of the K8s API for local use. There would be a few benefits to this:

  • We would be able to better compose adjustments (e.g. of flags) for different tests, like Jsonnet allows (though this potentially adds a huge cognitive load!). The current approach has issues similar to the initial https://github.com/bwplotka/mimic deployment at Improbable: the input for adjusting services is getting out of control (see the ruler or querier helpers in e.g. thanos-io/thanos#5348).
  • We could REUSE some Infrastructure-as-Go code (e.g. https://github.com/bwplotka/mimic) for production, staging, testing, etc. K8s clusters AS WELL AS for local, simplified e2e docker environments!

Minio is not ready even after `StartAndWaitReady` completes

Issue description

Trying to start Minio on the latest version of main, the server is not ready to handle requests, despite StartAndWaitReady having already completed successfully. Any immediate requests afterwards result in the error response Server not initialized, please try again.

I suspect this could be an issue with the readiness probe upstream, since when setting up the same scenario with the code version from before the Minio image update in #4, everything works correctly. However, I haven't confirmed the exact cause yet.

Minimal setup to reproduce

Run this test:

import (
	"context"
	"io/ioutil"
	"testing"

	"github.com/efficientgo/e2e"
	e2edb "github.com/efficientgo/e2e/db"
	"github.com/efficientgo/tools/core/pkg/testutil"
	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func TestMinio(t *testing.T) {
	e, err := e2e.NewDockerEnvironment("minio_test", e2e.WithVerbose())
	testutil.Ok(t, err)
	t.Cleanup(e.Close)

	const bucket = "minoiotest"
	m := e2edb.NewMinio(e, "minio", bucket)
	testutil.Ok(t, e2e.StartAndWaitReady(m))

	mc, err := minio.New(m.Endpoint("http"), &minio.Options{
		Creds: credentials.NewStaticV4(e2edb.MinioAccessKey, e2edb.MinioSecretKey, ""),
	})
	testutil.Ok(t, err)
	testutil.Ok(t, ioutil.WriteFile("test.txt", []byte("just a test"), 0755))

	_, err = mc.FPutObject(context.Background(), bucket, "obj", "./test.txt", minio.PutObjectOptions{})
	testutil.Ok(t, err)
}
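Until the probe is fixed, one possible client-side workaround (sketch only, inserted before the FPutObject call; it additionally needs the "time" import) is to poll the server until it accepts requests:

	// Poll until the server actually accepts requests (or the deadline passes).
	deadline := time.Now().Add(30 * time.Second)
	for {
		if _, err = mc.BucketExists(context.Background(), bucket); err == nil || time.Now().After(deadline) {
			break
		}
		time.Sleep(500 * time.Millisecond)
	}
	testutil.Ok(t, err)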

Dependency on tools.git/core is for a detached commit; this breaks builds

https://github.com/efficientgo/e2e/blob/main/go.mod#L5-L6 says:

require (
	github.com/efficientgo/tools/core v0.0.0-20210129205121-421d0828c9a6

efficientgo/tools@421d0828c9a6 is a commit that does not belong to any branch on that repository (and may belong to a fork outside of it), and it seems I'm unable to build https://github.com/observatorium/obsctl because of this:

go: github.com/efficientgo/[email protected] requires
        github.com/efficientgo/tools/core@v0.0.0-20210129205121-421d0828c9a6: invalid version: unknown revision 421d0828c9a6
make: *** [Makefile:54: deps] Błąd 1

Ref: thanos-io/thanos#4806
