kubernetes-sigs / e2e-framework
A Go framework for end-to-end testing of components running in Kubernetes clusters.
License: Apache License 2.0
Opening this issue with reference to the discussion from #73 (comment)
In order to ease control of the logging format and level across the framework, we need a way to configure logging at the framework level that each component of the framework can inherit. It would be great if the verbosity could be controlled while running tests, the way klog does with the --v n flag.
While creating example tests, I came to the realization that table-driven tests would be useful with the framework. While the framework provides a nice, structured way of encapsulating assessments, it turns out that a table-driven version would be more useful when testing multiple cases of the same functionality.
As a test writer, I should be able to represent my assessments as a series of tests that are driven using Go's table-driven test convention.
The framework should provide a type that allows test writers to capture assessments in a repeatable structure as follows:
type Table []struct {
    Name        string
    Description string
    Labels      Labels
    Assessment  Func
}
Type Table should also include a method that automatically gathers the assessment data and constructs the proper e2e feature tests, as shown below:
env.Test(t, table.Features()...)
func TestMain(m *testing.M) {
    test.Setup(func(ctx context.Context, config *envconf.Config) (context.Context, error) {
        rnd := rand.New(rand.NewSource(time.Now().UnixNano()))
        return context.WithValue(context.WithValue(ctx, "limit", rnd.Int31n(255)), "randsrc", rnd), nil
    })
    os.Exit(test.Run(m))
}
func TestTableDriven(t *testing.T) {
    tests := features.Table{
        {
            Name: "less than equal 64",
            Assessment: func(ctx context.Context, t *testing.T, config *envconf.Config) context.Context {
                rnd := ctx.Value("randsrc").(*rand.Rand) // in a real test, check the type assertion
                lim := ctx.Value("limit").(int32)        // check type assertion
                if rnd.Int31n(int32(lim)) > 64 {
                    t.Error("limit should be less than 64")
                }
                return ctx
            },
        },
        {
            Name: "more than equal 128",
            Assessment: func(ctx context.Context, t *testing.T, config *envconf.Config) context.Context {
                rnd := ctx.Value("randsrc").(*rand.Rand) // in a real test, check the type assertion
                lim := ctx.Value("limit").(int32)        // check type assertion
                if rnd.Int31n(int32(lim)) > 128 {
                    t.Error("limit should be less than 128")
                }
                return ctx
            },
        },
    }
    test.Test(t, tests.Features()...)
}
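The proposed Features() method can be sketched in a few self-contained lines. In the sketch below, Func and Feature are simplified stand-ins for the framework's real types (assumptions for illustration only, not the actual API):

```go
package main

// Func is a simplified stand-in for the framework's assessment function type.
type Func func() error

// Feature is a simplified stand-in for the framework's feature type.
type Feature struct {
    Name       string
    Assessment Func
}

// Table captures assessments in a repeatable, table-driven structure.
type Table []struct {
    Name        string
    Description string
    Assessment  Func
}

// Features expands each table entry into a Feature so that a test can
// hand the whole set to the environment in one call.
func (tbl Table) Features() []Feature {
    feats := make([]Feature, 0, len(tbl))
    for _, entry := range tbl {
        feats = append(feats, Feature{Name: entry.Name, Assessment: entry.Assessment})
    }
    return feats
}
```

The real implementation would map each entry through the existing feature builder, but the shape of the conversion is the same.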
This project is in the start phase, so it is a good time to rename the branch from master to main right away 😄
I can do that, but I don't have the admin permission to do so.
@spiffxp would you mind doing this? I think you have the right permissions.
/kind cleanup
The documentation for the klient design does not show code syntax highlighting in GitHub Markdown.
FeatureInfo was added so that we could use the feature info in the before/after hooks.
This was all well and good, but the examples happened not to exercise the before/after feature hooks and only covered the before/after tests at an integration level.
As a result, we didn't realize that the FeatureInfo type was placed under the internal package. This means that when someone tries to actually use these BeforeEachFeature hooks, the type can only be referenced as an interface{}, since it isn't exported (it lives in the internal package).
However, moving it into the intuitive pkg/feature location causes an import cycle.
Lastly, while working with this I wanted the After hook to know about the result of the test: did it run, pass, or get skipped? If this information could be added to the FeatureInfo object, that would be wonderful.
In the Before and Setup test steps I tried to return a new context with a value based on the parent context. For example:
testenv.Setup(func(ctx context.Context) (context.Context, error) {
    metadata := getMetadata(..)
    ctx = context.WithValue(ctx, "some-metadata", metadata)
    return ctx, nil
})
When I then try to access the context value from the Assess step, the value is not available. I was wondering if context values are expected to be propagated throughout the various test steps.
An error occurs in a goroutine while using wait.For(conditions.New(client.Resources()).ResourceMatch(&resultDeployment, func(object k8s.Object) bool {...})).
Also, the Teardown in the feature step and Finish in the main step didn't execute when this error happened.
The function I tried to run:
func TestDeployment(t *testing.T) {
    deploymentFeat := features.New("Test").
        Setup(func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {
            deployment := newDeployment()
            // <Logic>
            return ctx
        }).
        Assess("Pods successfully deployed", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {
            client, err := cfg.NewClient()
            if err != nil {
                t.Error("failed to create new client", err)
            }
            resultDeployment := appsv1.Deployment{
                ObjectMeta: metav1.ObjectMeta{Name: "deployment-test", Namespace: cfg.Namespace()},
            }
            if err = wait.For(conditions.New(client.Resources()).DeploymentConditionMatch(&resultDeployment, appsv1.DeploymentAvailable, corev1.ConditionTrue),
                wait.WithTimeout(time.Minute*2)); err != nil {
                t.Error("deployment not found", err)
            }
            if err := wait.For(conditions.New(client.Resources()).ResourceMatch(&resultDeployment, func(object k8s.Object) bool {
                // <Logic>
                return true
            }), wait.WithTimeout(time.Minute*4)); err != nil {
                t.Error("error", err)
            }
            return context.WithValue(ctx, "deployment-test", &resultDeployment)
        }).
        Teardown(func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {
            client, err := cfg.NewClient()
            if err != nil {
                t.Error("failed to create new client", err)
            }
            dep := ctx.Value("deployment-test").(*appsv1.Deployment)
            if err := client.Resources().Delete(ctx, dep); err != nil {
                t.Error("failed to delete deployment", err)
            }
            return ctx
        }).Feature()
    testenv.Test(t, deploymentFeat)
}
Error: Uploaded here
Type: Bug
Version: v0.5
KinD version: kind v0.11.1 go1.16.4 darwin/amd64
As per the email sent to kubernetes-dev[1], please create a SECURITY_CONTACTS
file.
The template for the file can be found in the kubernetes-template repository[2].
A description for the file is in the steering-committee docs[3], you might need
to search that page for "Security Contacts".
Please feel free to ping me on the PR when you make it, otherwise I will see when
you close this issue. :)
Thanks so much, let me know if you have any questions.
(This issue was generated from a tool, apologies for any weirdness.)
[1] https://groups.google.com/forum/#!topic/kubernetes-dev/codeiIoQ6QE
[2] https://github.com/kubernetes/kubernetes-template-project/blob/master/SECURITY_CONTACTS
[3] https://github.com/kubernetes/community/blob/master/committee-steering/governance/sig-governance-template-short.md
The framework is designed to work with or without flags. As such, flags are not parsed early, to avoid forcing test writers to deal with unnecessary CLI flag parsing. However, it seems that flags are broken when a test binary is generated:
go test -c -o test.test .
When the binary is executed, it produces an error:
./test.test --kubeconfig /Users/vivienv/.kube/config
no configuration has been provided, try setting KUBERNETES_MASTER environment variable
The original design doc that started this project is still a Google doc. Since the design is evolving fast, the doc should be moved into this repository to keep pace with the changes.
When doing a simple API call example I had to do the following:
c, err := envconf.NewWithKubeconfig("")
if err != nil {
    t.Fatalf("Failed to get in-cluster config: %v", err)
}
e := env.NewWithConfig(c)
I would have expected e := env.New() to use a default in-cluster config or something, but I got a panic.
If I did something wrong let me know; otherwise, I think a tweak should be made to avoid those four extra lines of in-cluster-config boilerplate.
I would like to skip a set of feature tests based on the labels they have.
Currently the framework supports a --labels flag which allows running a subset of tests by label. A similar flag, like --skip-labels, might be the right approach.
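The filtering behind such a flag can be sketched independently of the framework. The Labels type and helper names below are illustrative, not the framework's actual API:

```go
package main

// Labels is a simple key=value label set, as used with the --labels flag.
type Labels map[string]string

// matchesAny reports whether labels satisfy at least one selector pair.
func matchesAny(labels, selectors Labels) bool {
    for k, v := range selectors {
        if labels[k] == v {
            return true
        }
    }
    return false
}

// filterSkipped drops label sets that match the skip selectors, sketching
// what a --skip-labels flag could do to the list of features before running.
func filterSkipped(all []Labels, skip Labels) []Labels {
    kept := make([]Labels, 0, len(all))
    for _, l := range all {
        if !matchesAny(l, skip) {
            kept = append(kept, l)
        }
    }
    return kept
}
```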
Currently all tests are executed serially. This issue is to request that the framework also support parallel test execution.
This originated from #17
A feature is considered to be a unit of testable logic in the code. As such, the step functions that make up a feature should always be executed serially as a unit. This guarantees predictable execution flow and predictable context propagation.
func TestFunc(t *testing.T) {
    f := feature.New("my feature").Assess("check1", ...).Assess("check2", ...).Feature()
    env.Test(f)
}
In the previous snippet, assessments "check1" and "check2" will be executed serially as part of the feature.
A test function with multiple features, however, should be able to be exercised concurrently as shown below:
f0 := feature.New("my feature").Assess("check1", ...).Assess("check2", ...).Feature()
f1 := feature.New("my feature").Assess("check3", ...).Assess("check4", ...).Feature()
f2 := feature.New("my feature").Assess("check5", ...).Assess("check6", ...).Feature()
env.TestInParallel(f0, f1, f2)
Features f0, f1, f2 should be executed concurrently.
Note: env.Test(f) and env.TestInParallel(f) should be equivalent.
It is convenient to be able to force tests to be executed concurrently. The code should support the ability to execute all tests in a package concurrently using an environment configuration.
This can be done programmatically as shown below:
func TestMain(m *testing.M) {
    env := env.NewWithConfig(envconfig.New().WithParallelTestEnabled())
}
Or it can be driven by the --parallel flag, by creating the environment configuration from CLI flags as shown:
func TestMain(m *testing.M) {
    env := env.NewWithConfig(envconfig.NewFromFlags())
}
Then, tests are executed as
go test ./package -args --parallel
When configured for parallel testing, env.Test(f0, f1, f2) and env.TestInParallel(f0, f1, f2) are equivalent.
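The semantics of running features concurrently can be sketched with plain goroutines; the feature type below is a stand-in for the framework's feature execution, not its real API:

```go
package main

import "sync"

// feature stands in for a feature's executable body.
type feature func()

// testInParallel launches every feature in its own goroutine and waits
// for all of them to finish, mirroring the proposed env.TestInParallel
// semantics: features run concurrently, but the call returns only when
// every feature has completed.
func testInParallel(feats ...feature) {
    var wg sync.WaitGroup
    for _, f := range feats {
        wg.Add(1)
        go func(f feature) {
            defer wg.Done()
            f()
        }(f)
    }
    wg.Wait()
}
```

Inside each feature, the individual assessment steps would still run serially, as described above.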
As a test writer of components running on Kubernetes, it would be extremely useful to have the capability to wait for one or more cluster conditions before proceeding in the test.
A new package klient/wait could be the starting point for an API to express wait conditions, leveraging the wait package in apimachinery (k8s.io/apimachinery/pkg/util/wait):
package wait

func For(cond func() (bool, error)) {...}
So, a test writer may express a condition as follows:
func TestSomething(t *testing.T) {
    wait.For(t, func() (bool, error) {
        var ns coreV1.Namespace
        cfg.client.List(ns)
        ...
    })
}
The framework could have a collection of pre-defined conditions that can be used by test writers:
func TestSomething(t *testing.T) {
    var pod coreV1.Pod
    wait.For(PodReadyCondition(pod))
}
As a test writer, I should be able to run kubectl commands, i.e. apply, exec, log, ...
The framework should provide a way to run YAML files to deploy different k8s types (deployment, etc.):
KubectlApply(cfg.KubeconfigFile(), "namespace", "-f", "./filepath.yaml")
Supply a dry-run flag so we can check which tests would be run given the current flags.
For example, AfterFeature currently provides a hook that knows about the env but not the feature itself.
In hooks like this in the k/k framework, this is where we would tie in for things like custom logging solutions and even the Sonobuoy progress updates.
I think the signatures should be modified to make more info available to the caller. Even in BeforeXXX it would be reasonable to want to do something that involves knowing the name of the test/feature or some assertion it is going to make.
Currently, the env.Environment method that registers pre-test callbacks is named BeforeTest. That name could be more descriptive about how often the operations registered with the method are executed during test execution.
By changing the name to BeforeEachTest, test writers can clearly deduce when the callback runs during framework execution, as shown in the following steps:
- env.Environment.Setup
- TestFunction
- env.Environment.BeforeEachTest
- env.Test(feature)
- env.Environment.AfterEachTest
- env.Environment.Finish
Talked about in TGIK 170; each test often needs its own namespace, but it's not completely trivial where/how to define it so that each test gets a unique namespace and the namespaces are cleaned up appropriately.
I noticed that the version of controller-runtime used is v0.9.0. Is it possible to bump this to a later version of controller-runtime? It seems that v0.9.0 of controller-runtime uses an older version of spf13/cobra which has a few security vulnerabilities (which aren't actually ever executed).
Happy to open a PR! Thanks much!
We are using E2E 0.0.4 on https://github.com/K8sbykeshed/k8s-service-lb-validator
I'm using go test ./... -args --skip-labels="type=cluster_ip" to skip tests with this label, but I'm receiving the following error:
{"level":"info","ts":1637161494.262519,"caller":"matrix/manager.go:227","msg":"Server is ready","case":"81->80,TCP"}
flag provided but not defined: -skip-labels
Usage of /tmp/go-build1250012764/b001/k8s-service-lb-validator.test:
-test.bench regexp
run only benchmarks matching regexp
-test.benchmem
print memory allocations for benchmarks
-test.benchtime d
run each benchmark for duration d (default 1s)
-test.blockprofile file
write a goroutine blocking profile to file
-test.blockprofilerate rate
set blocking profile rate (see runtime.SetBlockProfileRate) (default 1)
-test.count n
run tests and benchmarks n times (default 1)
-test.coverprofile file
write a coverage profile to file
-test.cpu list
Wondering if there is some incorrect bootstrapping in our codebase.
After #36, it would be nice to introduce pre- and post-operation hooks for feature tests, named BeforeEachFeature and AfterEachFeature. These lifecycle hooks would be executed in the order shown:
- env.Environment.Setup
- <TestFunction>
- env.Environment.BeforeEachTest
- env.Test(env.Environment.BeforeEachFeature <feature> env.Environment.AfterEachFeature)
- env.Environment.AfterEachTest
- env.Environment.Finish
As a test writer, I should be able to deploy helm charts.
The framework should provide a type that allows test writers to install helm charts:
type Helm []struct {
    Name        string
    Namespace   string
    ReleaseName string
    Version     string
}
Type Helm should also include a method to install the helm chart:
func TestMain(m *testing.M) {
    testenv = env.New()
    kindClusterName := envconf.RandomName("kind-with-config", 16)
    namespace := envconf.RandomName("kind-ns", 16)
    testenv.Setup(
        envfuncs.CreateKindClusterWithConfig(kindClusterName, "kindest/node:v1.22.2", "kind-config.yaml"),
        envfuncs.CreateNamespace(namespace),
    )
    testenv.Finish(
        envfuncs.DeleteNamespace(namespace),
        envfuncs.DestroyKindCluster(kindClusterName),
    )
    os.Exit(testenv.Run(m))
}
func TestHelmChart(t *testing.T) {
    helmInfo := Helm{{
        Name:        "nginx",
        Namespace:   "default",
        ReleaseName: "nginx-stable/nginx-ingress",
        Version:     "latest",
    }}
    tests := features.New("Setup Helm Chart").
        SetupFromHelm(helmInfo).
        Feature()
    test.Test(t, tests)
}
In order to debug failed tests, sometimes there is a need to skip running the finish functions.
For example, a cluster is created in the setup function and deleted in the finish function.
If any of the tests fail and the run logs are not sufficient to root-cause the issue, we may need to skip the finish functions so that one can log in to the live problematic cluster and troubleshoot.
Let me know if any further details are required.
The examples in the repository use only one package (main) to contain main_test and the Go files containing the tests.
What would be the suggested approach for using multiple packages?
from
suites
│ featureset_test.go
│ filter_test.go
│ hello_test.go
│ main_test.go
to
suites
│ main_test.go
│ hello_test.go
│
└───somepackage
│ │ featureset_test.go
│ │ ...
│
└───other
│ │ filter_test.go
│ │ ...
In godog this is explicit, using InitializeScenario
The JSON input encoding/decoding implementation needs a rethink in order to support parsing any kind of k8s object structure.
The upstream Kubernetes e2e tests follow the pattern of using a dedicated namespace per test case. This helps with resource cleanup and isolation. It would be great if the e2e framework provided this mechanism automatically or via opt-in. Perhaps a single testenv, or a feature of a given testenv, could provide an isolated namespace with a generated name.
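The name-generation half of per-test namespaces can be sketched in plain Go; the helper name below is illustrative (the framework's envconf.RandomName, used elsewhere in this document, plays a similar role):

```go
package main

import (
    "fmt"
    "math/rand"
)

const nameAlphabet = "abcdefghijklmnopqrstuvwxyz0123456789"

// randomNamespace returns a DNS-1123-friendly namespace name with a random
// suffix, so each test case can get its own isolated, uniquely named namespace.
func randomNamespace(prefix string, suffixLen int) string {
    b := make([]byte, suffixLen)
    for i := range b {
        b[i] = nameAlphabet[rand.Intn(len(nameAlphabet))]
    }
    return fmt.Sprintf("%s-%s", prefix, string(b))
}
```

The other half is cleanup: the environment would create the namespace in a per-test setup hook and delete it in the matching teardown, even when the test fails.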
The feature builder exposes a nice set of APIs for building setup and teardown methods, but they auto-generate the name of the step being run:
fmt.Sprintf("%s-setup", b.feat.name)
However, this can be a bit of a problem when debugging. There is a really useful builder method called WithStep that lets you customize the name of the step, but its level argument comes from pkg/internal/types/level.go:
// WithStep adds a new step that will be applied prior to feature test.
func (b *FeatureBuilder) WithStep(name string, level Level, fn Func) *FeatureBuilder {
    b.feat.steps = append(b.feat.steps, newStep(name, level, fn))
    return b
}
This means we can't really use or invoke it from anywhere outside the module, so the examples can't use it either, nor can anything that imports e2e-framework as a dependency. Does it make sense to move some bits from pkg/internal/types to a reusable package?
Correct misspelling of function name in config_test.go file under package conf.
The default test logs are not very helpful: they print every test, and once you add your own logs it gets very hard to understand what's going on. For example:
--- PASS: TestClusterIP (4.14s)
--- PASS: TestClusterIP/Cluster_IP (0.11s)
--- PASS: TestClusterIP/Cluster_IP/the_cluster_ip_should_be_reachable. (0.11s)
=== RUN TestNodePort
=== RUN TestNodePort/Node_Port
=== RUN TestNodePort/Node_Port/the_host_should_reachable_on_node_port
--- PASS: TestNodePort (0.00s)
--- PASS: TestNodePort/Node_Port (0.00s)
--- PASS: TestNodePort/Node_Port/the_host_should_reachable_on_node_port (0.00s)
=== RUN TestExternalService
=== RUN TestExternalService/External_Service
=== RUN TestExternalService/External_Service/the_external_DNS_should_be_reachable_via_local_service
--- PASS: TestExternalService (0.00s)
--- PASS: TestExternalService/External_Service (0.00s)
--- PASS: TestExternalService/External_Service/the_external_DNS_should_be_reachable_via_local_service (0.00s)
Ginkgo, on the other hand, prints a summary of the tests (which tests ran, failed, or passed), so there is no need to check the console for the process exit status or grep around the log. An example from a k/k e2e run:
I0522 22:23:31.910] Ran 435 of 5765 Specs in 1549.428 seconds
I0522 22:23:31.910] SUCCESS! -- 435 Passed | 0 Failed | 0 Pending | 5330 Skipped
I0522 22:23:31.920] Ginkgo ran 1 suite in 25m55.922452206s
I0522 22:23:31.920] Test Suite Passed
I0522 22:23:31.929] Checking for custom logdump instances, if any
Maybe this could be added to the configuration of the env? The testEnv struct could have another flag like summary bool.
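A minimal sketch of what such a summary could look like, assuming the environment records one outcome per feature. All names below are hypothetical, not part of the framework:

```go
package main

import "fmt"

// summary tallies feature outcomes so a single closing line can be
// printed after the run, in the spirit of Ginkgo's suite summary.
type summary struct {
    passed, failed, skipped int
}

// record tallies one feature outcome; skipped takes precedence over failed.
func (s *summary) record(failed, skipped bool) {
    switch {
    case skipped:
        s.skipped++
    case failed:
        s.failed++
    default:
        s.passed++
    }
}

// String renders the closing summary line.
func (s *summary) String() string {
    total := s.passed + s.failed + s.skipped
    return fmt.Sprintf("Ran %d features: %d passed, %d failed, %d skipped",
        total, s.passed, s.failed, s.skipped)
}
```

Gated behind a summary bool on the env configuration, the line would be printed once after Finish.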
Add new functionality to be able to create a KinD cluster using an image and a config file.
Now that the klient package has landed, the way an environment is created should be revisited. The environment type should use a klient value to create and keep track of the internal *rest.Client value.
The k/k framework uses Ginkgo, which has a parallel mode (-p) for tests that were not explicitly marked as serial; this allows running the whole suite in parallel with N workers.
While tests in this framework can use t.Parallel() to opt in to parallel execution, perhaps it would be possible to override that and run all tests in parallel unless certain tests are serial-only.
The k/k framework has the [Serial] tag for that, and users of the parallel mode can skip those tests with -skip.
K8s object watchers are great functionality provided by Kubernetes to get efficient change notifications on resources.
The events supported by these watchers are:
The idea here is to make developer implementation easier. Without knowing the core resource type of k8s objects, developers just register their actions/functions for the respective watch events using the mechanism provided by this framework; to stay informed about when these events get triggered, they just use Watch(), which resides in the klient/k8s/resources package.
The Watch function accepts an ObjectList object as an argument. The ObjectList type is used to inject the resource type to which the watch applies.
klient/k8s/resources/resources.go
import (
    "sigs.k8s.io/controller-runtime/pkg/client"
    "k8s.io/apimachinery/pkg/watch"
)

func (r *Resources) Watch(ctx context.Context, object client.ObjectList, opts client.ListOptions) (watch.Interface, error) {
    cl, err := client.NewWithWatch(cfg, client.Options{})
    if err != nil {
        log.Println("error while creating a watcher client", err)
        return nil, err
    }
    watcher, err := cl.Watch(ctx, object, &opts)
    if err != nil {
        log.Println("error while starting the watch", err)
        return nil, err
    }
    return watcher, nil
}
Watch() in resources.go returns the watcher value, which can then be used to call InvokeEventHandler(). InvokeEventHandler accepts an EventHandlerFuncs value, which carries the user-registered function set.
file: klient/k8s/resources/watch.go
// InvokeEventHandler triggers the registered methods based on the events received for particular k8s resources.
func InvokeEventHandler(watcher watch.Interface, f EventHandlerFuncs) {
    for event := range watcher.ResultChan() {
        switch event.Type {
        case watch.Added:
            f.Add(event.Object)
        case watch.Modified:
            f.Update(event.Object)
        case watch.Deleted:
            f.Delete(event.Object)
        }
    }
}
type EventHandlerFuncs struct {
    AddFunc    func(obj interface{})
    UpdateFunc func(obj interface{})
    DeleteFunc func(obj interface{})
}

func (e EventHandlerFuncs) Add(obj interface{}) {
    ...
}

func (e EventHandlerFuncs) Update(newObj interface{}) {
    ...
}

func (e EventHandlerFuncs) Delete(obj interface{}) {
    ...
}
Currently, when Environment.Test(...) is called from within a test function as shown below:
func TestSomething(t *testing.T) {
    feature.New().Assess(t, ...)
    env.Test(t, ...)
}
only one instance of t is passed around from feature to feature. This can cause issues such as early termination of feature tests. A better approach is to create a new *testing.T for each feature; that way, if a feature fails, the execution of the remaining features continues (within the same test function).
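The isolation a fresh *testing.T per feature would buy (in practice, via t.Run sub-tests) can be illustrated with a self-contained sketch, where a panic stands in for a fatal assertion. All names here are illustrative:

```go
package main

// namedFeature pairs a feature name with its executable body.
type namedFeature struct {
    name string
    fn   func()
}

// result records whether a feature's execution failed.
type result struct {
    name   string
    failed bool
}

// runIsolated executes each feature in its own recovered scope: a failure
// (a panic here; a fatal assertion in real tests) stops only that feature,
// which is the isolation a fresh *testing.T per feature would provide.
func runIsolated(feats []namedFeature) []result {
    results := make([]result, 0, len(feats))
    for _, f := range feats {
        failed := false
        func() {
            defer func() {
                if recover() != nil {
                    failed = true
                }
            }()
            f.fn()
        }()
        results = append(results, result{name: f.name, failed: failed})
    }
    return results
}
```

With a single shared t, a fatal failure in the first feature would abort the whole loop; with per-feature isolation, every feature still produces a result.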
Let's add the Label functionality into the examples; I think they aren't being used in the filters properly right now...
https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/kubernetes-sigs_e2e-framework/48/pull-e2e-framework-test/1448290471733366784 is an example where, even though the tests were passing, kind was being installed multiple times. Something must be wrong with how kind is installed and checked for.
It would be cool if we could do pytest-style xfails, or ExpectFails of some sort, i.e.
Add golangci-lint rules and a prow job to run them.
Proper documentation is needed to walk users/adopters through:
Opening this item to track the comment suggested by @vladimirvivien under #73 (comment).
The sigs.k8s.io/e2e-framework/klient/wait package needs an example under the examples folder, and a README should be added to explain what the tests are doing.
When type Environment was introduced, package klient did not exist. Now that we have the helper package, the Environment type should be updated to use it directly instead.
The example here reveals the multiple steps necessary to create a klient.Client from a kubeconfig file and then inject that client into the environment's configuration:
e2e-framework/examples/k8s/main_test.go, line 52 at 0402e81
It would be nice if the envconf.Config type provided a method to handle the injection of the client in one step, similar to:
func TestMain(m *testing.M) {
    testenv.Setup(
        func(ctx context.Context, cfg *envconf.Config) (context.Context, error) {
            cfg.WithKubeconfigFile(kubeconf_file_path) // this would create a new klient.Client, then inject it into the cfg
        },
    )
    ...
}
Just one cluster has been created, but when listing clusters there are numerous others already there. These must have come from other runs.
May be related to #66 if it is an issue with how the hosts are being used.
We should add more examples showing:
k8s.io/kubernetes/test/e2e/framework
I'm looking for an e2e framework so I can avoid importing k8s.io/kubernetes in my project.
BTW: what is the status of this repo? Are there any future plans?
Currently, method env.Environment.Test can only test one feature at a time:
func TestFunction(_ *testing.T) {
    testenv := env.New()
    f := feature.New(...).Assess(...)
    testenv.Test(f.Feature())
}
As a test writer, I would like env.Environment.Test to be able to test a feature set consisting of one or more features, as shown:
func TestFunction(_ *testing.T) {
    testenv := env.New()
    f0 := feature.New(...).Assess(...)
    f1 := feature.New(...).Assess(...).Assess(...)
    f2 := feature.New(...).Assess(...).Teardown(...)
    testenv.Test(f0.Feature(), f1.Feature(), f2.Feature())
}
just for people to borrow
@perithompson would like to dump YAML from table tests, such as those in the https://github.com/K8sbykeshed/k8s-service-lb-validator framework. Could we do something like the following?
Taking a struct such as this, it would be nice to put it directly into the test output as a readable/exported entity:
// Reachability packages the data for a cluster-wide connectivity probe
type Reachability struct {
    Expected []string
    Observed []string
    Pods     []*Pod
}
... pseudocode made with @vladimirvivien @jackielii
type MyYAML struct{}

// making a new programming language up again for Jaice to parse
func (y *Writer) MyYAML(tableOutput interface{}) string {
    var s string
    for _, result := range tableOutput.(*MyTable) {
        s += result + "\n"
    }
    return s
}
func myTest() {
    e := env.NewWithConfig(envconf.New())
    feat := features.New("Hello Feature").
        WithLabel("type", "simple").
        Assess("test message", func(ctx context.Context, t *testing.T, _ *envconf.Config) context.Context {
            result := Hello("foo")
            if result != "Hello foo" {
                t.Error("unexpected message")
            }
            return ctx
        })
    e.WithOutputWriter(Test(t, feat.Feature()), MyYAML())
}