itscontained / secret-manager
External secret management for Kubernetes.
License: Apache License 2.0
The README has too much detail; the examples should move to the docs, with more detail for the supported SecretStore backends.
The `template` field is currently available for this. The fields in `template` are merged into the generated secret. This functionality has not been added to the ExternalSecrets controller.
https://github.com/itscontained/secret-manager/blob/master/pkg/internal/store/base/store.go#L53: any error from setting up a Vault, AWS, or other SecretStore client is overwritten with a generic error.
`klog` flags and other added flags seem to be overwritten by secret-manager flags.
Example:
13:12 $ docker run itscontained/secret-manager:0.2.0 --help
secret-manager is a Kubernetes addon to automate the management and issuance of
secret sources from various external secret systems.
Usage:
secret-manager-controller [flags]
Flags:
--health-port int The port number to listen on for health connections. (default 8400)
-h, --help
Note the absence of the -v/--v klog flags.
When using the following SecretStore manifest, I get a 404 in the logs because /data is appended.
apiVersion: secret-manager.itscontained.io/v1alpha1
kind: SecretStore
metadata:
  name: vault
  namespace: example-ns
spec:
  vault:
    server: "https://xxx:8200"
    path: testkv
    version: v1
...
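For context, the /data path segment exists only in the KV v2 HTTP API. A minimal sketch of version-aware path construction (the function name and layout are hypothetical, not the controller's actual code):

```go
package main

import "fmt"

// secretPath only inserts the /data segment for KV v2 stores; for
// version: v1 the secret is addressed directly under the mount.
func secretPath(mount, version, name string) string {
	if version == "v2" {
		return mount + "/data/" + name
	}
	return mount + "/" + name
}

func main() {
	fmt.Println(secretPath("testkv", "v1", "my-secret")) // testkv/my-secret
	fmt.Println(secretPath("testkv", "v2", "my-secret")) // testkv/data/my-secret
}
```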
Between the PR template and the standard Changelog format, we should be able to maintain most of the changelog structure through automation rather than manually.
Keeping legacy CRDs adds support overhead (like #43). These legacy CRDs also do not support resource conversion, so only the storage version of each CRD can be used.
My suggestion would be to maintain best-effort support until Kubernetes v1.21 is released upstream (~March 2021), then deprecate these and move the required kubeVersion in the Chart.yaml from v1.11 to v1.16.
helm upgrade -i secret-manager -f values.yaml itscontained/secret-manager
Release "secret-manager" does not exist. Installing it now.
Error: unable to build kubernetes objects from release manifest: unable to recognize
"": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1"
values.yaml:
installCRDs: true
K8s version:
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:58:53Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.8", GitCommit:"a89f8c11a5f4f132503edbc4918c98518fd504e3", GitTreeState:"clean", BuildDate:"2019-04-23T04:41:47Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
Any ideas? Is there a minimum k8s version required?
I have the workflow working properly now, but the chart (and soon the docker container) need to have their app version dictated by the git tag. This will eliminate manual updating of in-code versioning.
Successful Workflow: https://github.com/itscontained/secret-manager/actions/runs/221652504
For the chart, I suggest setting appVersion to 0.0.0, and we can do a sed replace in the workflow.
For the application, ldflags will let us inject
AppGitState = ""
AppGitCommit = ""
AppVersion = "canary"
via
-X github.com/itscontained/secret-manager/pkg/util.AppVersion={.Version}
In addition, those 3 variables, if only referenced for version info, do not need to be exported.
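A minimal sketch of the ldflags approach (package and variable names are illustrative; the real variables would live in pkg/util, and `-X` works with unexported package-level string variables too):

```go
package main

import "fmt"

// Version variables, unexported as suggested above. Without -X flags they
// keep these defaults; a release build overrides them with e.g.:
//   go build -ldflags "-X main.appVersion=v0.3.0 -X main.appGitCommit=abc1234"
var (
	appGitState  = ""
	appGitCommit = ""
	appVersion   = "canary"
)

// versionString assembles the human-readable version line from the
// injected (or default) values.
func versionString() string {
	s := appVersion
	if appGitCommit != "" {
		s += " (" + appGitCommit + ")"
	}
	if appGitState != "" {
		s += " " + appGitState
	}
	return s
}

func main() {
	fmt.Println(versionString()) // canary (for a non-release build)
}
```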
The current helm chart is just a stub from helm create. The chart should be more fully implemented and working.
The dataFrom block is similar to data, but does not take a secretKey field to identify where to embed the secret data; instead the secret data is expected to return a map of data, which is embedded in the final generated secret object.
Currently this takes just a single dataFrom element, but there is a use case for taking a list of secret paths and merging their maps. A deterministic method of merging needs to be decided. This may also allow use of both data and dataFrom within the same secret if the same deterministic map-merging method is used.
For dataFrom blocks, my initial consideration is to have the first element be the base and merge each referenced path ascending the list by index. If an element later in the list has a map with the same key, the later element overwrites the earlier ones.
For ExternalSecrets with both data and dataFrom, the dataFrom map would be generated first, as it is the less specific of the two, and the data block would be merged in, with any secretKey referenced in data overwriting keys from the dataFrom block if present in both.
Describe the bug:
Running make test produces the following error:
Failure [0.022 seconds]
[BeforeSuite] BeforeSuite
~/oss/secret-manager/pkg/controller/externalsecret/suite_test.go:58
Unexpected error:
<*fmt.wrapError | 0xc000708de0>: {
msg: "failed to start the controlplane. retried 5 times: fork/exec /usr/local/kubebuilder/bin/etcd: no such file or directory",
err: {
Op: "fork/exec",
Path: "/usr/local/kubebuilder/bin/etcd",
Err: 0x2,
},
}
failed to start the controlplane. retried 5 times: fork/exec /usr/local/kubebuilder/bin/etcd: no such file or directory occurred
Expected behavior
Tests are able to run
Steps to reproduce the bug:
On macOS:
brew install kubebuilder
make test
Environment details:
macOS
± kubebuilder version
Version: version.Version{KubeBuilderVersion:"unknown", KubernetesVendor:"unknown", GitCommit:"$Format:%H$", BuildDate:"1970-01-01T00:00:00Z", GoOs:"unknown", GoArch:"unknown"}
/kind bug
Generated secret should pass any/all labels that the ExternalSecret has (but be toggleable) and have label(s) that define what manages it (in addition to the ownerreference).
Added a log line at secret-manager/pkg/internal/vault/vault.go, line 150 in 999e7d0:
fmt.Printf("readSecret secret data: %+v\n", secretData)
Verified secret is not base64 encoded via logs:
2020-10-12T22:15:30.833546792Z readSecret secret data: map[data:map[another_one:awesome yessssssssss:this is secret yetibot_secret:secret value] metadata:map[created_time:2020-10-09T19:08:57.676679845Z deletion_time: destroyed:false version:3]]
readSecret secret data: map[data:map[embedded-enabled:false observers-enabled:false] metadata:map[created_time:2020-09-21T15:41:58.434072039Z deletion_time: destroyed:false version:1]]
Then looked at the K8S Secret that secret-manager created:
± k get secret yetibot-es -ojson | jq -r '.data'
{
"another_one": "WVhkbGMyOXRaUT09",
"embedded-enabled": "Wm1Gc2MyVT0=",
"observers-enabled": "Wm1Gc2MyVT0=",
"yessssssssss": "ZEdocGN5QnBjeUJ6WldOeVpYUT0=",
"yetibot_secret": "YzJWamNtVjBJSFpoYkhWbA=="
}
± k get secret yetibot-es -ojson | jq -r '.data["another_one"]'
WVhkbGMyOXRaUT09
± k get secret yetibot-es -ojson | jq -r '.data["another_one"]' | base64 -d
YXdlc29tZQ==%
± k get secret yetibot-es -ojson | jq -r '.data["another_one"]' | base64 -d | base64 -d
awesome%
Then I went into controller.go and poked around. Removing the base64 encoding, i.e. assigning secretDataMap[secretKey] = secretData directly here, fixed my problem:
secret-manager/pkg/controller/externalsecret/controller.go
Lines 179 to 180 in 8ea959e
I also added logging to verify that the data wasn't already base64 encoded with:
fmt.Printf("Secret %s = %s\n", secretKey, v)
which prints:
2020-10-12T23:48:31.070304612Z Secret yessssssssss = this is secret
2020-10-12T23:48:31.070308806Z Secret yetibot_secret = secret value
2020-10-12T23:48:31.070313366Z Secret another_one = awesome
2020-10-12T23:48:31.070317958Z Secret embedded-enabled = false
2020-10-12T23:48:31.070322966Z Secret observers-enabled = false
Running off my own docker image based on commit 999e7d0.
Ideas? 🤔
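The two rounds of base64 -d line up with how Kubernetes serializes Secret.Data ([]byte values are base64-encoded by the API server), so encoding before assignment double-encodes. A minimal sketch reproducing the observed values:

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// doubleEncode reproduces the bug: the controller base64-encoded the value
// before assigning it to Secret.Data, and the API server base64-encodes the
// []byte again on serialization, so clients see two layers of encoding.
func doubleEncode(plain string) (once, twice string) {
	once = base64.StdEncoding.EncodeToString([]byte(plain))
	twice = base64.StdEncoding.EncodeToString([]byte(once))
	return once, twice
}

func main() {
	once, twice := doubleEncode("awesome")
	fmt.Println(once)  // YXdlc29tZQ== (what should be stored)
	fmt.Println(twice) // WVhkbGMyOXRaUT09 (what kubectl actually showed)
}
```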
Current interface is:
// Factory returns a StoreClient
type Factory interface {
    New(ctx context.Context, log logr.Logger, store smv1alpha1.GenericStore, kubeClient client.Client, kubeReader client.Reader, namespace string) (Client, error)
}
This could be simplified to:
// Factory returns a StoreClient
type Factory interface {
    New(ctx context.Context, kubeClient client.Client, store smv1alpha1.GenericStore, namespace string) (Client, error)
}
The logger can be moved into the context, and the two Kubernetes client interfaces could be merged.
I use charts.itscontained.io to install your helm chart, but today I'm getting an error message telling me:
Error: looks like "https://charts.itscontained.io" is not a valid chart repository or cannot be reached: Get https://charts.itscontained.io/index.yaml: dial tcp 167.99.26.57:443: connect: connection timed out
Are you temporarily down or did you permanently take charts.itscontained.io down?
Is your feature request related to a problem? Please describe.
I would like to create k8s secret with key "config.json" and value as SecretString retrieved from AWS SSM.
Describe the solution you'd like
apiVersion: secret-manager.itscontained.io/v1alpha1
kind: ExternalSecret
metadata:
  name: app-config
  namespace: app
spec:
  storeRef:
    name: aws-ssm
  data:
    - secretKey: config.json
      remoteRef:
        name: app-config
        property: ""  # empty string denotes the full secret string
Describe alternatives you've considered
There is no way to do it right now.
Is your feature request related to a problem? Please describe.
Currently the condition message provides very detailed error fields, but no events are recorded for the ExternalSecret resource.
Describe the solution you'd like
Kubernetes events should be used to provide more detailed error messages and history of attempts and most recent errors. This would be in line with how both built-in controllers and other Kubernetes addons operate.
/kind feature
Hi team,
The operator README mentions that Azure Key Vault support is planned, so I'm curious whether there are expected timelines for delivering this backend. If not, are there any guidelines for adding a new backend to the secret-manager operator?
Thanks a lot.
SDK v0.25.0 has breaking changes from the currently used version (v0.24.0), but should have increased performance and moves closer to v1, per AWS's blog post.
v0.26.0 is out as of this writing, but seems to have no significant breaking changes from v0.25.0.
Describe the bug:
Installing the chart multiple times on the same cluster isn't possible because the ClusterRoles collide.
Given 2 namespaces:
Running:
helm install --namespace ns1 secret-manager itscontained/secret-manager --set namespace=ns1
helm install --namespace ns2 secret-manager itscontained/secret-manager --set namespace=ns2
The first install will work.
The second install will fail:
Error: rendered manifests contain a resource that already exists. Unable to continue with install: ClusterRole "secret-manager-controller" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "ns2": current value is "ns1"
Expected behavior
The helm chart should be deployable multiple times on the same cluster and cluster-roles shouldn't collide.
Steps to reproduce the bug:
helm install --namespace ns1 secret-manager itscontained/secret-manager --set namespace=ns1
helm install --namespace ns2 secret-manager itscontained/secret-manager --set namespace=ns2
Anything else we should know?
Nope, other than I appreciate the work put in maintaining this open source project.
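A hypothetical fix sketch: derive the ClusterRole name from the release fullname (the helper name is assumed from the standard `helm create` scaffold, not taken from this chart) so two releases do not both render "secret-manager-controller":

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  # release-scoped name instead of a fixed "secret-manager-controller"
  name: {{ include "secret-manager.fullname" . }}-controller
```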
Environment details:
/kind bug
Describe the solution you'd like
Add SecretStore for secrets store in another cluster (namespace or cluster-wide).
Describe alternatives you've considered
Moving secrets to a cloud provider secret store, with each cluster having a SecretStore which has authentication to the cloud provider store.
Additional context
Our use-case is that due to network partition, one cluster has access to a secret backend, but other clusters do not. The cluster with network access to the secret backend is accessible to other clusters, so this could provide a link if needed to secrets from the other cluster.
May need more thought; some ideas around only accessing explicit Secrets which already exist, or whether relaying ExternalSecrets is allowed (e.g. an ExternalSecret in cluster B creates a new ExternalSecret in cluster A, which has access, and the secret is propagated to cluster B to be used by workloads).
/kind feature
Would be nice to have issue/PR commands available for labeling and possibly running CI tests. We don't need most of Prow's features, and I don't have much familiarity with how configurable the system is.
The build/push pipeline seems broken. It looks like the base image didn't have explicit multi-arch support, but it worked because the image manifest didn't specify an architecture and contains no binaries, so it effectively supported arm without needing different images per arch.
I think the choice for today is to either drop arm/v7 until GoogleContainerTools/distroless#377 is closed, or continue using the previous image (now available as gcr.io/distroless/static:nonroot-amd64 rather than gcr.io/distroless/static:nonroot).
Following the helm template README, I tried to reach https://charts.itscontained.io without success.
Do you have an installation guide without helm?
Sorry to open an issue; the discord link returns a 404.
Hey guys, would you mind cutting a release on latest master?
Specifically our team needs #80
Thanks!
Add a field to ExternalSecret to control the refresh period rather than relying on occasional controller resyncs.
The current design is to add a refreshInterval field of type time.Duration.
Is your feature request related to a problem? Please describe.
Only one controller using "ambient" credentials (credentials which make up the controller's environment variables or volumes) can be used per namespace/cluster. A controller with ambient creds has no way to scope which SecretStores it owns.
Describe the solution you'd like
Add a controller field to the SecretStore types which provides an optional scoping mechanism when using multiple secret-managers with ambient credentials, or mixed ambient/explicit creds. An example would be to deploy a controller with AWS IRSA authentication, which has ambient creds for AWS Secrets Manager and/or a Vault secret store with AWS auth, but is scoped to only SecretStores with the string "aws" in the controller field. Another secret-manager could be deployed to cover the default stores with explicit credentials.
Describe alternatives you've considered
One other solution is to disallow ambient credentials and force explicit credentials. This may not be preferred, as cloud providers can offer secure ways to manage ambient credentials for their systems (like AWS IRSA, #77).
/kind feature
Hi there!
Describe the bug:
Not able to create an AWS Secrets Manager secret, got the error:
2020-11-11T12:27:47.7662466Z E1111 12:27:47.765945 1 controller.go:117] controllers/ExternalSecret "msg"="error while reconciling ExternalSecret" "error"="cannot get ExternalSecret data from store: name \"mySecret\": error getting secret value: unknown endpoint, could not resolve endpoint, partition: \"all partitions\", service: \"secretsmanager\", region: \"eu-central-1\"" "externalsecret"={"Namespace":"default","Name":"test-one"}
The following resources were created:
apiVersion: secret-manager.itscontained.io/v1alpha1
kind: SecretStore
metadata:
  name: aws-secret-store
spec:
  aws:
    authSecretRef:
      accessKeyID:
        key: access-key
        name: aws-key
      secretAccessKey:
        key: secret-key
        name: aws-key
---
apiVersion: secret-manager.itscontained.io/v1alpha1
kind: ExternalSecret
metadata:
  name: test-one
spec:
  data:
    - remoteRef:
        name: mySecret
        property: key1
      secretKey: key1
  storeRef:
    name: aws-secret-store
The Secret used to authenticate on AWS is as follows:
apiVersion: v1
kind: Secret
metadata:
  name: aws-key
data:
  access-key: <redacted>
  secret-key: <redacted>
type: Opaque
The AWS Secrets Manager has the following secret:
$ aws secretsmanager get-secret-value --secret-id mySecret --region eu-central-1
{
    "ARN": "redacted",
    "Name": "mySecret",
    "VersionId": "35645498-8b3a-42bb-9e81-075807822659",
    "SecretString": "{\"key1\":\"value1\"}",
    "VersionStages": [
        "AWSCURRENT"
    ],
    "CreatedDate": "2020-11-10T14:35:03.762000+01:00"
}
I also used the Administrator user for the AWS authentication, to eliminate any possible permission errors.
Expected behavior
Generate a new Secret with the information from AWS Secrets Manager.
Steps to reproduce the bug:
Install secret-manager via Helm.
Environment details:
/kind bug
Based on changes discussed in external-secrets/kubernetes-external-secrets#477: changes to the data block and template.
Example of larger breaking changes:
apiVersion: secret-manager.itscontained.io/v1alpha1
kind: ExternalSecret
metadata:
  name: example
spec:
  data:
    - remoteRef:
        name: path/or-id
        property: value
      secretKey: example
  storeRef:
    name: vault
  template:
    metadata:
      annotations:
        new-annotation: example
becomes
apiVersion: secret-manager.itscontained.io/v1alpha2
kind: ExternalSecret
metadata:
  name: example
spec:
  data:
    example:
      key: path/or-id
      property: value
  storeRef:
    name: vault
  target:
    template:
      metadata:
        annotations:
          new-annotation: example
Hello folks,
I am trying to validate the full e2e testing framework locally, but I am having an issue with the Smoke and AWS tests.
Once the pods are up and running, the following error appears in the pod log:
E1230 07:34:13.075110 1 reflector.go:127] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:156: Failed to watch *v1.Secret: failed to list *v1.Secret: secrets is forbidden: User "system:serviceaccount:e2e-smoke-c8f7e541-1805-4065-8d2f-8a7433cb1a73:secret-manager-smoke" cannot list resource "secrets" in API group "" at the cluster scope
To make it work, I had to add the brand-new service accounts (secret-manager-smoke and secret-manager-aws), created at the namespace level, to the permissive-binding ClusterRoleBinding, as they lack cluster-wide permissions.
I am not sure I understand where/how the service accounts are created, and I'm wondering why the automatic build on this repo is not affected by this issue.
Can you please give me any hint?
Is your feature request related to a problem? Please describe.
I'd like to use this controller to provision secrets from Vault.
I suppose it should work to use the token within the service account secret with tokenSecretRef in the vault SecretStore; however, I haven't been able to get this to work, and it's quite inconvenient because the secret name isn't easily template-able.
Describe the solution you'd like
Allow the vault SecretStore type to use IAM authentication similar to vault agent.
Describe alternatives you've considered
Not sure what other sensible alternatives there are. Maybe simply referencing a service account directly and having the controller pull the token from the associated secret? Not sure if this even makes sense RBAC-wise.
Additional context
/kind feature
The idea here is that the fetched secret data could be placed within the context of a larger config which is not itself secret. This would avoid having to place the entire configuration into the secret store.
An example secret:
{
  "data": {
    "serviceBapiKey": "foo-123",
    "serviceCapiKey": "bar-456"
  }
}
Would allow for an ExternalSecret like:
apiVersion: secret-manager.itscontained.io/v1alpha1
kind: ExternalSecret
metadata:
  name: hello-service-config
  namespace: example-ns
spec:
  storeRef:
    name: vault
  data:
    - secretKey: password
      remoteRef:
        path: teamA/hello-service
        property: serviceBapiKey
  template:
    data:
      config.json: |
        {
          "apiUrl": "http://localhost:12345",
          "apiKey": {{ .data.password | quote }}
        }
Which produces:
apiVersion: v1
kind: Secret
metadata:
  name: hello-service-config
  namespace: example-ns
type: Opaque
data:
  config.json: "ewogICJhcGlVcmwiOiAiaHR0cDovL2xvY2FsaG9zdDoxMjM0NSIsCiAgImFwaUtleSI6ICJmb28tMTIzIgp9"
  # config.json: |
  #   {
  #     "apiUrl": "http://localhost:12345",
  #     "apiKey": "foo-123"
  #   }
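The templating step above can be sketched with Go's text/template. This is an illustration only: the real controller would expose its own helper set (e.g. a sprig-style `quote`), for which stdlib `printf "%q"` stands in here:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// render exposes fetched secret values under .data and substitutes them
// into a larger, mostly non-secret config body.
func render(tmplText string, data map[string]string) (string, error) {
	tmpl, err := template.New("config").Parse(tmplText)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, map[string]any{"data": data}); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, err := render(
		`{"apiUrl": "http://localhost:12345", "apiKey": {{ printf "%q" .data.password }}}`,
		map[string]string{"password": "foo-123"}, // value fetched from the store
	)
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```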
We need a docker build/release workflow.
Architecture: just amd64? Or also arm? If arm, do we need v6 and v8?
Are we pushing to Docker Hub? Do we want to push to GitHub as well? (Don't see the gain, but worth asking.)
Do you want dirty builds on all commits?
Describe the bug:
The template patch field in the ExternalSecret CRD is not handled correctly in the controller. The internal datatype is wrong, making it impossible to use the field.
Setting a basic template leads to errors like this one:
Failed to watch *v1alpha1.ExternalSecret: failed to list *v1alpha1.ExternalSecret: v1alpha1.ExternalSecretList.Items: []v1alpha1.ExternalSecret: v1alpha1.ExternalSecret.Spec: v1alpha1.ExternalSecretSpec.Template: base64Codec: invalid input, error found in #10 byte of ...|template":{"type":"k|..., bigger context ...|n"}],"storeRef":{"name":"aws-secrets"},"template":{"type":"kubernetes.io/dockerconfigjson"}},"status|...
Expected behavior
Template should patch the resulting secret
Steps to reproduce the bug:
Set a template patch like the example above (JSON format: {"type":"kubernetes.io/dockerconfigjson"})
Anything else we need to know?:
Filed a fix proposal PR #84
Environment details:
/kind bug
Describe the bug:
The secret-manager Pod restarts because of a panic in secret-manager. This started happening after I created a few SecretStores (all Vault, one per namespace) and one ExternalSecret.
Here's the panic:
I1017 15:38:24.813486 1 controller.go:132] Starting secret manager controller: version (v0.2.0) (d6720b56b6b27b68878d6e0e49515874ce0fb8b3)
W1017 15:38:24.814592 1 client_config.go:552] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I1017 15:38:24.906212 1 listener.go:44] controller-runtime/metrics "msg"="metrics server is starting to listen" "addr"=":9321"
I1017 15:38:24.906770 1 controller.go:109] Starting manager
I1017 15:38:24.906895 1 leaderelection.go:242] attempting to acquire leader lease cluster-services/secret-manager-controller...
I1017 15:38:24.907076 1 internal.go:391] controller-runtime/manager "msg"="starting metrics server" "path"="/metrics"
I1017 15:39:42.380998 1 leaderelection.go:252] successfully acquired lease cluster-services/secret-manager-controller
I1017 15:39:42.381453 1 controller.go:139] controller "msg"="Starting EventSource" "controller"="externalsecret" "reconcilerGroup"="secret-manager.itscontained.io" "reconcilerKind"="ExternalSecret" "source"={"Type":{"metadata":{"creationTimestamp":null},"spec":{"storeRef":{"name":""}},"status":{}}}
I1017 15:39:42.481835 1 controller.go:139] controller "msg"="Starting EventSource" "controller"="externalsecret" "reconcilerGroup"="secret-manager.itscontained.io" "reconcilerKind"="ExternalSecret" "source"={"Type":{"metadata":{"creationTimestamp":null}}}
I1017 15:39:42.482006 1 controller.go:146] controller "msg"="Starting Controller" "controller"="externalsecret" "reconcilerGroup"="secret-manager.itscontained.io" "reconcilerKind"="ExternalSecret"
I1017 15:39:42.482093 1 controller.go:167] controller "msg"="Starting workers" "controller"="externalsecret" "reconcilerGroup"="secret-manager.itscontained.io" "reconcilerKind"="ExternalSecret" "worker count"=1
E1017 15:39:42.487074 1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 250 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x175d340, 0x2996dc0)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:74 +0xa3
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:48 +0x82
panic(0x175d340, 0x2996dc0)
/usr/local/go/src/runtime/panic.go:969 +0x166
github.com/itscontained/secret-manager/pkg/internal/vault.(*Vault).setToken(0xc000081180, 0x1d314e0, 0xc000122018, 0x1d316a0, 0xc000392a10, 0x1, 0xc0000cc860)
/workspace/pkg/internal/vault/vault.go:217 +0x2dd
github.com/itscontained/secret-manager/pkg/internal/vault.New(0x1d314e0, 0xc000122018, 0x1d3da40, 0xc00049c6f0, 0x1d45d80, 0xc00024a090, 0x1d60fe0, 0xc0004ce780, 0xc000309bf0, 0x10, ...)
/workspace/pkg/internal/vault/vault.go:80 +0x177
github.com/itscontained/secret-manager/pkg/internal/store/base.(*Default).New(0x29e6f48, 0x1d314e0, 0xc000122018, 0x1d3da40, 0xc00049c6f0, 0x1d60fe0, 0xc0004ce780, 0x1d45d80, 0xc00024a090, 0x7f717e1409a8, ...)
/workspace/pkg/internal/store/base/store.go:42 +0x13e
github.com/itscontained/secret-manager/pkg/controller/externalsecret.(*ExternalSecretReconciler).Reconcile.func1(0x0, 0x0)
/workspace/pkg/controller/externalsecret/controller.go:90 +0x1ea
sigs.k8s.io/controller-runtime/pkg/controller/controllerutil.mutate(0xc000080f00, 0xc000309bf0, 0x10, 0xc000309bd0, 0xd, 0x1d027e0, 0xc0004ce640, 0x1d027e0, 0xc0004ce640)
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/controller/controllerutil/controllerutil.go:228 +0x2b
sigs.k8s.io/controller-runtime/pkg/controller/controllerutil.CreateOrUpdate(0x1d314e0, 0xc000122018, 0x1d45d80, 0xc00024a090, 0x1d027e0, 0xc0004ce640, 0xc000080f00, 0x1cfed20, 0xc00016d340, 0x0, ...)
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/controller/controllerutil/controllerutil.go:202 +0x1db
github.com/itscontained/secret-manager/pkg/controller/externalsecret.(*ExternalSecretReconciler).Reconcile(0xc0004967e0, 0xc000309bf0, 0x10, 0xc000309bd0, 0xd, 0xc0006b4540, 0xc0006e2bd0, 0xc0001d5688, 0xc0001d5680)
/workspace/pkg/controller/externalsecret/controller.go:84 +0x3b8
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc0004d2000, 0x17c57a0, 0xc000836e00, 0x0)
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:235 +0x284
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc0004d2000, 0x203000)
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:209 +0xae
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker(0xc0004d2000)
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:188 +0x2b
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0002b9000)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0002b9000, 0x1cf6b40, 0xc0006b4510, 0x1, 0xc000100660)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0xa3
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0002b9000, 0x3b9aca00, 0x0, 0xc000455b01, 0xc000100660)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(0xc0002b9000, 0x3b9aca00, 0xc000100660)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90 +0x4d
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:170 +0x411
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x13d535d]
goroutine 250 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:55 +0x105
panic(0x175d340, 0x2996dc0)
/usr/local/go/src/runtime/panic.go:969 +0x166
github.com/itscontained/secret-manager/pkg/internal/vault.(*Vault).setToken(0xc000081180, 0x1d314e0, 0xc000122018, 0x1d316a0, 0xc000392a10, 0x1, 0xc0000cc860)
/workspace/pkg/internal/vault/vault.go:217 +0x2dd
github.com/itscontained/secret-manager/pkg/internal/vault.New(0x1d314e0, 0xc000122018, 0x1d3da40, 0xc00049c6f0, 0x1d45d80, 0xc00024a090, 0x1d60fe0, 0xc0004ce780, 0xc000309bf0, 0x10, ...)
/workspace/pkg/internal/vault/vault.go:80 +0x177
github.com/itscontained/secret-manager/pkg/internal/store/base.(*Default).New(0x29e6f48, 0x1d314e0, 0xc000122018, 0x1d3da40, 0xc00049c6f0, 0x1d60fe0, 0xc0004ce780, 0x1d45d80, 0xc00024a090, 0x7f717e1409a8, ...)
/workspace/pkg/internal/store/base/store.go:42 +0x13e
github.com/itscontained/secret-manager/pkg/controller/externalsecret.(*ExternalSecretReconciler).Reconcile.func1(0x0, 0x0)
/workspace/pkg/controller/externalsecret/controller.go:90 +0x1ea
sigs.k8s.io/controller-runtime/pkg/controller/controllerutil.mutate(0xc000080f00, 0xc000309bf0, 0x10, 0xc000309bd0, 0xd, 0x1d027e0, 0xc0004ce640, 0x1d027e0, 0xc0004ce640)
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/controller/controllerutil/controllerutil.go:228 +0x2b
sigs.k8s.io/controller-runtime/pkg/controller/controllerutil.CreateOrUpdate(0x1d314e0, 0xc000122018, 0x1d45d80, 0xc00024a090, 0x1d027e0, 0xc0004ce640, 0xc000080f00, 0x1cfed20, 0xc00016d340, 0x0, ...)
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/controller/controllerutil/controllerutil.go:202 +0x1db
github.com/itscontained/secret-manager/pkg/controller/externalsecret.(*ExternalSecretReconciler).Reconcile(0xc0004967e0, 0xc000309bf0, 0x10, 0xc000309bd0, 0xd, 0xc0006b4540, 0xc0006e2bd0, 0xc0001d5688, 0xc0001d5680)
/workspace/pkg/controller/externalsecret/controller.go:84 +0x3b8
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc0004d2000, 0x17c57a0, 0xc000836e00, 0x0)
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:235 +0x284
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc0004d2000, 0x203000)
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:209 +0xae
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker(0xc0004d2000)
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:188 +0x2b
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0002b9000)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0002b9000, 0x1cf6b40, 0xc0006b4510, 0x1, 0xc000100660)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0xa3
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0002b9000, 0x3b9aca00, 0x0, 0xc000455b01, 0xc000100660)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(0xc0002b9000, 0x3b9aca00, 0xc000100660)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90 +0x4d
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:170 +0x411
Expected behavior
An error message, not a panic.
Steps to reproduce the bug:
apiVersion: secret-manager.itscontained.io/v1alpha1
kind: SecretStore
metadata:
  name: vault
  namespace: cluster-services
spec:
  vault:
    auth:
      kubernetes:
        mountPath: cluster-alpha
        role: vault-authentication
    path: secret/data
    server: ""
---
apiVersion: secret-manager.itscontained.io/v1alpha1
kind: ExternalSecret
metadata:
  name: hello-service
  namespace: cluster-services
spec:
  data:
    - remoteRef:
        name: kv_test/hugues
        property: a
      secretKey: password
  storeRef:
    name: vault
Anything else we need to know?:
It's possible the authentication on my end is not correctly configured, and/or that the paths specified in my ExternalSecret are wrong and don't exist in my Vault.
In any case, secret-manager shouldn't panic.
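The trace points at setToken in pkg/internal/vault dereferencing a nil pointer. A hypothetical guard sketch showing the shape of a fix, i.e. validating the auth block and returning a reconcile error instead of panicking (all type and field names here are made up; the real struct fields differ):

```go
package main

import (
	"errors"
	"fmt"
)

// kubernetesAuth and vaultAuth stand in for the real Vault auth config types.
type kubernetesAuth struct {
	Role      string
	MountPath string
}

type vaultAuth struct {
	Kubernetes *kubernetesAuth
	TokenRef   *string
}

// setToken checks every pointer field before use, so a half-configured
// SecretStore produces an error condition rather than a crash loop.
func setToken(auth *vaultAuth) error {
	if auth == nil {
		return errors.New("vault auth is not configured")
	}
	switch {
	case auth.TokenRef != nil:
		return nil // use the referenced token
	case auth.Kubernetes != nil:
		return nil // perform the Kubernetes login
	default:
		return errors.New("no supported vault auth method configured")
	}
}

func main() {
	fmt.Println(setToken(nil)) // error instead of panic
	fmt.Println(setToken(&vaultAuth{Kubernetes: &kubernetesAuth{Role: "r"}})) // <nil>
}
```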
Environment details:
/kind bug
The Vault client has been added but needs increased test coverage.
Current use cases:
template is a valid secret which can be merged with the generated secret
Describe the bug:
On a brand-new installation of secret-manager with the default values (no value modified):
$ helm get values secret-manager --all
COMPUTED VALUES:
affinity: {}
apiServerHost: ""
extraArgs: []
fullnameOverride: ""
healthCheck:
  enabled: true
  port: 8400
image:
  pullPolicy: IfNotPresent
  repository: itscontained/secret-manager
  tag: ""
imagePullSecrets: []
installCRDs: false
kubeConfig: ""
leaderElect: true
nameOverride: ""
namespace: ""
nodeSelector: {}
podAnnotations: {}
podSecurityContext: {}
prometheus:
  enabled: false
  service:
    annotations: {}
    labels: {}
    port: 9321
rbac:
  create: true
replicaCount: 1
resources: {}
securityContext: {}
serviceAccount:
  annotations: {}
  create: true
  name: ""
tolerations: []
The pod seems to have restarted a few times:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
secret-manager-79d4d7d758-qqswk 1/1 Running 10 40m
The logs are the following:
I1015 04:24:50.801817 1 controller.go:132] Starting secret manager controller: version (v0.2.0) (d6720b56b6b27b68878d6e0e49515874ce0fb8b3)
W1015 04:24:50.802776 1 client_config.go:552] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I1015 04:24:50.899921 1 listener.go:44] controller-runtime/metrics "msg"="metrics server is starting to listen" "addr"=":9321"
I1015 04:24:50.900466 1 controller.go:109] Starting manager
I1015 04:24:50.900542 1 leaderelection.go:242] attempting to acquire leader lease cluster-services/secret-manager-controller...
I1015 04:24:50.900777 1 internal.go:391] controller-runtime/manager "msg"="starting metrics server" "path"="/metrics"
I1015 04:26:08.369311 1 leaderelection.go:252] successfully acquired lease cluster-services/secret-manager-controller
I1015 04:26:08.369778 1 controller.go:139] controller "msg"="Starting EventSource" "controller"="externalsecret" "reconcilerGroup"="secret-manager.itscontained.io" "reconcilerKind"="ExternalSecret" "source"={"Type":{"metadata":{"creationTimestamp":null},"spec":{"storeRef":{"name":""}},"status":{}}}
E1015 04:26:08.381047 1 source.go:117] controller-runtime/source "msg"="if kind is a CRD, it should be installed before calling Start" "error"="no matches for kind \"ExternalSecret\" in version \"secret-manager.itscontained.io/v1alpha1\"" "kind"={"Group":"secret-manager.itscontained.io","Kind":"ExternalSecret"}
E1015 04:26:08.399643 1 controller.go:111] Error while running manager: no matches for kind "ExternalSecret" in version "secret-manager.itscontained.io/v1alpha1"
Error: no matches for kind "ExternalSecret" in version "secret-manager.itscontained.io/v1alpha1"
Usage:
secret-manager-controller [flags]
Flags:
--health-port int The port number to listen on for health connections. (default 8400)
-h, --help help for secret-manager-controller
--kubeconfig string Path to a kubeconfig. Only required if out-of-cluster.
--leader-elect If true, secret-manager will perform leader election between instances to ensure no more than one instance of secret-manager operates at a time (default true)
--leader-election-lease-duration duration The duration that non-leader candidates will wait after observing a leadership renewal until attempting to acquire leadership of a led but un-renewed leader slot. This is effectively the maximum duration that a leader can be stopped before it is replaced by another candidate. This is only applicable if leader election is enabled. (default 1m0s)
--leader-election-namespace string Namespace used to perform leader election. Only used if leader election is enabled (default "kube-system")
--leader-election-renew-deadline duration The interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than or equal to the lease duration. This is only applicable if leader election is enabled. (default 45s)
--leader-election-retry-period duration The duration the clients should wait between attempting acquisition and renewal of a leadership. This is only applicable if leader election is enabled. (default 15s)
--master string Optional ApiServer host address to connect to. If not specified, autoconfiguration will be attempted.
--metric-port int The port number that the metrics endpoint should listen on. (default 9321)
--namespace string If set, this limits the scope of secret-manager to a single namespace and ClusterSecretStores are disabled. If not specified, all namespaces will be watched
F1015 04:26:08.400131 1 main.go:33] error executing command: no matches for kind "ExternalSecret" in version "secret-manager.itscontained.io/v1alpha1"
goroutine 1 [running]:
k8s.io/klog/v2.stacks(0xc000124001, 0xc00067e000, 0x9d, 0xf6)
/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:996 +0xb8
k8s.io/klog/v2.(*loggingT).output(0x29bb700, 0xc000000003, 0x0, 0x0, 0xc0000aa000, 0x2910fda, 0x7, 0x21, 0x0)
/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:945 +0x19d
k8s.io/klog/v2.(*loggingT).printf(0x29bb700, 0x3, 0x0, 0x0, 0x197fc9e, 0x1b, 0xc000a9ff68, 0x1, 0x1)
/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:733 +0x17b
k8s.io/klog/v2.Fatalf(...)
/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:1463
main.main()
/workspace/cmd/controller/main.go:33 +0x138
goroutine 18 [chan receive, 1 minutes]:
k8s.io/klog.(*loggingT).flushDaemon(0x29bb620)
/go/pkg/mod/k8s.io/[email protected]/klog.go:1010 +0x8b
created by k8s.io/klog.init.0
/go/pkg/mod/k8s.io/[email protected]/klog.go:411 +0xd6
goroutine 5 [select, 1 minutes]:
go.opencensus.io/stats/view.(*worker).start(0xc000382000)
/go/pkg/mod/[email protected]/stats/view/worker.go:276 +0x100
created by go.opencensus.io/stats/view.init.0
/go/pkg/mod/[email protected]/stats/view/worker.go:34 +0x68
goroutine 6 [chan receive, 1 minutes]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x29bb700)
/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:1131 +0x8b
created by k8s.io/klog/v2.init.0
/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:416 +0xd6
goroutine 50 [IO wait]:
internal/poll.runtime_pollWait(0x7f801f028f18, 0x72, 0xffffffffffffffff)
/usr/local/go/src/runtime/netpoll.go:203 +0x55
internal/poll.(*pollDesc).wait(0xc000302598, 0x72, 0x4300, 0x4363, 0xffffffffffffffff)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0xc000302580, 0xc0006e0000, 0x4363, 0x4363, 0x0, 0x0, 0x0)
/usr/local/go/src/internal/poll/fd_unix.go:169 +0x19b
net.(*netFD).Read(0xc000302580, 0xc0006e0000, 0x4363, 0x4363, 0x203000, 0x420715, 0xc0009afdc0)
/usr/local/go/src/net/fd_unix.go:202 +0x4f
net.(*conn).Read(0xc00000e3e0, 0xc0006e0000, 0x4363, 0x4363, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:184 +0x8e
crypto/tls.(*atLeastReader).Read(0xc0009afdc0, 0xc0006e0000, 0x4363, 0x4363, 0x362, 0x431a, 0xc0001c79a8)
/usr/local/go/src/crypto/tls/conn.go:760 +0x60
bytes.(*Buffer).ReadFrom(0xc000113ad8, 0x1cf4d60, 0xc0009afdc0, 0x40a1e5, 0x177a6e0, 0x18f31e0)
/usr/local/go/src/bytes/buffer.go:204 +0xb1
crypto/tls.(*Conn).readFromUntil(0xc000113880, 0x1cf7100, 0xc00000e3e0, 0x5, 0xc00000e3e0, 0x8)
/usr/local/go/src/crypto/tls/conn.go:782 +0xec
crypto/tls.(*Conn).readRecordOrCCS(0xc000113880, 0x0, 0x0, 0xc0001c7d18)
/usr/local/go/src/crypto/tls/conn.go:589 +0x115
crypto/tls.(*Conn).readRecord(...)
/usr/local/go/src/crypto/tls/conn.go:557
crypto/tls.(*Conn).Read(0xc000113880, 0xc000476000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/usr/local/go/src/crypto/tls/conn.go:1233 +0x15b
bufio.(*Reader).Read(0xc00046c4e0, 0xc000454118, 0x9, 0x9, 0xc0001c7d18, 0x1a2ab00, 0x9198c5)
/usr/local/go/src/bufio/bufio.go:226 +0x24f
io.ReadAtLeast(0x1cf4ba0, 0xc00046c4e0, 0xc000454118, 0x9, 0x9, 0x9, 0xc000102050, 0x0, 0x1cf4f00)
/usr/local/go/src/io/io.go:310 +0x87
io.ReadFull(...)
/usr/local/go/src/io/io.go:329
golang.org/x/net/http2.readFrameHeader(0xc000454118, 0x9, 0x9, 0x1cf4ba0, 0xc00046c4e0, 0x0, 0x0, 0xc0009d4540, 0x0)
/go/pkg/mod/golang.org/x/[email protected]/http2/frame.go:237 +0x87
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0004540e0, 0xc0009d4540, 0x0, 0x0, 0x0)
/go/pkg/mod/golang.org/x/[email protected]/http2/frame.go:492 +0xa1
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc0001c7fa8, 0x0, 0x0)
/go/pkg/mod/golang.org/x/[email protected]/http2/transport.go:1794 +0xd8
golang.org/x/net/http2.(*ClientConn).readLoop(0xc000001980)
/go/pkg/mod/golang.org/x/[email protected]/http2/transport.go:1716 +0x6f
created by golang.org/x/net/http2.(*Transport).newClientConn
/go/pkg/mod/golang.org/x/[email protected]/http2/transport.go:695 +0x64a
goroutine 38 [syscall, 1 minutes]:
os/signal.signal_recv(0x463e66)
/usr/local/go/src/runtime/sigqueue.go:147 +0x9c
os/signal.loop()
/usr/local/go/src/os/signal/signal_unix.go:23 +0x22
created by os/signal.Notify.func1
/usr/local/go/src/os/signal/signal.go:127 +0x44
goroutine 40 [chan receive, 1 minutes]:
sigs.k8s.io/controller-runtime/pkg/manager/signals.SetupSignalHandler.func1(0xc000077980, 0xc000380300)
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/signals/signal.go:36 +0x34
created by sigs.k8s.io/controller-runtime/pkg/manager/signals.SetupSignalHandler
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/signals/signal.go:35 +0xd0
goroutine 41 [chan receive, 1 minutes]:
k8s.io/apimachinery/pkg/watch.(*Broadcaster).loop(0xc000297240)
/go/pkg/mod/k8s.io/[email protected]/pkg/watch/mux.go:207 +0x66
created by k8s.io/apimachinery/pkg/watch.NewBroadcaster
/go/pkg/mod/k8s.io/[email protected]/pkg/watch/mux.go:75 +0xcc
goroutine 174 [chan receive]:
k8s.io/client-go/tools/record.(*eventBroadcasterImpl).StartEventWatcher.func1(0x1d04aa0, 0xc0007fe120, 0xc0007fe0f0)
/go/pkg/mod/k8s.io/[email protected]/tools/record/event.go:288 +0xaa
created by k8s.io/client-go/tools/record.(*eventBroadcasterImpl).StartEventWatcher
/go/pkg/mod/k8s.io/[email protected]/tools/record/event.go:286 +0x6e
goroutine 180 [chan send]:
sigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startLeaderElection.func1()
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/internal.go:628 +0x8c
k8s.io/client-go/tools/leaderelection.(*LeaderElector).Run.func1(0xc0005f8120)
/go/pkg/mod/k8s.io/[email protected]/tools/leaderelection/leaderelection.go:200 +0x40
k8s.io/client-go/tools/leaderelection.(*LeaderElector).Run(0xc0005f8120, 0x1d314a0, 0xc00012b540)
/go/pkg/mod/k8s.io/[email protected]/tools/leaderelection/leaderelection.go:209 +0x119
created by sigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startLeaderElection
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/internal.go:649 +0x217
goroutine 230 [chan receive, 1 minutes]:
k8s.io/client-go/util/workqueue.(*Type).updateUnfinishedWorkLoop(0xc0004709c0)
/go/pkg/mod/k8s.io/[email protected]/util/workqueue/queue.go:198 +0xac
created by k8s.io/client-go/util/workqueue.newQueue
/go/pkg/mod/k8s.io/[email protected]/util/workqueue/queue.go:58 +0x132
goroutine 175 [chan receive, 1 minutes]:
k8s.io/client-go/tools/record.(*eventBroadcasterImpl).StartEventWatcher.func1(0x1d04aa0, 0xc0007fe150, 0xc0007d6e70)
/go/pkg/mod/k8s.io/[email protected]/tools/record/event.go:288 +0xaa
created by k8s.io/client-go/tools/record.(*eventBroadcasterImpl).StartEventWatcher
/go/pkg/mod/k8s.io/[email protected]/tools/record/event.go:286 +0x6e
Expected behavior
No fatal error.
Steps to reproduce the bug:
I simply installed the chart on my EKS cluster (Kubernetes 1.18).
Anything else we need to know?:
The Pod hasn't restarted while I was writing this bug report, so the total restart count is still 10. However, I noticed the restarts while I was doing a helm upgrade and setting installCRDs to true, so maybe the fatal error is related to the previously missing CRDs (that's pure conjecture, though).
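If that conjecture holds, the crash should stop once the CRDs exist before the controller starts. One way to ensure that, reusing the chart invocation from the README (the --set flag maps to the installCRDs value in the computed values above), would be:

```
helm upgrade -i secret-manager itscontained/secret-manager --set installCRDs=true
```

Alternatively, the CRDs could be applied from deploy/crds/ before installing the chart.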
Environment details:
secret-manager 0.2.0, chart 0.1.2
/kind bug
Chart exists but most flags are not accessible.
MVP:
Describe the bug:
This StackOverflow answer explains the differences between GNU and BSD/macOS sed. The current Makefile uses sed in a way that errors under BSD/macOS sed:
± make crds-to-chart
cp deploy/crds/*.yaml deploy/charts/secret-manager/templates/crds/; \
for i in deploy/charts/secret-manager/templates/crds/*.yaml; do \
sed -i '1s/.*/{{- if .Values.installCRDs }}/;$a{{- end }}' $i; \
done
sed: 1: "deploy/charts/secret-ma ...": extra characters at the end of d command
sed: 1: "deploy/charts/secret-ma ...": extra characters at the end of d command
sed: 1: "deploy/charts/secret-ma ...": extra characters at the end of d command
make: *** [crds-to-chart] Error 1
Expected behavior
Expect make targets to work across OSes.
Steps to reproduce the bug:
Run make crds-to-chart
on macOS.
Workaround
macOS users can install GNU sed with brew install gnu-sed and then follow the formula's instructions to set up $PATH so GNU sed is preferred over the macOS sed.
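Beyond swapping in GNU sed, the Makefile target itself could be made portable. A sketch, assuming the goal is exactly what the sed one-liner does (replace line 1 with the opening Helm conditional and append the closing one); example-crd.yaml is a hypothetical stand-in for the files under deploy/crds/:

```shell
# Build the wrapped file with printf/tail and a temp file instead of the
# GNU-only `sed -i '1s/.../;$a...'`; this behaves the same under BSD and
# GNU userlands.
set -eu
crd=example-crd.yaml
# Create a stand-in two-line CRD file (`---` plus a manifest line).
printf '%s\n' '---' 'kind: CustomResourceDefinition' > "$crd"
{
  printf '%s\n' '{{- if .Values.installCRDs }}'  # replaces line 1, as 1s/.*/.../ did
  tail -n +2 "$crd"                              # keep everything after line 1
  printf '%s\n' '{{- end }}'                     # appended, as $a did
} > "$crd.tmp"
mv "$crd.tmp" "$crd"
cat "$crd"
```

Inside the Makefile's for-loop, the same lines would operate on $i (escaped as $$i) in place of $crd.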
/kind bug