vault-secrets-operator

Create Kubernetes secrets from Vault for a secure GitOps based workflow.

The Vault Secrets Operator creates Kubernetes secrets from Vault. The idea behind it is to manage secrets in a Kubernetes cluster using a secure GitOps based workflow; for more background I recommend the article "Managing Secrets in Kubernetes" from Weaveworks. With the Vault Secrets Operator you commit a custom resource, rather than the secret itself, to your Git repository. When you apply such a resource to your Kubernetes cluster, the operator looks up the real secret in Vault and creates the corresponding Kubernetes secret. If you are currently using a tool like Sealed Secrets for this workflow, the Vault Secrets Operator can serve as a replacement.

Installation

The Vault Secrets Operator can be installed via Helm. A list of all configurable values can be found here. The chart assumes a Vault server running at http://vault:8200, but this can be overridden by specifying, for example, --set vault.address=https://vault.example.com

helm repo add ricoberger https://ricoberger.github.io/helm-charts
helm repo update

helm upgrade --install vault-secrets-operator ricoberger/vault-secrets-operator

Prepare Vault

The Vault Secrets Operator supports the KV Secrets Engine - Version 1 and KV Secrets Engine - Version 2. To create new secret engines under the paths kvv1 and kvv2, you can run the following commands:

vault secrets enable -path=kvv1 -version=1 kv
vault secrets enable -path=kvv2 -version=2 kv

After you have enabled the secret engines, create a new policy for the Vault Secrets Operator. The operator only needs read access to the paths you want to use for your secrets. To create a new policy with the name vault-secrets-operator and read access to the kvv1 and kvv2 paths, you can run the following command:

cat <<EOF | vault policy write vault-secrets-operator -
path "kvv1/*" {
  capabilities = ["read"]
}

path "kvv2/data/*" {
  capabilities = ["read"]
}
EOF

To access Vault, the operator supports the Token, Kubernetes, AppRole, Username and Password, AWS, Azure and GCP auth methods.

In the next sections you can find the instructions to set up Vault for each of these authentication methods.

Token Auth Method

To use the Token auth method for authentication against the Vault API, you need to create a token. A token with the previously created policy can be created as follows:

vault token create -period=24h -policy=vault-secrets-operator

To use the created token you must pass it to the operator as an environment variable. For security reasons the operator only supports passing environment variables via a Kubernetes secret. The secret needs the keys VAULT_TOKEN and VAULT_TOKEN_LEASE_DURATION. Optionally, VAULT_TOKEN_RENEWAL_INTERVAL and VAULT_TOKEN_RENEWAL_RETRY_INTERVAL control the timing of token renewals, and VAULT_RENEW_TOKEN can be set to "false" to disable the operator's token renewal loop entirely. The secret can be created with the following commands:

export VAULT_TOKEN=
export VAULT_TOKEN_LEASE_DURATION=86400

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: vault-secrets-operator
type: Opaque
data:
  VAULT_TOKEN: $(echo -n "$VAULT_TOKEN" | base64)
  VAULT_TOKEN_LEASE_DURATION: $(echo -n "$VAULT_TOKEN_LEASE_DURATION" | base64)
EOF

This creates a secret named vault-secrets-operator. To use this secret in the Helm chart modify the values.yaml file as follows:

environmentVars:
  - name: VAULT_TOKEN
    valueFrom:
      secretKeyRef:
        name: vault-secrets-operator
        key: VAULT_TOKEN
  - name: VAULT_TOKEN_LEASE_DURATION
    valueFrom:
      secretKeyRef:
        name: vault-secrets-operator
        key: VAULT_TOKEN_LEASE_DURATION

Kubernetes Auth Method

The recommended way to authenticate is the Kubernetes auth method, which requires a service account for communication between Vault and the Vault Secrets Operator. If you installed the operator via Helm this service account is created for you. The name of the created service account is vault-secrets-operator. Use the following commands to set the environment variables for the activation of the Kubernetes auth method:

export VAULT_SECRETS_OPERATOR_NAMESPACE=$(kubectl get sa vault-secrets-operator -o jsonpath="{.metadata.namespace}")
export VAULT_SECRET_NAME=$(kubectl get secret vault-secrets-operator -o jsonpath="{.metadata.name}")
export SA_JWT_TOKEN=$(kubectl get secret $VAULT_SECRET_NAME -o jsonpath="{.data.token}" | base64 --decode; echo)
export SA_CA_CRT=$(kubectl get secret $VAULT_SECRET_NAME -o jsonpath="{.data['ca\.crt']}" | base64 --decode; echo)
export K8S_HOST=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')

# Verify the environment variables
env | grep -E 'VAULT_SECRETS_OPERATOR_NAMESPACE|VAULT_SECRET_NAME|SA_JWT_TOKEN|SA_CA_CRT|K8S_HOST'

# To discover the service account issuer the following commands can be used:
kubectl proxy
curl --silent http://127.0.0.1:8001/api/v1/namespaces/default/serviceaccounts/default/token -H "Content-Type: application/json" -X POST -d '{"apiVersion": "authentication.k8s.io/v1", "kind": "TokenRequest"}' | jq -r '.status.token' | cut -d . -f2 | base64 --decode

Enable the Kubernetes auth method at the default path (auth/kubernetes) and finish the configuration of Vault:

vault auth enable kubernetes

# Tell Vault how to communicate with the Kubernetes cluster
vault write auth/kubernetes/config \
  issuer="https://kubernetes.default.svc.cluster.local" \
  token_reviewer_jwt="$SA_JWT_TOKEN" \
  kubernetes_host="$K8S_HOST" \
  kubernetes_ca_cert="$SA_CA_CRT"

# Create a role named 'vault-secrets-operator' to map the Kubernetes service account to Vault policies and a default token TTL
vault write auth/kubernetes/role/vault-secrets-operator \
  bound_service_account_names="vault-secrets-operator" \
  bound_service_account_namespaces="$VAULT_SECRETS_OPERATOR_NAMESPACE" \
  policies=vault-secrets-operator \
  ttl=24h

# If you're running Vault inside Kubernetes, you can alternatively exec into any Vault pod and run this...
# In modern versions of vault, the token / host / CA cert will be fetched automatically.
# https://github.com/hashicorp/vault-plugin-auth-kubernetes/issues/121#issuecomment-1046949951
# vault write auth/kubernetes/config \
#   kubernetes_host=https://${KUBERNETES_PORT_443_TCP_ADDR}:443

When you deploy the Vault Secrets Operator via the Helm chart you have to set the vault.authMethod property to kubernetes in the values.yaml file to use the Kubernetes auth method instead of the default Token auth method.

vault:
  authMethod: kubernetes

AppRole Auth Method

To use the AppRole auth method for authentication against the Vault API, you need to create a new AppRole.

# Enable AppRole auth method:
vault auth enable approle

# An AppRole with the previously created policy can be created as follows:
vault write auth/approle/role/vault-secrets-operator \
  token_policies=vault-secrets-operator

# Get AppRole ID:
vault read auth/approle/role/vault-secrets-operator/role-id

# Create a new secret for AppRole:
vault write -f auth/approle/role/vault-secrets-operator/secret-id

Use the following commands to set the environment variables for the activation of the AppRole auth method:

export VAULT_AUTH_METHOD=approle
export VAULT_ROLE_ID=
export VAULT_SECRET_ID=
export VAULT_TOKEN_MAX_TTL=86400

When you deploy the Vault Secrets Operator via Helm chart you have to set the vault.authMethod property to approle in the values.yaml file, to use the AppRole auth method instead of the default Token auth method.

vault:
  authMethod: approle

Set VAULT_TOKEN_MAX_TTL (default: 16 days) to the same value as, or lower than, the token_max_ttl of the AppRole (Vault default: 32 days) to ensure re-authentication in time. Mounting the Vault ROLE_ID and SECRET_ID secrets as volumes is supported. It requires image.volumeMounts to be populated, VAULT_ROLE_ID_PATH and VAULT_SECRET_ID_PATH to be set in environmentVars (or exported as shell variables), and volumes to be populated. See the example below:

NOTE: image.volumeMounts[].mountPath must match environmentVars[].value for the respective ROLE_ID or SECRET_ID. See the Kubernetes documentation: Using Secrets as files from a Pod.

image:
  volumeMounts:
    - name: vault-role-id
      mountPath: "/etc/vault/role/"
      readOnly: true
    - name: vault-secret-id
      mountPath: "/etc/vault/secret/"
      readOnly: true
environmentVars:
  - name: VAULT_ROLE_ID_PATH
    value: "/etc/vault/role/id"
  - name: VAULT_SECRET_ID_PATH
    value: "/etc/vault/secret/id"
volumes:
  - name: vault-role-id
    secret:
      secretName: vault-secrets-operator
      items:
        - key: VAULT_ROLE_ID
          path: "id"
  - name: vault-secret-id
    secret:
      secretName: vault-secrets-operator
      items:
        - key: VAULT_SECRET_ID
          path: "id"

Username and Password

To use the Username and Password auth method for authentication against the Vault API, you need to create a new user.

# Enable userpass auth method
vault auth enable userpass

# Create a user if not already created
vault write auth/userpass/users/<username> password=<password> policies=<policies>

Use the following commands to set the environment variables for the activation of the UserPass auth method:

export VAULT_AUTH_METHOD=userpass
export VAULT_USER=
export VAULT_PASSWORD=
export VAULT_TOKEN_MAX_TTL=120

When you deploy the Vault Secrets Operator via Helm chart you have to set the vault.authMethod property to userpass in the values.yaml file, to use the UserPass auth method instead of the default Token auth method.

vault:
  authMethod: userpass

AWS Auth Method

You can use either the ec2 or iam auth type on EKS clusters to authenticate against the Vault API; see the Vault documentation for the AWS auth method for the required backend setup. Then you can enable the auth method with the following environment variables:

export VAULT_AUTH_METHOD=aws
export VAULT_AWS_PATH=auth/aws
export VAULT_AWS_ROLE=vault-secrets-operator
export VAULT_AWS_AUTH_TYPE=iam

If you deploy the Vault Secrets Operator via Helm you have to set the vault.authMethod, vault.awsPath, vault.awsRole and vault.awsAuthType values in the values.yaml file.

Azure Auth Method

You can use the managed system identity provided on AKS clusters to authenticate against the Vault API. To do that you will need to set up an auth backend as described in the Vault documentation for the Azure auth method. Then you can set up the auth method with the following environment variables:

export VAULT_AUTH_METHOD=azure
export VAULT_AZURE_PATH=auth/azure
export VAULT_AZURE_ROLE=default
export VAULT_AZURE_ISSCALESET=true # Set this to true if the Kubernetes nodes are in a VMSS and not isolated VMs (the default in AKS)

If you deploy the Vault Secrets Operator via Helm you have to set the vault.authMethod, vault.azurePath, vault.azureRole and vault.azureScaleset values in the values.yaml file.

GCP Auth Method

You can use either the gce or iam auth type on GKE clusters to authenticate against the Vault API; see the Vault documentation for the GCP auth method for the required backend setup. Then you can enable the auth method with the following environment variables:

export VAULT_AUTH_METHOD=gcp
export VAULT_GCP_PATH=auth/gcp
export VAULT_GCP_ROLE=vault-secrets-operator
export VAULT_GCP_AUTH_TYPE=iam

If you deploy the Vault Secrets Operator via Helm you have to set the vault.authMethod, vault.gcpPath, vault.gcpRole and vault.gcpAuthType values in the values.yaml file.

Usage

Secret Engine

Create a Vault secret named example-vaultsecret in each engine:

vault kv put kvv1/example-vaultsecret foo=bar hello=world

# For the KVv2 engine, each put creates a new version (here: versions 1, 2 and 3):
vault kv put kvv2/example-vaultsecret foo=bar
vault kv put kvv2/example-vaultsecret hello=world
vault kv put kvv2/example-vaultsecret foo=bar hello=world

Deploy the custom resource kvv1-example-vaultsecret to your Kubernetes cluster:

apiVersion: ricoberger.de/v1alpha1
kind: VaultSecret
metadata:
  name: kvv1-example-vaultsecret
spec:
  keys:
    - foo
  path: kvv1/example-vaultsecret
  type: Opaque

The Vault Secrets Operator creates a Kubernetes secret named kvv1-example-vaultsecret with the type Opaque from this CR:

apiVersion: v1
data:
  foo: YmFy
kind: Secret
metadata:
  labels:
    created-by: vault-secrets-operator
  name: kvv1-example-vaultsecret
type: Opaque

You can also omit the keys spec to create a Kubernetes secret which contains all keys from the Vault secret:

apiVersion: v1
data:
  foo: YmFy
  hello: d29ybGQ=
kind: Secret
metadata:
  labels:
    created-by: vault-secrets-operator
  name: kvv1-example-vaultsecret
type: Opaque

To deploy a custom resource kvv2-example-vaultsecret, which uses a secret from the KV Secrets Engine - Version 2, you can use the following:

apiVersion: ricoberger.de/v1alpha1
kind: VaultSecret
metadata:
  name: kvv2-example-vaultsecret
spec:
  path: kvv2/example-vaultsecret
  type: Opaque

The Vault Secrets Operator will create a secret which looks like the following:

apiVersion: v1
data:
  foo: YmFy
  hello: d29ybGQ=
kind: Secret
metadata:
  labels:
    created-by: vault-secrets-operator
  name: kvv2-example-vaultsecret
type: Opaque

For secrets using the KVv2 secret engine you can also specify the version of the secret you want to deploy:

apiVersion: ricoberger.de/v1alpha1
kind: VaultSecret
metadata:
  name: kvv2-example-vaultsecret
spec:
  path: kvv2/example-vaultsecret
  type: Opaque
  version: 2

The resulting Kubernetes secret will be:

apiVersion: v1
data:
  hello: d29ybGQ=
kind: Secret
metadata:
  labels:
    created-by: vault-secrets-operator
  name: kvv2-example-vaultsecret
type: Opaque

The spec.type and spec.keys fields are handled in the same way for both versions of the KV secret engine. The spec.version field is only processed when the secret is saved under a KVv2 secret engine. If you set the VAULT_RECONCILIATION_TIME environment variable to a value greater than 0, every secret is reconciled after the given time (in seconds). This means that when you do not specify spec.version, the Kubernetes secret is automatically updated if the Vault secret changes. To set the VAULT_RECONCILIATION_TIME environment variable in the Helm chart, use the vault.reconciliationTime value.

Binary data stored in Vault must be base64 encoded. The spec.isBinary field can be used to prevent such data from being base64 encoded a second time when it is stored as a secret in Kubernetes.

For example, let's set foo to bar in base64 encoded form (i.e. YmFyCg==).

vault kv put kvv1/example-vaultsecret foo=YmFyCg==

You can specify spec.isBinary to indicate that this is binary data which is already base64 encoded:

apiVersion: ricoberger.de/v1alpha1
kind: VaultSecret
metadata:
  name: kvv1-example-vaultsecret
spec:
  keys:
    - foo
  isBinary: true
  path: kvv1/example-vaultsecret
  type: Opaque

The resulting Kubernetes secret will be:

apiVersion: v1
data:
  foo: YmFyCg==
kind: Secret
metadata:
  labels:
    created-by: vault-secrets-operator
  name: kvv1-example-vaultsecret
type: Opaque

The value for foo stays as YmFyCg== which does not get base64 encoded again.

It is also possible to change the default reconciliation strategy from Replace to Merge via the reconcileStrategy key in the CRD. For the default Replace strategy the complete secret is replaced. If you have an existing secret you can choose the Merge strategy to add the keys from Vault to the existing secret.

Using templated secrets

When straight-forward secrets are not sufficient and the target secrets need to be formatted in a certain way, you can use basic templating to format the secrets, for example to compose connection URIs or to render complete configuration files.

To do this, specify keys under spec.templates, each containing a valid template string. When templates is defined, the standard generation of secrets is disabled, and only the defined templates will be generated.

The templating uses the standard Go templating engine, also used in tools such as Helm or Gomplate. The main differentiator here is that the {% and %} delimiters are used to prevent conflicts with standard Go templating tools such as Helm, which use {{ and }} for this.

The available functions during templating are the set offered by the Sprig library (similar to Helm, but different from Gomplate), excluding the following functions, for security reasons or because their non-idempotent nature would cause reconciliation problems:

  • genPrivateKey
  • genCA
  • genSelfSignedCert
  • genSignedCert
  • htpasswd
  • getHostByName
  • Random functions
  • Date/time functionality
  • Environment variable functions (for security reasons)

Templating context

The context available in the templating engine contains the following items:

  • .Secrets: Map with all the secrets fetched from vault. Key = secret name, Value = secret value
  • .Vault: Contains misc info about the Vault setup
    • .Vault.Address: configured address of the Vault instance
    • .Vault.Path: path of the Vault secret that was fetched
  • .Namespace: Namespace where the custom resource instance was deployed.
  • .Labels: access to the labels of the custom resource instance
  • .Annotations: access to the annotations of the custom resource instance

Examples

An example of a URI formatting secret:

apiVersion: ricoberger.de/v1alpha1
kind: VaultSecret
metadata:
  name: kvv1-example-vaultsecret
  annotations:
    redisdb: "0"
spec:
  keys:
    - foo
    - bar
  path: kvv1/example-vaultsecret
  templates:
    fooUri: "https://user:{% .Secrets.foo %}@{% .Namespace %}.somesite.tld/api"
    barUri: "redis://{% .Secrets.bar %}@redis/{% .Annotations.redisdb %}"
  type: Opaque

The resulting secret will look like:

apiVersion: v1
data:
  fooUri: aHR0cHM6Ly91c2VyOmZvb0BuYW1lc3BhY2UuLnNvbWVzaXRlLnRsZC9hcGkK
  barUri: cmVkaXM6Ly9iYXJAcmVkaXMvMAo=
kind: Secret
metadata:
  labels:
    created-by: vault-secrets-operator
  name: kvv1-example-vaultsecret
type: Opaque

This is a more advanced example for a secret that can be used by HelmOperator as valuesFrom[].secretKeyRef:

apiVersion: ricoberger.de/v1alpha1
kind: VaultSecret
metadata:
  name: kvv1-example-vaultsecret
spec:
  keys:
    - foo
    - bar
    - baz
  path: kvv1/example-vaultsecret
  templates:
    values.yaml: |-
      secrets:
      {%- range $k, $v := .Secrets %}
        {% $k %}: {% $v | quote -%}
      {% end %}
  type: Opaque

This will loop over all secrets fetched from Vault, and set the values.yaml key to a string like this:

secrets:
  foo: "foovalue"
  bar: "barvalue"
  baz: "bazvalue"

Notes on templating

  • All secrets data is converted to string before being passed to the templating engine, so using binary data will not work well, or at least be unpredictable.

PKI Engine

You can generate certificates using the PKI Engine like so:

apiVersion: ricoberger.de/v1alpha1
kind: VaultSecret
metadata:
  name: test-pki
spec:
  path: pki
  secretEngine: pki
  role: example-dot-com
  engineOptions:
    common_name: www.my-website.com
    ttl: "5d"
  type: Opaque

You can pass any of the parameters supported by the PKI engine in engineOptions. A list is available here: https://www.vaultproject.io/api-docs/secret/pki#parameters-15

This will generate the following secret:

apiVersion: v1
kind: Secret
metadata:
  name: test-pki
data:
  certificate: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tLi4u
  expiration: MTY0OTc2OTIwMg==
  issuing_ca: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tLi4u
  private_key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLS4uLg==
  private_key_type: cnNh
  serial_number: MDA6MDA6MDA6MDA6MDA6MDA6MDA6MDA6MDA6MDA6MDA6MDA6MDA6MDA6MDA6MDA6MDA6MDA6MDA6MDA6MDA=
type: Opaque

Like with the secret engine, you can template the resulting Kubernetes secret:

apiVersion: ricoberger.de/v1alpha1
kind: VaultSecret
metadata:
  name: test-pki
spec:
  path: pki
  secretEngine: pki
  role: example-dot-com
  engineOptions:
    common_name: www.my-website.com
  templates:
    tls.crt: "{% .Secrets.certificate %}"
    tls.key: "{% .Secrets.private_key %}"
    ca.crt: "{% .Secrets.issuing_ca %}"
  type: Opaque

The following fields are available:

  • certificate
  • expiration
  • issuing_ca
  • private_key
  • private_key_type
  • serial_number

Certificate Renewal

Certificates are renewed before expiration. You can set how long before expiration the renewal should happen via the VAULT_PKI_RENEW environment variable. The default is 1 hour.

Using specific Vault Role for secrets

It is possible to omit the VAULT_KUBERNETES_ROLE (the vault.kubernetesRole value in the Helm chart) and instead specify the Vault Role in the CR. This allows you to use different Vault Roles within one Vault Secrets Operator instance.

The Vault Role is set via the vaultRole property in the VaultSecret CR:

apiVersion: ricoberger.de/v1alpha1
kind: VaultSecret
metadata:
  name: kvv1-example-vaultsecret
spec:
  vaultRole: my-custom-vault-role
  path: kvv1/example-vaultsecret
  type: Opaque

Note: This option is only available for the kubernetes auth method and all roles must be added to the auth method before they are used by the operator.

Using Vault Namespaces

Vault Namespaces is a set of features within Vault Enterprise that allows Vault environments to support Secure Multi-tenancy (or SMT) within a single Vault infrastructure.

The Vault Namespace, which should be used for the authentication of the operator against Vault can be specified via the VAULT_NAMESPACE environment variable. In the Helm chart this value can be provided as follows:

environmentVars:
  - name: VAULT_NAMESPACE
    value: "my/root/ns"

The operator also supports nested Namespaces. When the VAULT_NAMESPACE is set, it is also possible to specify a namespace via the vaultNamespace field in the VaultSecret CR:

apiVersion: ricoberger.de/v1alpha1
kind: VaultSecret
metadata:
  name: kvv1-example-vaultsecret
spec:
  vaultNamespace: team1
  path: kvv1/example-vaultsecret
  type: Opaque

The Vault Namespace used to get the secret in the above example will be my/root/ns/team1.

The operator can also be restricted to only reconcile secrets where the spec.vaultNamespace field is the same as the VAULT_NAMESPACE environment variable. For this the VAULT_RESTRICT_NAMESPACE environment variable must be set to true. When this feature is enabled the operator cannot be used with nested namespaces.

Propagating labels

The operator will propagate all labels found on the VaultSecret to the actual secret. So if a given label is needed on the resulting secret it can be added like in the following example:

apiVersion: ricoberger.de/v1alpha1
kind: VaultSecret
metadata:
  name: example-vaultsecret
  labels:
    my-custom-label: my-custom-label-value
spec:
  path: path/to/example-vaultsecret
  type: Opaque

This would result in the following secret:

apiVersion: v1
data:
  ...
kind: Secret
metadata:
  labels:
    created-by: vault-secrets-operator
    my-custom-label: my-custom-label-value
  name: example-vaultsecret
type: Opaque

Development

After modifying the *_types.go file always run the following command to update the generated code for that resource type:

make generate

The above Makefile target will invoke the controller-gen utility to update the api/v1alpha1/zz_generated.deepcopy.go file, ensuring that our API's Go type definitions implement the runtime.Object interface that all Kind types must implement.

Once the API is defined with spec/status fields and CRD validation markers, the CRD manifests can be generated and updated with the following command:

make manifests

This Makefile target will invoke controller-gen to generate the CRD manifest at charts/vault-secrets-operator/crds/ricoberger.de_vaultsecrets.yaml.

Deploy the CRD and run the operator locally with the default Kubernetes config file present at $HOME/.kube/config:

export VAULT_ADDRESS=
export VAULT_AUTH_METHOD=token
export VAULT_TOKEN=
export VAULT_TOKEN_LEASE_DURATION=86400
export VAULT_RECONCILIATION_TIME=180

make run

vault-secrets-operator's People

Contributors

aluxima, billimek, chumper, dependabot[bot], ecojan, ejsuncy, ilger, ilyas28, jaredallard, jcotineau, kampe, manning-ncsa, margrs, moertel, mtougeron, mvaalexp, nicoche, nikhiljha, pale-whale, patrickkoss, pbchekin, pluggi, ricoberger, rxbn, ryanfernandes09, slm0n87, tehlers320, terev, yanyixing, zarmbinski

vault-secrets-operator's Issues

Kubernetes auth method fails to login with code 400: missing client token

I'm using vault-secrets-operator version 1.15.0 with EKS v1.20

I configured everything according to the documentation, however it still fails with

{"level":"info","ts":1630501402.0604184,"logger":"vault","msg":"Reconciliation is enabled.","ReconciliationTime":0}                                                                                                                                      
{"level":"error","ts":1630501402.1436677,"msg":"Could not create API client for Vault","error":"Error making API request.

URL: PUT https://myvault.server/v1/auth/kubernetes/login
Code: 400. Errors:
* missing client token",
"stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132\nmain.main\n\t/workspace/main.go:56\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:203"}

I think the operator should print its configuration upon start. All I could see is that the env variables were configured properly. I only changed authMethod: kubernetes and configured everything on the Vault side. Also, Vault and EKS can communicate (I validated it using curl).

Any idea what else this can be?

Applications error

Hi.
I get errors and pod restarts. Number of replicas: 1, authMethod: kubernetes, reconciliationTime: 0. The application restarts constantly, but secrets are created normally.
E1126 03:10:39.646091 1 leaderelection.go:367] Failed to update lock: Put "https://10.57.0.1:443/api/v1/namespaces/vault/configmaps/vaultsecretsoperator.ricoberger.de": context deadline exceeded
I1126 03:10:39.646152 1 leaderelection.go:283] failed to renew lease vault/vaultsecretsoperator.ricoberger.de: timed out waiting for the condition
{"level":"error","ts":1637896239.6461787,"logger":"setup","msg":"problem running manager","error":"leader election lost"}

truststore","error":"invalid secret data",

Hi there, I want to use VSO in our environment. The previous version is working fine but the newer version (1.15.1) is giving the following error:

"level":"error","ts":1634234176.9586139,"logger":"controllers.VaultSecret","msg":"Could not get secret from vault","vaultsecret":"pz-pz-nint111-qa/truststore","error":"invalid secret data","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132\ngithub.com/ricoberger/vault-secrets-operator/controllers.(*VaultSecretReconciler).Reconcile\n\t/workspace/controllers/vaultsecret_controller.go:117\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:263\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:235\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.1\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:198\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.UntilWithContext\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:99"}

Support Nested Secret Values

I am looking to use the vault-secrets-operator to support secrets in flux.

Secrets with nesting don't seem to work.

{
  "values.yaml": {
    "postgres": {
      "superuserPassword": "secret",
      "replicationPassword": "secret"
    }
  }
}

In vault the secret looks like this;

======= Data =======
Key            Value
---            -----
values.yaml    map[postgres:map[replicationPassword:secret superuserPassword:secret]]

Unable to create Service Monitor object if all namespaces are watched

If no specific namespaces are defined, the value for the "watchNamespaces" === "":

namespace, err := k8sutil.GetWatchNamespace()

...somewhat later...

if namespace == "" {
	mgrOpts.Namespace = namespace
	log.Info("Watch all namespaces")
} else {

This namespace variable eventually gets used to create its service monitor:

// CreateServiceMonitors will automatically create the prometheus-operator ServiceMonitor resources
// necessary to configure Prometheus to scrape metrics from this operator.
services := []*v1.Service{service}
_, err = metrics.CreateServiceMonitors(cfg, namespace, services)

In our example, namespace equals an empty string and this is causing the following error:

an empty namespace may not be set during creation

{"level":"info","ts":1588909453.9949079,"logger":"cmd","msg":"Registering Components."}
{"level":"info","ts":1588909453.995184,"logger":"controller-runtime.controller","msg":"Starting EventSource","controller":
"vaultsecret-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1588909453.9953754,"logger":"controller-runtime.controller","msg":"Starting EventSource","controller"
:"vaultsecret-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1588909455.9308386,"logger":"metrics","msg":"Metrics Service object created","Service.Name":"vault-se
crets-operator-metrics","Service.Namespace":"vault"}
{"level":"error","ts":1588909456.8837671,"logger":"cmd","msg":"Could not create ServiceMonitor object","error":"an empty n
amespace may not be set during creation","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/home/runner/go/pkg/m
od/github.com/go-logr/[email protected]/zapr.go:128\nmain.main\n\tvault-secrets-operator/cmd/manager/main.go:196\nruntime.main\n
\t/opt/hostedtoolcache/go/1.13.10/x64/src/runtime/proc.go:203"}

Expected behavior:

For the service monitor to be created in the namespace of the operator itself.

Ideal behavior:

Do not let the Go application code create these Kubernetes resources and let end-users manage those themselves. Only expose the metrics port on the Go application. The way the operator is built right now, we're unable to add any additional labels/config/annotations to the service monitor object itself, making it unusable in non-standard setups of Prometheus.

imagePullSecrets for GCR - gcp secrets engine

Hi all,

I was looking at #54 to create imagePullSecrets, and that looks like it might work, but the secret that I am trying to access is not a "kv" type. The credential comes from the gcp secrets engine. So, as my goal is to get a secret (imagePullSecrets) to access GCR, would it be better to try to hack at this code to use the GCP secrets engine in Vault, or to hack at something else like the vault agent to create a kubernetes secret?

Errors: namespace not authorized

Hello, I recently upgraded my vault-secrets-operator to version 1.11.0 to test the new vaultRole field on the VaultSecret CR.
I get the below error by doing so.
2021-01-05T17:20:50.449Z INFO controllers.VaultSecret Create client to get secret from Vault {"vaultsecret": "mynamespace/testing-secret", "vaultRole": "myrole"}
2021-01-05T17:20:54.197Z ERROR controller Reconciler error {"reconcilerGroup": "ricoberger.de", "reconcilerKind": "VaultSecret", "controller": "vaultsecret", "name": "testing-secret", "namespace": "mynamespace", "error": "Error making API request.\n\nURL: PUT https://vault-k8s.example.com:8200/v1/auth/test-eu/login\nCode: 500. Errors:\n\n* namespace not authorized"}

note:

  1. My vault-secrets-operator configuration looks as below:

     vault:
       address: "https://vault-k8s.example.com:8200"
       authMethod: kubernetes
       tokenPath: ""
       kubernetesPath: auth/test-eu
       kubernetesRole: ""
       reconciliationTime: 0
       namespaces: ""

I kept the kubernetesRole value empty as per the updated documentation here https://github.com/ricoberger/vault-secrets-operator#using-specific-vault-role-for-secrets to skip the usage of the shared client.

  2. The Vault role I am testing, myrole, is properly bound to the namespace mynamespace where I am trying to deploy my CR and expecting the k8s secret to turn up.

Kubernetes auth fails after a period of time passes

We've noticed that the operator starts failing with permission denied (403) errors after a period of time passes when using the kubernetes auth mechanism. Restarting the pod fixes the issue; however, is there a setting that we can change somewhere so that we don't need to do this?

{"level":"error","ts":1616687328.4176793,"logger":"controller-runtime.controller","msg":"Reconciler error","controller":"vaultsecret-controller","request":"monitoring/mysql-exporter","error":"Error making API request.\n\nURL: GET https://*removed*/v1/sys/internal/ui/mounts/web-eu-production/mysql-exporter-runtimedb\nCode: 403. Errors:\n\n* permission denied","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/home/runner/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:192\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:171\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/home/runner/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/home/runner/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/home/runner/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88"}

Using vault-secrets-operator with custom CA

Hello!
I have Vault hosted with an SSL certificate issued by a custom CA. How can I add the CA certificate to the container? Right now vault-secrets-operator throws the following error:
2020-12-30T08:31:26.942Z ERROR Could not create API client for Vault {"error": "Put https://hostname: x509: certificate signed by unknown authority"}

Howto configure custom CA

I want to configure vault-secrets-operator to use my custom CA when connecting to Vault but without using mTLS (so no client cert and key). I've followed instructions in #28 , created a secret with the ca file and corresponding ENV mapping.

Secret:

---
apiVersion: v1
kind: Secret
metadata:
  name: ca-pem-vault-secrets-operator
  namespace: ops
data:
  ca-pem: ...

ENV mapping:

            - name: VAULT_CACERT
              valueFrom:
                secretKeyRef:
                  name: ca-pem-vault-secrets-operator
                  key: ca-pem

When I try to deploy this I'm getting the following error:

{"level":"error","ts":1588665097.7746422,"logger":"cmd","msg":"Could not create API client for Vault","error":"error encountered setting up default configuration: Error loading CA File: open -----BEGIN CERTIFICATE-----<cert data here>-----END CERTIFICATE-----\n: no such file or directory","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/home/runner/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\nmain.main\n\tvault-secrets-operator/cmd/manager/main.go:88\nruntime.main\n\t/opt/hostedtoolcache/go/1.13.8/x64/src/runtime/proc.go:203"}

So it looks like VAULT_CACERT should point to a file (see Error loading CA File: open).

When I do a volume mount and set the VAULT_CACERT value to file with the CA the error message is:

{"level":"error","ts":1588665658.6766431,"logger":"cmd","msg":"Could not create API client for Vault","error":"no certs found in root CA file","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/home/runner/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\nmain.main\n\tvault-secrets-operator/cmd/manager/main.go:88\nruntime.main\n\t/opt/hostedtoolcache/go/1.13.8/x64/src/runtime/proc.go:203"}

Any ideas what I'm missing here?
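
For reference, a sketch of the volume-mount wiring that the file-based VAULT_CACERT expects, assuming the ca-pem key of the secret above holds a base64-encoded PEM bundle (the mount path is arbitrary; the second error usually means the mounted file did not contain a valid PEM certificate):

```yaml
# Deployment snippet — sketch only, names match the secret above.
volumes:
  - name: vault-ca
    secret:
      secretName: ca-pem-vault-secrets-operator
containers:
  - name: vault-secrets-operator
    volumeMounts:
      - name: vault-ca
        mountPath: /etc/vault-ca
        readOnly: true
    env:
      - name: VAULT_CACERT
        value: /etc/vault-ca/ca-pem   # a file path, not the certificate contents
```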

K8s auth method issue

Hi @ricoberger,

I am trying to setup the vault operator with the following command

helm3 upgrade --install vault-secrets-operator ricoberger/vault-secrets-operator --set vault.address=https://vault-staging.tools.domain --set vault.authMethod=kubernetes

It gives the following error

k logs vault-secrets-operator-5f5769c44f-mlwnc
{"level":"info","ts":1605861477.757762,"logger":"cmd","msg":"Version: 1.9.0"}
{"level":"info","ts":1605861477.7577815,"logger":"cmd","msg":"Branch: HEAD"}
{"level":"info","ts":1605861477.7577853,"logger":"cmd","msg":"Revision: f7a50a8cb9161282970b142448cff7cbe2f8dea4"}
{"level":"info","ts":1605861477.7577894,"logger":"cmd","msg":"Go Version: go1.13.15"}
{"level":"info","ts":1605861477.757793,"logger":"cmd","msg":"Go OS/Arch: linux/amd64"}
{"level":"info","ts":1605861477.757796,"logger":"cmd","msg":"Version of operator-sdk: v0.18.0"}
{"level":"info","ts":1605861477.7577991,"logger":"cmd","msg":"Build User: runner"}
{"level":"info","ts":1605861477.7578025,"logger":"cmd","msg":"Build Date: 2020-11-14@15:10:24"}
{"level":"info","ts":1605861477.7578192,"logger":"vault","msg":"Reconciliation is enabled.","ReconciliationTime":0}
{"level":"error","ts":1605861477.8411367,"logger":"cmd","msg":"Could not create API client for Vault","error":"Error making API request.\n\nURL: PUT https://vault-staging.tools.domain/v1/auth/kubernetes/login\nCode: 400. Errors:\n\n* missing client token","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/home/runner/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\nmain.main\n\tvault-secrets-operator/cmd/manager/main.go:85\nruntime.main\n\t/opt/hostedtoolcache/go/1.13.15/x64/src/runtime/proc.go:203"}

Why is it asking for a token if I have set up the K8s auth method by following the steps from https://github.com/ricoberger/vault-secrets-operator#kubernetes-auth-method?

Kevin

[Feature request] Multiple types of secrets

Hello, I stumbled upon this repo while looking for solutions for Vault secret syncing. So far this looks like the classiest way of doing it; the only question I have is whether there are any plans in the near future to support different types of secrets (files, TLS, docker artifactory).

Support for multiple vault namespaces

I do not see where to set the vault namespace that secrets are pulled from. Would it be possible to add vault_namespace to spec and use this in calls to execute client.SetNamespace?

I have a use case where I have split my secrets across multiple vault namespaces to group them. I would like to use a single role to access these secrets but need a way to set the correct vault namespace for each secret.

Using the k8s auth method seeing issues around auth

Hello!

I'm attempting to use the k8s auth method and am seeing strange behavior, based on the TTL it seems, and I'm wondering what's happening. I'm manually setting my TTL in the deployment of the operator as such:

            - name: VAULT_TOKEN_LEASE_DURATION
              value: "7200"
            - name: VAULT_CACERT
              value: "/etc/vault-secrets-operator/ca.pem"
            - name: VAULT_AUTH_METHOD
              value: "kubernetes"

However when the operator pod starts, it errors and I get

{"level":"error","ts":1600131426.127547,"logger":"cmd","msg":"Could not create API client for Vault","error":"missing ttl for vault token","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/home/runner/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\nmain.main\n\tvault-secrets-operator/cmd/manager/main.go:85\nruntime.main\n\t/opt/hostedtoolcache/go/1.13.12/x64/src/runtime/ 

Any idea what's going on here?

Claim iss is invalid

I'm trying to deploy vault-secrets-operator in my environment, but I have some issues.

I see the following message in the vault-secrets-operator pod:

{"level":"error","ts":1620239568.733615,"msg":"Could not create API client for Vault","error":"Error making API request.\n\nURL: PUT http://vault.vault.svc.cluster.local.:8200/v1/auth/kubernetes/login\nCode: 500. Errors:\n\n* claim \"iss\" is invalid","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132\nmain.main\n\t/workspace/main.go:56\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:203"}

Similar message in the audit log of Vault:

{
  "time": "2021-05-05T18:21:00.261045593Z",
  "type": "request",
  "auth": {
    "token_type": "default"
  },
  "request": {
    "id": "5bb0b779-217f-af39-bfc2-d1c3da5e50d9",
    "operation": "update",
    "mount_type": "kubernetes",
    "namespace": {
      "id": "root"
    },
    "path": "auth/kubernetes/login",
    "data": {
      "jwt": "hmac-sha256:3ac6b5ce56ae139a3b8088b652683320040065d80e2245e2073dbe220673e172",
      "role": "hmac-sha256:ad9b9879a7db7d76ce1b9ed21e953dcecb53ce64fefddbc4636d2c20a900f7ed"
    },
    "remote_address": "10.42.0.168"
  }
}
{
  "time": "2021-05-05T18:21:00.261954708Z",
  "type": "response",
  "auth": {
    "token_type": "default"
  },
  "request": {
    "id": "5bb0b779-217f-af39-bfc2-d1c3da5e50d9",
    "operation": "update",
    "mount_type": "kubernetes",
    "namespace": {
      "id": "root"
    },
    "path": "auth/kubernetes/login",
    "data": {
      "jwt": "hmac-sha256:3ac6b5ce56ae139a3b8088b652683320040065d80e2245e2073dbe220673e172",
      "role": "hmac-sha256:ad9b9879a7db7d76ce1b9ed21e953dcecb53ce64fefddbc4636d2c20a900f7ed"
    },
    "remote_address": "10.42.0.168"
  },
  "response": {
    "mount_type": "kubernetes"
  },
  "error": "claim \"iss\" is invalid"
}

I'm using Helm chart version 1.14.3 with the following custom values:

    replicaCount: 1
    deploymentStrategy:
      type: Recreate
    vault:
      address: "http://vault.vault"
      authMethod: kubernetes
      kubernetesPath: auth/kubernetes
      kubernetesRole: vault-secrets-operator

Here is my policy:

path "secrets/data/k3s/*" {
  capabilities = ["read"]
}

I have tried with a curl command and it works:

{
  "request_id": "6a9d35f9-6722-15e7-ff34-e8c7e5081274",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 0,
  "data": null,
  "wrap_info": null,
  "warnings": null,
  "auth": {
    "client_token": "<redacted>",
    "accessor": "<redacted>",
    "policies": [
      "default",
      "vault-secrets-operator"
    ],
    "token_policies": [
      "default",
      "vault-secrets-operator"
    ],
    "metadata": {
      "role": "vault-secrets-operator",
      "service_account_name": "vault-secrets-operator",
      "service_account_namespace": "kube-system",
      "service_account_secret_name": "vault-secrets-operator-token-sl5cx",
      "service_account_uid": "<redacted>"
    },
    "lease_duration": 7200,
    "renewable": true,
    "entity_id": "<redacted>",
    "token_type": "service",
    "orphan": true
  }
}

Any idea why?

Issues applying the helm chart Role in namespace

In my current deployment scenario I am restricted to a cluster in which I am an admin of just one namespace. Current versions are:

kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3", GitCommit:"c92036820499fedefec0f847e2054d824aea6cd1", GitTreeState:"clean", BuildDate:"2021-10-27T18:34:20Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.11", GitCommit:"27522a29febbcc4badac257763044d0d90c11abd", GitTreeState:"clean", BuildDate:"2021-09-15T19:16:25Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.22) and server (1.20) exceeds the supported minor version skew of +/-1

When I try to apply the Role, I receive an authorization error for the following rules:

{APIGroups:[""], Resources:["leases"], Verbs:["create" "delete" "get" "list" "patch" "update" "watch"]}
{APIGroups:["coordination.k8s.io"], Resources:["configmaps"], Verbs:["create" "delete" "get" "list" "patch" "update" "watch"]}

I arrived at the conclusion that the cluster is trying to authorize the target with API groups that don't match the resources, e.g. configmaps in "coordination.k8s.io".

I found the following problem lines:

- apiGroups:
- coordination.k8s.io
- ""
resources:
- configmaps
- leases
verbs:
- create
- delete
- get
- list
- patch
- update
- watch

Here two API groups are used at the same time.

After templating the chart, separating the API role descriptions and applying, everything seems to work as intended.

Unable to create ServiceMonitor

vault-secrets-operator is failing to create ServiceMonitor resources with the message:

{"level":"info","ts":1588604343.4207065,"logger":"controller-runtime.controller","msg":"Starting EventSource","controller":"vaultsecret-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1588604345.485237,"logger":"metrics","msg":"Metrics Service object updated","Service.Name":"vault-secrets-operator-metrics","Service.Namespace":"ops"}
{"level":"error","ts":1588604346.4875667,"logger":"cmd","msg":"Could not create ServiceMonitor object","error":"an empty namespace may not be set during creation","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/home/runner/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\nmain.main\n\tvault-secrets-operator/cmd/manager/main.go:196\nruntime.main\n\t/opt/hostedtoolcache/go/1.13.8/x64/src/runtime/proc.go:203"}
{"level":"info","ts":1588604346.4876773,"logger":"cmd","msg":"Starting the Cmd."}
{"level":"info","ts":1588604346.4881828,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}

The corresponding services were created without problem:

k get svc | grep vault-secret
vault-secrets-operator                    ClusterIP      10.58.6.27     <none>                              8080/TCP,8383/TCP,8686/TCP                                                         17h
vault-secrets-operator-metrics            ClusterIP      10.58.4.119    <none>                              8383/TCP,8686/TCP                                                                  17h

I'm running Prometheus v2.17.2 in the same namespace.

K8s version

Server Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.9-gke.24", GitCommit:"39e41a8d6b7221b901a95d3af358dea6994b4a40", GitTreeState:"clean", BuildDate:"2020-02-29T01:24:35Z", GoVersion:"go1.12.12b4", Compiler:"gc", Platform:"linux/amd64"}

Created secrets missing ownerReferences metadata

Since upgrading to vault-secrets-operator 1.10.0 from 1.9.0, vault-secrets-operator is no longer setting the ownerReferences metadata on the secrets it creates.

We use Argo CD for GitOps on Kubernetes clusters, and it relies on this metadata to know that the Secret is derived from the VaultSecret and is therefore an expected resource. Without this metadata, the secrets all show up as stray resources that need pruning.

(Thank you very much for maintaining this software. We use it heavily!)

Any plans to support Vault KV Secrets Engine - Version 2?

With version 2 of the kv secrets engine, Vault can store multiple versions of the same secret. The controller could then reference a specific version via the CRD spec, which would provide a means to reflect the fact that the secret value has been modified as a git commit.

Any thoughts on this?

Any plans to make the kustomize templates more comprehensive?

I noticed you included a kustomize config section in your repo, which is great, but it's missing a few things, such as health checks, env variables, the vault-secrets-operator clusterrole, etc. Of course any user could add those, but it would be very helpful if the templates contained everything that's needed for the operator to work, so it is easier to create overlays for them without having to compare against what's in the Helm chart.

Overall great work, thank you!

Feature Request: Managed sub-paths of a secret

Hi 👋

In one of my use cases (ArgoCD), I would like vault-secrets-operator to manage only one of the keys in an existing Secret object. Currently, the Operator reconciles the entire Secret to only contain the keys specified in its own spec path.

It would be neat if we can specify in our VaultSecret spec to only reconcile the keys it knows and leave existing keys in the existing Secret.

Example existing Secret:

kind: Secret
apiVersion: v1
data:
  token: ++++++++

With my VaultSecret spec:

apiVersion: ricoberger.de/v1alpha1
kind: VaultSecret
metadata:
  name: gitlab-secret
spec:
  reconcileStrategy: Merge
  keys:
    - username
    - password
  path: kv/gitlab
  type: Opaque

With reconcileStrategy set to Merge:

The operator would only reconcile the fields that the VaultSecret's spec file specifies, allowing other systems to cohabit in that Secret resource.

kind: Secret
apiVersion: v1
data:
  token: ++++++++
  username: +++++++
  password: +++++++

The reconcileStrategy field

By default, we could let the strategy be Replace or something similarly named to describe its current behavior of a fully managed secret that gets completely replaced by the VaultSecret's keys.

While Merge or something similar could be smarter about only managing its own known keys.

Support for approle auth method for Vault

It would be nice to support approle auth method for Vault:
https://www.vaultproject.io/docs/auth/approle

It will require adding an additional authMethod approle. Also the following settings are needed:

  • VAULT_ROLE_ID
  • VAULT_SECRET_ID
  • VAULT_APP_ROLE_PATH
  • VAULT_TOKEN_MAX_TTL - a Vault token returned by the approle auth method has a maximum TTL (usually 30 days), so we can renew the issued token only during the specified time frame. After that we need to re-authenticate with VAULT_ROLE_ID and VAULT_SECRET_ID and receive a new token.

@ricoberger if you are OK with the feature I can make a PR for that.

Allow labels to be added to the resulting secret

I am currently working on setting up ArgoCD with the vault-secret-operator.

ArgoCD supports that credentials for git repository access can be declared beforehand:
https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup/#repository-credentials

Example:

apiVersion: v1
kind: Secret
metadata:
  name: first-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  url: https://github.com/argoproj/private-repo

ArgoCD relies on the labels to determine that a secret belongs to an ArgoCD installation.
So I would like to request that (in the easiest implementation) we can add labels to the secret as well.

For my use case it should be sufficient to add labels as an additional field in the spec:

apiVersion: ricoberger.de/v1alpha1
kind: VaultSecret
metadata:
  name: example-secret
spec:
  keys:
    - url
  path: kvv1/example-vaultsecret
  type: Opaque
  labels:
    argocd.argoproj.io/secret-type: repository

Which would result in a secret like this:

apiVersion: v1
data:
  url: YmFyCg==
kind: Secret
metadata:
  labels:
    created-by: vault-secrets-operator
    argocd.argoproj.io/secret-type: repository
  name: example-secret
type: Opaque

I am not sure yet if this affects ArgoCD, because the secrets are stored in data instead of stringData, but that should not be a concern here.

If needed I can try contributing this myself. I would expect it to be a rather simple change.

cannot renew token with k8s auth

I'm seeing this in the log:

{"level":"info","ts":1636700411.6041362,"logger":"vault","msg":"Reconciliation is enabled.","ReconciliationTime":0}
{"level":"info","ts":1636700411.9992003,"logger":"vault","msg":"Renew Vault token"}
{"level":"info","ts":1636700416.65092,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":":8080"}
{"level":"info","ts":1636700416.6526558,"logger":"setup","msg":"starting manager"}
I1112 07:00:16.653619       1 leaderelection.go:248] attempting to acquire leader lease vault-secrets-operator/vaultsecretsoperator.ricoberger.de...
{"level":"info","ts":1636700416.6541686,"msg":"starting metrics server","path":"/metrics"}
I1112 07:00:34.414285       1 leaderelection.go:258] successfully acquired lease vault-secrets-operator/vaultsecretsoperator.ricoberger.de
{"level":"info","ts":1636700434.4186609,"logger":"controller.vaultsecret","msg":"Starting EventSource","reconciler group":"ricoberger.de","reconciler kind":"VaultSecret","source":"kind source: /, Kind="}
{"level":"info","ts":1636700434.418883,"logger":"controller.vaultsecret","msg":"Starting EventSource","reconciler group":"ricoberger.de","reconciler kind":"VaultSecret","source":"kind source: /, Kind="}
{"level":"info","ts":1636700434.418923,"logger":"controller.vaultsecret","msg":"Starting Controller","reconciler group":"ricoberger.de","reconciler kind":"VaultSecret"}
I1112 07:01:02.924014       1 trace.go:205] Trace[1450029557]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:167 (12-Nov-2021 07:00:34.421) (total time: 28502ms):
Trace[1450029557]: ---"Objects listed" 28463ms (07:01:02.884)
Trace[1450029557]: [28.502624826s] [28.502624826s] END
{"level":"info","ts":1636700463.1399543,"logger":"controller.vaultsecret","msg":"Starting workers","reconciler group":"ricoberger.de","reconciler kind":"VaultSecret","worker count":1}
{"level":"info","ts":1636700463.140331,"logger":"controllers.VaultSecret","msg":"Use shared client to get secret from Vault","vaultsecret":"vault-secrets-operator/kvv1-example-vaultsecret"}
{"level":"info","ts":1636700463.1403728,"logger":"vault","msg":"Read secret apps/dev/service01/dbcred"}
{"level":"info","ts":1636700463.3723762,"logger":"controllers.VaultSecret","msg":"Creating a new Secret","vaultsecret":"vault-secrets-operator/kvv1-example-vaultsecret","Secret.Namespace":"vault-secrets-operator","Secret.Name":"kvv1-example-vaultsecret"}
{"level":"info","ts":1636700463.8067257,"logger":"controllers.VaultSecret","msg":"Use shared client to get secret from Vault","vaultsecret":"vault-secrets-operator/kvv1-example-vaultsecret"}
{"level":"info","ts":1636700463.8068776,"logger":"vault","msg":"Read secret apps/dev/service01/dbcred"}
{"level":"info","ts":1636700463.8776162,"logger":"controllers.VaultSecret","msg":"Updating a Secret","vaultsecret":"vault-secrets-operator/kvv1-example-vaultsecret","Secret.Namespace":"vault-secrets-operator","Secret.Name":"kvv1-example-vaultsecret"}
{"level":"info","ts":1636700562.1115842,"logger":"vault","msg":"Renew Vault token"}
{"level":"info","ts":1636700712.1466122,"logger":"vault","msg":"Renew Vault token"}
{"level":"error","ts":1636700712.171864,"logger":"vault","msg":"Could not renew token","error":"Error making API request.\n\nURL: PUT https://<vault-url>/v1/auth/token/renew-self\nCode: 403. Errors:\n\n* permission denied"}
{"level":"info","ts":1636700742.173507,"logger":"vault","msg":"Renew Vault token"}
{"level":"error","ts":1636700742.228602,"logger":"vault","msg":"Could not renew token","error":"Error making API request.\n\nURL: PUT https://<vault-url>/v1/auth/token/renew-self\nCode: 403. Errors:\n\n* permission denied"}
vault-secrets-operator-fb8754dff-7vns4 vault-secrets-operator {"level":"info","ts":1636700772.229616,"logger":"vault","msg":"Renew Vault token"}
{"level":"error","ts":1636700772.2763042,"logger":"vault","msg":"Could not renew token","error":"Error making API request.\n\nURL: PUT https://<vault-url>/v1/auth/token/renew-self\nCode: 403. Errors:\n\n* permission denied"}

the policy looks like this

##### List every secrets under "apps/" #####
path "apps/*" {
  capabilities = ["list", "read"]
}

# Allow tokens to look up their own properties
path "auth/token/lookup-self" {
    capabilities = ["read"]
}

# Allow tokens to renew themselves
path "auth/token/renew-self" {
    capabilities = ["update"]
}

i have this in helm values

vault:
    address: "https://<vault-url>"
    authMethod: kubernetes
    kubernetesPath: auth/ocp
    kubernetesRole: vault-secrets-operator
    reconciliationTime: 0
    namespaces: ""

Inappropriate role binding created by helm chart when rbac.namespaced is true

We're trying to use vault-secrets-operator from within k8s namespaces.

Reading the values file indicated that we should be setting rbac.namespaced to true. See here

Unfortunately, this causes the vault-secrets-operator chart to create a RoleBinding resource rather than a ClusterRoleBinding resource. See here.

This is insufficient to allow vault to create a TokenReview resource, as TokenReview resources are not namespaced. Replacing the RoleBinding with a ClusterRoleBinding resolves the issue.
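
For reference, the TokenReview path typically needs a cluster-scoped binding to the built-in system:auth-delegator ClusterRole; a sketch of the replacement resource (names are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: vault-secrets-operator-tokenreview   # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator   # built-in role allowing TokenReview creation
subjects:
  - kind: ServiceAccount
    name: vault-secrets-operator
    namespace: vault-secrets-operator   # adjust to the operator's namespace
```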

Could not create API client for Vault login: x509: certificate signed by unknown authority - can we support tls-skip-verify?

First of all, great tool! I was looking for a way to create a Vault secret that the HashiCorp Vault injector retrieves as a Kubernetes secret, because one of our apps does not let you source a file to set an environment variable; the only other way was to supply a Kubernetes secret, which I did not have the means to create until I found your tool!

I've deployed as per your readme but ran into the below:

{"level":"info","ts":1616150507.893286,"logger":"vault","msg":"Reconciliation is enabled.","ReconciliationTime":0}
{"level":"error","ts":1616150512.1441982,"msg":"Could not create API client for Vault","error":"Put https://vault.com.:443/v1/auth/kubernetes/my-cluster/login: x509: certificate signed by unknown authority","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132\nmain.main\n\t/workspace/main.go:56\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:203"}

and indeed it is because our Vault dev instance has a self-signed certificate.
It would be great if you could do something like what the HashiCorp Vault secret injector does by letting you set tls-skip-verify:

        vault.hashicorp.com/tls-skip-verify: "true"

Of course in prod we will have a valid cert, but I also noticed there is no way to set the CA cert for the vault-secrets-operator to use, so if we can support that as well, that would be awesome!
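
As a possible workaround until first-class chart options exist: the Vault Go client reads its TLS settings from the environment, so setting extra env vars on the operator deployment may already work. A sketch (the CA path is illustrative, and VAULT_SKIP_VERIFY should only ever be used in dev):

```yaml
# Extra env vars on the operator container — sketch only.
env:
  - name: VAULT_SKIP_VERIFY   # dev only: disables TLS certificate verification
    value: "true"
  # or, with a CA bundle mounted into the container:
  - name: VAULT_CACERT
    value: /etc/vault-secrets-operator/ca.pem
```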

Automatic Reconciliation

If a secret is changed in Vault, the change should be automatically reflected in the k8s secret. Reconciliation should be tunable (e.g. every 600s).

It looks like this would be straightforward to implement for Vault K/V Version 2 by using the version. For Vault K/V Version 1 you might consider adding a hash of the secret to an annotation in the k8s secret to facilitate reconciliation.

  • For K/V Version 2:

    • If a specific version is defined in the VaultSecret then automatic reconciliation should be ignored
    • If no version is defined in the VaultSecret then the most recent version should be reflected in the k8s secret
  • For K/V Version 1:

    • If the secret changes in vault then it should be reflected in the k8s secret.

Also, it would be awesome if k8s secrets were deleted when/if the vault secrets were deleted and the vault secret value was verified in the k8s secret to prevent manual edits.

Vault API token autorefresh?

Hello,
Does the operator automatically refresh the Vault API token? What happens if the operator is down when the token expires?

Is it possible to use GCP service accounts (using workload identity) to auth against vault?

This is a question and not an issue.

I'm running vault-secrets-operator in GCP on multiple GKE clusters and vault only on one. With workload identity one can map kubernetes service accounts to a GCP service account.

This would reduce the complexity of managing cluster hosts and service account tokens with Vault as an out-of-band task. If we could instead auth using a GCP service account, there would be no need to wrangle cluster credentials.

Let me know if this is something that's already possible to do?

Thank You.

Access based on vault policies

Hello,

Is there a way we can have access control with the kubernetes auth method similar to what the vault agent injector has? For instance, it fetches the secret only if you (or your service account) are authorized to read the secrets from a given path in Vault.

I see the option for using vaultRole, but I'm looking for a way to restrict the use of a certain role, say "admin", which can get any secret from Vault.

If that is not possible, is there a way we can have multiple instances of this operator configured with different roles/policies ?

Can Vault Roles be passed from the VaultSecret object itself

Different roles usually have different policies attached. If there were an option to pass the role from the VaultSecret object itself, it would be more flexible, I believe.

Right now, we pass only one role per cluster, or more accurately per installation of vault-secrets-operator, and the attached policy has access to, say, kv/dev/*, which means any path below that can be mounted as a secret in that cluster and used by pods.

If we could pass a role per secret, each with its own attached policy, it would be easy to mount only those secrets which are required.

Token renewal failure time bomb 💣

I'm using token-based auth and am wondering whether there's a design issue with how tokens are renewed.

Whatever the value of VAULT_TOKEN_LEASE_DURATION, with each failure to renew the token, the initial "buffer" (of the recommended 24h) will get smaller and smaller until it hits a lower limit where there's only one chance (two if you're lucky) for renewal until the token expires:

func RenewToken() {
	for {
		log.Info("Renew Vault token")
		_, err := client.Auth().Token().RenewSelf(tokenLeaseDuration)
		if err != nil {
			log.Error(err, "Could not renew token")
		}
		time.Sleep(time.Duration(float64(tokenLeaseDuration)*0.5) * time.Second)
	}
}

Example for a token with initial TTL of 24h and VAULT_TOKEN_LEASE_DURATION set to 6h:

  • Sep 23 00:00:00 -- ttl=24h, renew fails
  • Sep 23 03:00:00 -- ttl=21h, renew fails
  • Sep 23 06:00:00 -- ttl=18h, renew fails
  • Sep 23 09:00:00 -- ttl=15h, renew fails
  • Sep 23 12:00:00 -- ttl=12h, renew fails
  • Sep 23 15:00:00 -- ttl=9h, renew fails
  • Sep 23 18:00:00 -- ttl=6h, renew fails
  • Sep 23 21:00:00 -- ttl=3h, renew fails
  • Sep 24 00:00:00 -- ttl=0h, renew succeeds --> new TTL is 6h
  • Sep 24 03:00:00 -- ttl=3h, renew fails
  • Sep 24 06:00:00 -- ttl=0h (let's hope we have some headroom here)

So in a sense, the initial validity of the token buys us an error budget. In this case, it would be 9 or 10 failures until the token could expire (depending on whether we run into a race condition if the token is "almost expired").

While it is an unlikely event, especially in an HA Vault setup, it's definitely possible (and more likely as time progresses), and it's actually pretty tragic that the sleep is that long even in case of an error. 😅 I'm wondering whether you'd be open to adding a VAULT_TOKEN_RENEWAL_INTERVAL variable that makes this configurable, or to reducing the sleep in case of errors.

Cosmetic changes to kustomize

Just wanted to suggest a couple of changes that may make things easier for you and more legible (and usable) for others; feel free to close and disregard if they don't feel natural to you.

Here you have this:

# Adds namespace to all resources.
namespace: vault-secrets-operator

bases:
- ../crd
- ../rbac
- ../manager

The bases field is deprecated.

Also, I am guessing your bases are more static than your version, so it may make more sense to set the image version to latest in the base and then do this in the kustomization file (or do both):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# Adds namespace to all resources.
namespace: vault-secrets-operator

resources:
- ../crd
- ../rbac
- ../manager

images:
  - name: ricoberger/vault-secrets-operator
    newTag: 1.11.0

Again, feel free to close and disregard if these changes don't suit you, as they're mostly cosmetic. It's still amazing that you are maintaining the kustomize deployment so I can track it; this will work either way:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: kube-system

resources:
  - https://github.com/ricoberger/vault-secrets-operator/config/default?ref=1.11.0

patchesStrategicMerge:
- overlays/deploy.yaml

images:
  - name: ricoberger/vault-secrets-operator
    newTag: 1.11.0

KV2 secrets on a nested path do not work

If the KV2 secrets engine is enabled under a nested path, the operator does not work for these secrets. Steps to reproduce:

vault secrets enable -path=kv2/on/nested/path -version=2 kv

vault kv put kv2/on/nested/path/example-vaultsecret foo=bar

cat <<EOF | kubectl apply -f -
apiVersion: ricoberger.de/v1alpha1
kind: VaultSecret
metadata:
  name: kv2-example-vaultsecret-1
spec:
  # Incorrect path.
  path: kv2/on/nested/path/example-vaultsecret
  secretEngine: kv2
  type: Opaque
EOF

cat <<EOF | kubectl apply -f -
apiVersion: ricoberger.de/v1alpha1
kind: VaultSecret
metadata:
  name: kv2-example-vaultsecret-2
spec:
  # Correct path with the data part.
  path: kv2/on/nested/path/data/example-vaultsecret
  secretEngine: kv2
  type: Opaque
EOF

The first example contains an invalid path; the operator would look for the secret at kv2/data/on/nested/path/example-vaultsecret, which is also not valid.

The second example contains the correct path with the data part in it, but the operator looks for the secret under the following path: kv2/data/on/nested/path/data/example-vaultsecret.

The operator only looks at the second part of the path. If this part is not data, data will be added at the second position. This behavior is incorrect if the secret engine is enabled under a nested path.

Possible solutions:

  • Remove the path check at L169. This would break existing secrets if they rely on this behavior.
-	if secretEngine == "kv2" {
-		pathParts := strings.Split(path, "/")
-		if len(pathParts) < 2 {
-			return nil, ErrInvalidPath
-		}
-
-		if pathParts[1] != "data" {
-			path = pathParts[0] + "/data/" + strings.Join(pathParts[1:], "/")
-		}
-	}
  • Check if the slice of the path parts contains a data part. If not, we add it at the second position. If yes, we do not modify the path. This would break CRs which contain data in their path (e.g. kv2/secret/on/nested/path/data).
	if secretEngine == "kv2" {
		pathParts := strings.Split(path, "/")
		if len(pathParts) < 2 {
			return nil, ErrInvalidPath
		}

-		if pathParts[1] != "data" {
+		if !contains("data", pathParts) {
			path = pathParts[0] + "/data/" + strings.Join(pathParts[1:], "/")
		}
	}
  • Add a new field secretEnginePath to the CRD. Then the resulting path would be secretEnginePath + path.

Cannot renew token after a period of time using kubernetes auth

I'm experiencing a scenario using kubernetes auth where (for reasons I'm yet to understand) token renewal fails after a number of successful attempts.

The logs are below (Vault URL omitted):

vault-secrets-operator 2021-05-12T05:26:45.586Z    INFO controller-runtime.manager starting metrics server {"path": "/metrics"}
vault-secrets-operator 2021-05-12T05:29:14.238Z    INFO vault Renew Vault token
vault-secrets-operator 2021-05-12T05:31:44.264Z    INFO vault Renew Vault token
vault-secrets-operator 2021-05-12T05:34:14.285Z    INFO vault Renew Vault token
vault-secrets-operator 2021-05-12T05:36:44.320Z    INFO vault Renew Vault token
vault-secrets-operator 2021-05-12T05:39:14.350Z    INFO vault Renew Vault token
vault-secrets-operator 2021-05-12T05:41:44.369Z    INFO vault Renew Vault token
vault-secrets-operator 2021-05-12T05:41:44.384Z    ERROR vault Could not renew token {"error": "Error making API request.\n\nURL: PUT <vault_url>:8200/v1/auth/token/renew-self\nCode: 403. Errors:\n\n* permission denied"}
vault-secrets-operator github.com/go-logr/zapr.(*zapLogger).Error
vault-secrets-operator     /go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128
vault-secrets-operator github.com/ricoberger/vault-secrets-operator/vault.(*Client).RenewToken
vault-secrets-operator     /workspace/vault/client.go:69
vault-secrets-operator 2021-05-12T05:41:44.385Z    INFO vault Token information: (*api.Secret)(nil) {"error": "Error making API request.\n\nURL: GET <vault_url>/v1/auth/token/lookup-self\nCode: 403. Errors:\n\n* permission denied"}

I have the following environment variables set:

| Variable | Value |
| --- | --- |
| VAULT_ADDRESS | <vault_url> |
| VAULT_APP_ROLE_PATH | auth/approle |
| VAULT_AUTH_METHOD | kubernetes |
| VAULT_KUBERNETES_PATH | auth/kubernetes |
| VAULT_KUBERNETES_ROLE | internal-app |
| VAULT_RECONCILIATION_TIME | 0 |
| VAULT_TOKEN_LEASE_DURATION | 300 |
| VAULT_TOKEN_MAX_TTL | 600 |
| VAULT_TOKEN_PATH | |
| VAULT_TOKEN_RENEWAL_INTERVAL | 150 |
| VAULT_TOKEN_RENEWAL_RETRY_INTERVAL | 30 |
| WATCH_NAMESPACE | |

Lease & Max TTL are currently set to the Vault defaults. As my renewal runs every 2.5 minutes, I see successful token renewals for 15 minutes, then they start failing.

Restarting the pod resolves the issue, but I cannot rely on this as it is a manual process.

Any assistance on this would be appreciated!

Some proposals I see to resolve the issue:

  • Have an environment variable which would force fresh token acquisition after a period of time
  • Have the pod health endpoint report unhealthy in this scenario to trigger a pod restart

Pod crashlooping when an apiservice is unavailable

This was actually an issue with the cert-manager webhook API service, but when it occurs it sends the secrets operator into a crash loop. Helm added some additional error handling in a recent commit to handle the case when an API is unavailable. It might be preferable to apply a similar approach in the Vault Secrets Operator to avoid the pod crashing.

Related to:

cert-manager/cert-manager#2273
helm/helm#6361

Log

{"level":"info","ts":1574070345.8219388,"logger":"cmd","msg":"Could not generate and serve custom resource metrics","error":"discovering resource information failed for VaultSecret in ricoberger.de/v1alpha1: unable to retrieve the complete list of server APIs: webhook.cert-manager.io/v1beta1: the server is currently unable to handle the request"}
{"level":"info","ts":1574070349.1773176,"logger":"metrics","msg":"Metrics Service object updated","Service.Name":"vault-secrets-operator-metrics","Service.Namespace":"kube-system"}
{"level":"info","ts":1574070352.479337,"logger":"cmd","msg":"Could not create ServiceMonitor object","error":"unable to retrieve the complete list of server APIs: webhook.cert-manager.io/v1beta1: the server is currently unable to handle the request"}

Creating image pull secrets?

I'm just interested if anyone's got a good way to use vault-secrets-operator to generate Kubernetes-compatible Image Pull Secrets?

I realise that I could do this (I think) by creating a VaultSecret object referencing a path in Vault which stores a raw .dockerconfigjson style key (as shown in a previous issue), but really I'd like the keys in the Vault secret to be stored under more useful names (e.g. docker-server, docker-password, etc., like you can do with kubectl create secret docker-registry).

Maybe I can just implement this with the templates proposed by @bartmeuris, but I feel like this is something that people must have hit before and found a way to solve?

Not able to start with a non-renewable token

Trying to run my project locally with a non-renewable token.

vault-secrets-operator:
  environmentVars:
    - name: VAULT_TOKEN
      value: "VaultRootToken123"
    - name: VAULT_TOKEN_LEASE_DURATION
      value: "86400"

Having the following error:

{"level":"info","ts":1617181954.3451219,"logger":"vault","msg":"Reconciliation is enabled.","ReconciliationTime":0}
{"level":"info","ts":1617181954.345637,"logger":"vault","msg":"Renew Vault token"}
{"level":"error","ts":1617181954.3811648,"logger":"vault","msg":"Could not renew token","error":"Error making API request.\n\nURL: PUT http://host.docker.internal:8200/v1/auth/token/renew-self\nCode: 400. Errors:\n\n* lease is not renewable","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132\ngithub.com/ricoberger/vault-secrets-operator/vault.(*Client).RenewToken\n\t/workspace/vault/client.go:69"}
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0xbebddd]

goroutine 12 [running]:
github.com/ricoberger/vault-secrets-operator/vault.(*Client).RenewToken(0xc0002e7110)
        /workspace/vault/client.go:72 +0x26d
created by main.main
        /workspace/main.go:60 +0x304

Can I somehow disable token renewal and work with a fixed token in the dev environment?

Error with K8s Auth Method

Hi @ricoberger

I am getting the following error in the operator:

  1. Errors:\n\n* 1 error occurred:\n\t* error running lookahead function for mfa: could not parse UID from claims\n\n","stacktrace":"runtime.main\n\t/usr/local/go/src/runtime/proc.go:225"

I saw this hashicorp/vault-plugin-auth-kubernetes#107

It says something related to the JWT token. I am configuring it via Terraform.

Kevin

No status in VaultSecret resource

@ricoberger Would it be possible to add a status section to the VaultSecret resource when described? The status field could show one of the following:

  • Pending
  • Success
  • Failed
  • Unknown

Failed would mean that the k8s secret failed to be created.

This can be added to the VaultSecretStatus struct.

kubectl -n NAMESPACE describe vaultsecret.ricoberger.de testing
Name:         testing
Namespace:    NAMESPACE
Labels:       app.kubernetes.io/instance=foo
-->Status: Success
Annotations:  <none>
API Version:  ricoberger.de/v1alpha1
Kind:         VaultSecret
Metadata:
  Creation Timestamp:  2021-02-08T22:10:28Z
  Generation:          2
  Resource Version:    1234567890
  Self Link:           /apis/ricoberger.de/v1alpha1/namespaces/NAMESPACE/vaultsecrets/testing
  UID:                 1234556890-123908-1293081203-12312098
Spec:
  Path:             kv/foo/bar/test
  Type:             Opaque
  Vault Namespace:  VAULT_NAMESPACE
Events:             <none>

I can work on submitting a PR for this.
