pomerium-helm's Introduction

Pomerium builds secure, clientless connections to internal web apps and services without a corporate VPN.

Pomerium is:

  • Easier because you don’t have to maintain a client or software.
  • Faster because it’s deployed directly where your apps and services are. No more expensive data backhauling.
  • Safer because every single action is verified for trusted identity, device, and context.

It’s not a VPN alternative – it’s the trusted, foolproof way to protect your business.

Docs

For comprehensive docs and tutorials, see our documentation.

Contributing

See Contributing for information on how you can contribute to Pomerium.

pomerium-helm's People

Contributors

adamantal, adinhodovic, adriannemo, bazzuka, bonifaido, bshifter, calebdoxsey, davidrosson, desimone, highwatersdev, hugocortes, mikhailadvani, mohsen0, psychomelet, renovate[bot], rguichard, robertgates55, rspier, sherifkayad, shreyaskarnik, spencergilbert, tarokkk, tdorsey, tgomas, thomasehling, tobiaskohlbau, travisgroth, vad1mo, victornoel, wasaga

pomerium-helm's Issues

Supporting insecure_server

Hi

When SSL offloading is done at the load balancer, the Helm deployment should not configure the service with a TLS certificate.

Are you open to the idea of adding a Values.insecure_server: true/false option to support that in the Helm chart?
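A minimal sketch of what such a toggle might look like in a values file (insecure_server is the proposed key, not an existing chart value):

# proposed value, sketch only
insecure_server: true   # when true, do not configure the service with a TLS certificate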

Enabling the operator causes YAML to JSON errors

Setting Helm values to

operator:
  enabled: true

causes the following error

Error: YAML parse error on pomerium/templates/serviceaccount.yaml: error converting YAML to JSON: yaml: line 13: could not find expected ':'

allow setting pre-existing idp.serviceAccount

@desimone this prevents the use of a service account when rolling my own secret containing the service account.

As a workaround, I can just set authenticate.idp.serviceAccount to true. I'm not sure how to fix this, so we may need to leave it as is and simply document that, if needed, it can be set to true when using existingSecret.

Originally posted by @victornoel in #5

can't install chart with generateTLS "nil" in Secret.data.tls.crt

# helm config
config.generateTLS: false
config.insecure: true
forwardAuth.enabled: true

[unknown object type "nil" in Secret.data.tls.crt, unknown object type "nil" in Secret.data.tls.key

# Source: pomerium/templates/tls-secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: pomerium-tls
  labels:
    app.kubernetes.io/name: pomerium
    helm.sh/chart: pomerium-8.4.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/instance: pomerium
type: kubernetes.io/tls
data:
  tls.crt: 
  tls.key:

I guess the error comes from not setting generateTLS.

How would I do this if I want to provide the cert via cert-manager, given that this is the cert for the public ingress?
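One possible approach, sketched under the assumption that the chart can consume a pre-existing kubernetes.io/tls secret named pomerium-tls: let cert-manager create that secret instead of the chart.

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: pomerium-tls
spec:
  secretName: pomerium-tls        # matches the secret name rendered by the chart above
  dnsNames:
    - "*.corp.example.com"        # placeholder domain
  issuerRef:
    name: letsencrypt-prod        # hypothetical ClusterIssuer
    kind: ClusterIssuer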

Improved generated SSL cert handling

Follow up from #51

  • Improve if-not-exists logic to not show false diffs
  • Declarative regeneration
  • cert-manager integration
  • garbage collection of generated certs

Helm lint --strict fails

Hello guys,

So, I was playing with Pomerium using the helm charts. I changed the configuration to fit my deployment needs, but when I tried to deploy the chart it failed.

I have a helm lint --strict stage in my pipeline which failed the deployment.

➜  pomerium helm lint --strict pomerium                                                                  
==> Linting pomerium
[ERROR] templates/: render error in "pomerium/templates/tls-secrets.yaml": template: pomerium/templates/_helpers.tpl:5:31: executing "pomerium.name" at <.Values.nameOverride>: map has no entry for key "nameOverride"

Error: 1 chart(s) linted, 1 chart(s) failed

➜  pomerium helm lint --strict pomerium                                                                  
==> Linting pomerium
[ERROR] templates/: render error in "pomerium/templates/secret.yaml": template: pomerium/templates/secret.yaml:1:18: executing "pomerium/templates/secret.yaml" at <.Values.config.exist...>: map has no entry for key "existingSecret"

Error: 1 chart(s) linted, 1 chart(s) failed

Some of these configuration keys are not present in the default values.yaml file, nor are they present in the README.
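As a workaround until the chart declares defaults for these keys, a minimal values sketch that declares them explicitly and can be passed to helm lint --strict with -f (whether this is the intended fix is an assumption; the proper fix is presumably adding the defaults to the chart's values.yaml):

nameOverride: ""
config:
  existingSecret: ""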

(feat): Do not set authorize tls host when forwardAuth is enabled

The authorize TLS host ( https://github.com/pomerium/pomerium-helm/blob/master/charts/pomerium/templates/ingress.yaml#L19 ) should not be set in the ingress when forwardAuth.enabled=true.

spec:
  tls:
    - secretName: secret.pomerium.com
      hosts:
        - "authorize.pomerium.com" # ingress does not even contain authorize.pomerium.com host
        - "authenticate.pomerium.com"

Problem:
If you are using external-dns, it will create an authorize CNAME record, which is a pointless operation.
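For reference, a sketch of the tls block that would be expected when forwardAuth.enabled=true, with the authorize host omitted:

spec:
  tls:
    - secretName: secret.pomerium.com
      hosts:
        - "authenticate.pomerium.com"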

IDP scopes written in config as string instead of list

What happened?

The operator was failing to start up due to being unable to read the generated config file.

Error: failed to set base config from /etc/pomerium/config.yaml: failed to unmarshal configuration: yaml: unmarshal errors:
  line 10: cannot unmarshal !!str `openid,...` into []string

What did you expect to happen?

The generated config file would be valid and the operator would start up cleanly

Steps to reproduce

  1. Set multiple IDP scopes in your values.yaml

    authenticate:
      idp:
        scopes: "openid,offline_access,profile,email,groups"
    
  2. Deploy / upgrade using this values file

  3. The generated config will include an invalid idp_scopes and the operator will fail

What's your environment like?

  • Chart version: 12.1.0
  • Container image: v0.10.0
  • Kubernetes version: v1.17.9
  • Cloud provider: aws
  • Other details: IDP - OKTA

What are your chart values?

Only posting the relevant authenticate section:

authenticate:
  existingTLSSecret: pomerium-authenticate-tls
  idp:
    provider: okta
    url: "https://***"
    scopes: "openid,offline_access,profile,email,groups"

What are the contents of your config secret?

Showing pomerium-base, as the operator did not render the main config.
Only showing the relevant section:

idp_provider: okta
idp_scopes: openid,offline_access,profile,email,groups
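For reference, a sketch of what the rendered option would need to look like for it to unmarshal into []string (a YAML sequence rather than a comma-joined string):

idp_provider: okta
idp_scopes:
  - openid
  - offline_access
  - profile
  - email
  - groups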

What did you see in the logs?

Operator logs:

Error: failed to set base config from /etc/pomerium/config.yaml: failed to unmarshal configuration: yaml: unmarshal errors:
  line 10: cannot unmarshal !!str `openid,...` into []string

Installation help

Hello,
I'm a student at Carnegie Mellon University. I'm trying to use Pomerium to model zero-trust behavior, but I am finding it difficult to follow the instructions. Is there a set of instructions I can use to install and run Pomerium on my Ubuntu machine?

ingress.secret should be optional if there is already one set up in k8s

Hi,
I already have cert-manager set up in my k8s cluster, so all I need to do for Pomerium is specify the TLS secret name. I don't have to specify the key and cert in this scenario.

By the way, does Pomerium reload or restart if the certificate is refreshed? The certificates from Let's Encrypt need to be renewed every 3 months.
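A minimal sketch of the desired usage, reusing the chart's existing ingress.secret.name value and leaving the cert/key fields out (treating them as optional is the request here, not current behavior):

ingress:
  secret:
    name: pomerium-tls    # secret already created and renewed by cert-manager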

improper indentation for resources

Replication steps:

helm install $HOME/pomerium-helm \
	--set service.type="NodePort" \
	--set ingress.secret.name="pomerium-tls" \
	--set ingress.secret.cert=$(base64 -i "$HOME/.acme.sh/*.corp.beyondperimeter.com_ecc/fullchain.cer") \
	--set ingress.secret.key=$(base64 -i "$HOME/.acme.sh/*.corp.beyondperimeter.com_ecc/*.corp.beyondperimeter.com.key") \
	--set config.policy="$(base64 -i docs/docs/examples/config/policy.example.yaml)" \
	--set authenticate.idp.provider="google" \
	--set authenticate.idp.clientID="XXXX" \
	--set authenticate.idp.clientSecret="XXXXX" \
	--set-string ingress.annotations."kubernetes\.io/ingress\.allow-http"=false \
	--set ingress.annotations."cloud\.google\.com/app-protocols"=\"{\"https\":\"HTTPS\"}\"

Error:

Error: YAML parse error on pomerium/templates/authenticate-deployment.yaml: error converting YAML to JSON: yaml: line 105: did not find expected key

Expected:

Working helm deployment

Environment: Google GKE

config.policy should support multi-line templates

What happened?

I defined the following in my values.yaml file:

config:
  policy: |
      {{- range .Values.services }}
        - from: https://{{ .from }}
          to: {{ .to }}
      {{ toYaml .policy | indent 4 }}
      {{- end }}

services contains a home-made structure that declares the various elements of the policy.

pomerium.config.dynamic is calling {{ tpl (toYaml .Values.config.policy) . | indent 2 }} and thus the resulting policy is:

policy:
  |
  - from: https://xxxxx
    to: yyyyy
    etc…

As you can see, the | stayed, I believe because of the toYaml.

What did you expect to happen?

That the content of config.policy is used directly as a template and as YAML.

I fixed the problem just by removing the toYaml call in the _helpers.tpl file, but I'm not sure it won't introduce regressions for other use cases :)
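For reference, a sketch of that change to the helper call quoted above, dropping the toYaml so the value is treated directly as a template and as YAML:

{{/* pomerium.config.dynamic (sketch): render the policy without toYaml */}}
policy:
{{ tpl .Values.config.policy . | indent 2 }}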

What did you see in the logs?

Pomerium failed to parse the policy because it is expecting an array.

Unable to change log_level for the proxy

Hey, I tried the following values:

config:
  proxy_log_level: info
proxy:
  config:
    proxy_log_level: info 

With no luck. How can we set the proxy log level?

Thanks for your help.
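One possible workaround, sketched under the assumption that Pomerium also reads proxy_log_level from a PROXY_LOG_LEVEL environment variable (that env var name is an assumption, not a documented chart value), would be to pass it through extraEnv:

extraEnv:
  PROXY_LOG_LEVEL: info   # assumption: Pomerium maps proxy_log_level to this env var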

Forwardauth Internal: route unknown

When using forwardauth with internal: true (forwardauth.internal=true), I get HTTP 500 responses from the nginx ingress controller.

> GET / HTTP/2
> Host: echo.tools.corp.com
> User-Agent: curl/7.64.1
> Accept: */*
> 
* Connection state changed (MAX_CONCURRENT_STREAMS == 128)!
< HTTP/2 500 
< date: Thu, 30 Apr 2020 22:00:53 GMT
< content-type: text/html
< content-length: 183
< strict-transport-security: max-age=15724800; includeSubDomains
< 
pomerium-proxy-56dbc8f54-gmqd9 pomerium {"level":"info","X-Forwarded-For":["78.128.88.100"],"X-Original-Method":["GET"],"X-Original-Url":["https://echo.tools.corp.com/"],"X-Real-Ip":["78.128.88.100"],"X-Sent-From":["nginx-ingress-controller"],"ip":"100.127.182.205","user_agent":"curl/7.64.1","req_id":"ceb76e06-05c1-6426-5c2b-bcfb6a17bab6","error":"Not Found: pomerium-proxy.pomerium.svc.cluster.local route unknown","time":"2020-04-30T22:02:30Z","message":"httputil: ErrorResponse"}
pomerium-proxy-56dbc8f54-gmqd9 pomerium {"level":"debug","X-Forwarded-For":["78.128.88.100"],"X-Original-Method":["GET"],"X-Original-Url":["https://echo.tools.corp.com/"],"X-Real-Ip":["78.128.88.100"],"X-Sent-From":["nginx-ingress-controller"],"ip":"100.127.182.205","user_agent":"curl/7.64.1","req_id":"ceb76e06-05c1-6426-5c2b-bcfb6a17bab6","duration":0.24965,"size":1428,"status":404,"method":"GET","service":"proxy","host":"pomerium-proxy.pomerium.svc.cluster.local","path":"/verify?uri=https://echo.tools.corp.com/","time":"2020-04-30T22:02:30Z","message":"http-request"}

Insecure is still using port 443 and https naming convention

#83 introduced insecure.

Unfortunately it is incomplete. For example, when used together with Traefik, Traefik guesses from port number 443 and the port name whether the service should be called plain or over TLS.

A setup with HTTP traffic over port 443 confuses Traefik, as it assumes from the port number and port name that the upstream service is secure, which it isn't.

If insecure=true, change the port from 443 to 80 and rename it from https to http.
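A sketch of what the rendered proxy Service could look like under this proposal when insecure=true (names and numbers follow the suggestion above, not the current chart output):

apiVersion: v1
kind: Service
metadata:
  name: pomerium-proxy
spec:
  ports:
    - name: http        # renamed from https
      port: 80          # changed from 443
      targetPort: http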

remove unused authorize service url

The authorize service url is not used in any template code that I can see; the templates use the internal url instead. Authorize no longer has an external url, so it probably makes sense to either remove it or use it in place of authorizeInternalUrl.

charts/pomerium/README.md
121:| `proxy.authorizeServiceUrl`         | The externally accessible url for the authorize service.                                                                                                                                                      | `https://{{authorize.name}}.{{config.rootDomain}}`                                    |

charts/pomerium/values.yaml
92:  authorizeServiceUrl: ""

charts/pomerium/templates/NOTES.txt
50:        --set proxy.authorizeServiceUrl="https://access.corp.pomerium.io"

Error shown in Helm output when environment variables are used for identity provider credentials

Overview

I use environment variables passed via extraEnv to set the identity provider (GitLab) credentials: IDP_CLIENT_ID and IDP_CLIENT_SECRET. The reason for using environment variables rather than authenticate.idp.{clientID, clientSecret} is that my credential values are stored in Vault, and are accessed by the Pods dynamically using the bank-vaults webhook mechanism:

extraEnv:
  IDP_CLIENT_ID: vault:ci/data/sandbox/pomerium#IDP_CLIENT_ID
  IDP_CLIENT_SECRET: vault:ci/data/sandbox/pomerium#IDP_CLIENT_SECRET

The functionality works fine, but the Helm chart outputs an error message after deploying the chart, implying that something is wrong, when there isn't actually anything wrong.

Mentioning my use of bank-vaults-style value references is only relevant for the purpose of explaining my need for using environment variables; this also applies to hard-coded values set in the extraEnv block.

Actual Behavior

  1. I deploy the Helm chart with identity provider credentials defined in environment variables.
  2. The Pods spin up successfully.
  3. Pomerium successfully provides Forward Auth functionality for my ingress.
  4. The helm install command exits cleanly, but outputs the following error message:
    NOTES:
    ##############################################################################
    ####        ERROR: You did not set a valid identity provider              ####
    ##############################################################################
    
    This deployment will be incomplete until you configure a valid identity provider.
    
    Currently supported providers:
    
        - Okta
        - Google
        - Azure Active Directory
        - OneLogin
    
    See the values.yaml file to see what values are required for each provider.
    
    If you are having trouble with the configuration of a provider please visit
    the official documentation:
    
        https://www.pomerium.io/docs/identity-providers.html
    

The only way to avoid this error is to define "dummy" authenticate.idp.{clientID, clientSecret} values, and rely on the hierarchical priority given to the real credential values set by environment variables:

authenticate:
  idp:
    provider: gitlab
    clientID: DEFINED_AS_ENV_VAR
    clientSecret: DEFINED_AS_ENV_VAR

extraEnv:
  IDP_CLIENT_ID: <vault specification key, or hardcoded credential value>
  IDP_CLIENT_SECRET: <vault specification key, or hardcoded credential value>

Expected Behavior

The Helm chart should allow me to use environment variables instead of values.yaml configuration keys for specifying identity provider credentials, without implying that there is an error in my configuration.

Steps to Reproduce the Problem

  1. Use the Helm chart to deploy a working implementation of Pomerium, defining the identity provider credentials via authenticate.idp.{clientID, clientSecret} in the values.yaml file:
    authenticate:
      idp:
        provider: gitlab
        clientID: aaaabbbbccccdddd1111222233334444
        clientSecret: ddddccccbbbbaaaa4444333322221111
    
  2. Notice that no error message is generated.
  3. Use the Helm chart to deploy the same implementation of Pomerium, but define the identity provider credentials via environment variables instead:
    authenticate:
      idp:
        provider: gitlab

    extraEnv:
      IDP_CLIENT_ID: aaaabbbbccccdddd1111222233334444
      IDP_CLIENT_SECRET: ddddccccbbbbaaaa4444333322221111

  4. Notice that an error message is displayed in the helm install output.

Specifications

  • Chart version: v10.0.1
  • Pomerium version: v0.9.2
  • K8s version: v1.15.11

Rectification

Modify "pomerium.providerOK" in _helpers.tpl to also check for identity provider credentials having been set via environment variables.
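A hedged sketch of the kind of check described, assuming pomerium.providerOK currently gates only on authenticate.idp.clientID and clientSecret (the chart's actual helper body is not reproduced here):

{{/* sketch: also accept credentials supplied via extraEnv */}}
{{- define "pomerium.providerOK" -}}
{{- $env := default dict .Values.extraEnv -}}
{{- if or (and .Values.authenticate.idp.clientID .Values.authenticate.idp.clientSecret)
          (and (hasKey $env "IDP_CLIENT_ID") (hasKey $env "IDP_CLIENT_SECRET")) -}}
true
{{- end -}}
{{- end -}}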

Add redis chart dependency and settings

discussed with @bjoernw

Rough requirements:

  • Work correctly in insecure mode
  • Use TLS under normal circumstances
  • Possibly generate TLS certs automatically
  • Use banzai redis upstream non-cluster, non-sentinel mode (we don't support that yet)
  • Only install redis chart if requested by the user explicitly

Allow injecting specific environment variables into only the 'authenticate' Pods

Overview

I use environment variables passed via the existing extraEnv capability to set the identity provider (GitLab) credentials: IDP_CLIENT_ID and IDP_CLIENT_SECRET. The reason for using environment variables rather than authenticate.idp.{clientID, clientSecret} is that my credential values are stored in Vault, and are accessed by the Pods dynamically using the bank-vaults webhook mechanism:

extraEnv:
  IDP_CLIENT_ID: vault:ci/data/sandbox/pomerium#IDP_CLIENT_ID
  IDP_CLIENT_SECRET: vault:ci/data/sandbox/pomerium#IDP_CLIENT_SECRET

This functionality currently works fine. However, the only way to set environment variables in Pods is to use this "global" extraEnv capability, which injects the defined environment variables into all components' Pods (authenticate, authorize, cache, proxy). Since the credentials are only needed by the authenticate Pods, it would be useful to be able to set these environment variables only for those Pods that require them, e.g.:

authenticate:
  deployment:
    extraEnv:
      IDP_CLIENT_ID: vault:ci/data/sandbox/pomerium#IDP_CLIENT_ID
      IDP_CLIENT_SECRET: vault:ci/data/sandbox/pomerium#IDP_CLIENT_SECRET

Extending my use of environment variables for credential injection, it would be nice if the Helm deployment could generate an optional Kubernetes ServiceAccount to be used by the authenticate deployment (only if RBAC is enabled). This would allow scoping access to Vault secrets via a policy/auth configuration bound to the generated ServiceAccount.

service annotations

Please modify your helm chart to allow distinct annotations for each service and deployment.

There are several use cases where being able to provide different annotations is useful, mainly having to do with secrets:

  • reloader
  • gloo gateway
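A hedged sketch of what such per-component values could look like (these keys are hypothetical, not current chart values):

proxy:
  service:
    annotations: {}            # e.g. annotations consumed by gloo gateway discovery
  deployment:
    annotations:
      reloader.stakater.com/auto: "true"   # example: stakater Reloader
authenticate:
  service:
    annotations: {}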

Chart doesn't support operator with existing config

I'm trying to use the chart with the following configuration:

config:
  existingConfig: "config"
operator:
  enabled: true

It seems there are 2 problems:

  1. The existing config is mounted into the worker (authorize/proxy/etc.) pods instead of the new one generated by the operator.
  2. A new empty config is not created during the chart installation.

policy and policyFile

It's not clear how adding a policy works, especially whether it should be base64 encoded (hint: it's not).

Failed to render chart: exit status 1: Error: template: pomerium/templates/ingress.yaml:23:23: executing "pomerium/templates/ingress.yaml" at <.Values.config.policy>: range can't iterate over cG9saWN5OgogIC0gZ

Also, policyFile is referenced in the docs but not implemented/used anywhere in the chart.
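Based on the hint above that the value is not base64-encoded, and on ingress.yaml ranging over it, a minimal sketch of config.policy as a plain YAML list of routes (route values are placeholders):

config:
  policy:
    - from: https://app.corp.example.com
      to: http://app.default.svc.cluster.local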

forceGenerateTLS does not work for an upgrade

When doing an upgrade, if I set the config.forceGenerateTLS value to true, I get the following error:

Error: UPGRADE FAILED: secrets "pomerium-authenticate-tls" already exists

I suppose a workaround is to delete the secret manually beforehand, but we should:

  • at least document it
  • at best provide a safer upgrade path

I'm not exactly clear how to do that though.

I think the problem is maybe that even when config.forceGenerateTLS is set to false, the resource should still be generated so that Helm continues to keep track of it? I'm not sure…

Erroneous Note

When I install pomerium with a preexisting secret, I should not see this:

NOTES:
##############################################################################
####        ERROR: You did not set a valid identity provider              ####
##############################################################################

This deployment will be incomplete until you configure a valid identity provider.

Currently supported providers:

    - Okta
    - Google
    - Azure Active Directory
    - OneLogin

See the values.yaml file to see what values are required for each provider.

If you are having trouble with the configuration of a provider please visit
the official documentation:

    https://www.pomerium.io/docs/identity-providers.html

Option to not expose the forwardAuth over ingress/for internal cluster use only

There should be an option to not expose forwardAuth over the ingress and only use the service (pomerium-proxy).

Background is:
pomerium/pomerium#549 (comment)
pomerium/pomerium#616 (comment)

If forwardAuth is exposed, the config is set to forward_auth_url: forwardauth.${rootDomain}, which means the service name (pomerium-proxy.svc) can't be used in forwardAuth mode. They need to match the config somehow.

Therefore it should be possible to set the forward_auth_url to pomerium-proxy.svc.
It is not possible to just set forwardAuth.nameOverride=pomerium-proxy.svc, because then it becomes impossible to issue Let's Encrypt certs for that domain if an ingress is created.

I think the best option would be to just not create an ingress and make forwardAuth only be used internally.

Setting SIGNING_KEY in the secret

Hello,
we use the SIGNING_KEY setting to enable the JWT token in the response header, in order to get additional user information that is not available in other headers.

The chart has no parameter for SIGNING_KEY, so we have to pass it through the extraEnv parameter, which is not secure since it is not encoded in our configuration file.

It would be great to add a proxy.signingKey parameter to the chart and store it in the secret, allowing it to be supplied through an existing secret and therefore not exposed in the configuration file.
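A sketch of the values-side addition being proposed (proxy.signingKey is the suggested key, not an existing one; the value would be the base64-encoded signing key):

proxy:
  signingKey: ""   # base64-encoded EC private key, to be stored by the chart in its secret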

'extraVolumes' value breaks Pomerium chart

Description:
The chart provides the ability to define additional volumes via the extraVolumes parameter, but if you set the parameter then Helm throws the following error:
Error: YAML parse error on pomerium/templates/authenticate-deployment.yaml: error converting YAML to JSON: yaml: line 75: mapping values are not allowed in this context

Steps to reproduce:

  1. Create a file test.yaml with the following content:
extraVolumes:
    - name: foo
      configMap:
        name: bar
        optional: false
  2. Execute command: helm install --dry-run -f test.yaml pomerium charts/pomerium/

helm message is misleading if ingress not set

If an ingress is not specified in values.yaml, the helm output displays a misleading error about an identity provider not being set rather than about the missing ingress. The output also does not include a --set ingress.enabled sample option, which would silently resolve the issue for anyone copying the output.

I think the fix here is just another else branch in NOTES.txt for when the ingress is not set correctly.

Cannot use dynamically generated configs anymore

Is your feature request related to a problem? Please describe.

Since helm chart v8.0.0, existingSecret and existingConfig have been merged to use only existingSecret.

We use the pomerium chart as a dependency of our own chart that:

  • generates the config dynamically using helm
  • stores the secrets used by pomerium (idp secrets, cookie secret, etc.) in git using sealed-secrets

This means that if we want to store our IdP secrets in git (which we want), we also must seal the config using sealed-secrets, which is contrary to the idea of generating the configuration dynamically.

Describe the solution you'd like

Re-separate secrets and config :)

Describe alternatives you've considered

Ideas that do not work

  • using sealed-secrets template to inject non-sealed keys into the secret

Ideas that may work:

Needs more documentation on behaviors for new users

I encountered some problems when using values.yaml with a configmap; I'll list them here.

I followed the doc with 2 steps:

  1. created config.yaml and added it to a configmap
  2. ran helm install with existingConfig set to that configmap

Now the problems:

sometimes the values.yaml config overwrites the configmap

  1. authenticate.idp settings end up in the deployment env and overwrite the config.yaml values, so we can't put the idp config inside config.yaml.
  2. there is no scopes support in authenticate.idp (I submitted a PR for it), while AWS Cognito needs idp.scopes to be set. With 1 in mind, you can't set it in the configmap either, because if IDP_* is found in the environment, the idp_scopes setting in the configmap is somehow ignored.
  3. the same applies to shared_secret and cookie_secret

sometimes the values.yaml config won't overwrite the configmap

  1. policy settings configured in values.yaml will not overwrite the configmap when existingConfig is set; they do nothing.
  2. but if we add the policy to config.yaml instead, the ingress route will not be configured.

tweaks that are essential but commented out... these really need documentation:

  1. I find nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" is essential for my k8s deployment on Aliyun Cloud (I have not tested others). Typically the cloud provider terminates HTTPS at the load balancer, so this needs to be enabled.
  2. nginx.ingress.kubernetes.io/proxy-buffer-size: "16k" is also essential for my deployment (I have not tested others). The Azure AD callback url is too large for the default size; if the setting is not enabled, the request just hangs. See the values sketch after this list.
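For reference, a sketch of how these two annotations can be set through the chart's ingress.annotations value:

ingress:
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/proxy-buffer-size: "16k"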

Provide the possibility to store sensitive headers in secrets

Hello.

The policy allows setting custom headers via the set_request_headers section.
Headers can store sensitive data, such as a bearer token for the Kubernetes dashboard.
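For context, a sketch of the kind of policy entry involved, with the sensitive header inlined (all values are placeholders):

policy:
  - from: https://dashboard.corp.example.com
    to: https://kubernetes-dashboard.kube-system.svc.cluster.local
    set_request_headers:
      Authorization: "Bearer <service-account-token>"   # currently ends up in the ConfigMap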

The chart stores the policy in the configmap; a k8s secret looks like a safer option.

It would be great if we could store policy or just sensitive bits of a policy in a secret.

I can submit a PR if you don't mind.

Thanks in advance.

Simplify setting the basic auth header via values

We're using a homegrown deployment system with very limited templating capabilities (basically it can only substitute placeholders with values from secure storage during the Helm chart installation).
The username and password used in basic auth for the proxied applications are stored separately, and our deployment system doesn't support any dependencies between them. We could add new values with the base64 representation, but it's error-prone because we have a lot of k8s clusters with these applications.
The only option I've found is to use existingConfig, use your chart as a subchart, and generate the config with the base64 representation in our chart, but that potentially needs a lot of long-term maintenance (we need to reflect all the changes made to the config).
Maybe it's possible to add a new optional section to the policy values in the chart, like the one below, which would be converted to the set_request_headers format?

- from: https://app.example
  to: http://app.svc.cluster.local
  basicAuth:
    username: ""
    password: ""

I understand that it's a rather specific case but maybe it would be helpful for someone else.
I can submit a PR for this feature if you don't mind.

Authorize signing key missing

Chart: pomerium-8.5.1
AppVersion: 0.7.5

I'm using forwardauth with the nginx ingress controller and trying to have my backend service validate the JWT assertion after a request has been authenticated by pomerium. I am using a self-generated EC key that we deploy alongside pomerium, so I am sure the public key I am using is legit.

Looking at the authorize deployment, I don't see SIGNING_KEY set. If I edit the deployment and add my signing key then all is well as expected.

env:
  - name: SERVICES
    value: authorize
  - name: CERTIFICATE_FILE
    value: "/pomerium/cert.pem"
  - name: CERTIFICATE_KEY_FILE
    value: "/pomerium/privkey.pem"
  - name: CERTIFICATE_AUTHORITY_FILE
    value: "/pomerium/ca.pem"
{{- if .Values.config.insecure }}
  - name: INSECURE_SERVER
    value: "true"
  - name: GRPC_INSECURE
    value: "true"
{{- end }}
{{- range $name, $value := .Values.extraEnv }}
  - name: {{ $name }}
    value: {{ quote $value }}
{{- end }}
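A minimal sketch of the kind of addition described, wiring a signing key into the authorize deployment env (the .Values.proxy.signingKey value is hypothetical, echoing the proposal in the SIGNING_KEY issue above):

{{- if .Values.proxy.signingKey }}
  - name: SIGNING_KEY
    value: {{ .Values.proxy.signingKey | quote }}
{{- end }}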
