travisghansen / external-auth-server
easy auth for reverse proxies
License: MIT License
Traefik uses the Consul KV store to enable clustering/HA. It makes sense to have eas also support Consul. This adds simplicity and reduces overhead on a Kubernetes cluster.
Do you think it would be possible to integrate an app secured by eas as a SAML app into Google's G Suite?
Here's a start page for the Google SAML side:
https://support.google.com/a/answer/6087519?hl=en
I fiddled around (without knowing what SAML is) and managed to add my app icon to the Google application launcher menu. (But to be honest: my app is not an app to be offered on the Marketplace; otherwise, we would have done this in the first place. It would just be cool to have a working icon in the launcher.)
When I click it, Google sends a POST with form data to mydomain which my app/eas doesn't handle. Maybe they could? That would be extremely cool.
Here's a screenshot of the network tab:
As discussed offline, we would really like to use server-side tokens across multiple clusters and applications. Each cluster therefore has unique secrets used by the external auth server. Right now each protected application in each cluster has a completely different auth URL, which makes it quite difficult to manage at large scale.
I would like to have a human-readable, reusable auth URL (where only the base URL/domain is different).
Right now the auth url looks like this:
https://eas.${base_domain}/verify?config_token=${config_token_alias_encrypted_uri}
I would like to use the auth url like this:
https://eas.${base_domain}/verify?config_token_id=team_github_operators&config_token_store_id=primary
https://eas.${base_domain}/verify?config_token_id=team_github_customer
https://eas.${base_domain}/verify?config_token_id=team_github_developers
The config_token_store_id should be optional, with primary as the default value.
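A sketch of how such a lookup could behave server-side (the store layout and function name here are hypothetical illustrations of the proposal, not the actual eas internals):

```javascript
// Hypothetical resolver for the proposed query parameters:
//   /verify?config_token_id=...&config_token_store_id=...
function resolveConfigToken(stores, query) {
  // proposed behaviour: config_token_store_id is optional, defaulting to "primary"
  const storeId = query.config_token_store_id || "primary";
  const store = stores[storeId];
  if (!store) throw new Error(`unknown config_token_store_id: ${storeId}`);
  const token = store[query.config_token_id];
  if (!token) throw new Error(`unknown config_token_id: ${query.config_token_id}`);
  return token;
}

const stores = { primary: { team_github_operators: "encrypted-token-here" } };
resolveConfigToken(stores, { config_token_id: "team_github_operators" });
// -> "encrypted-token-here"
```

The nice property is that the URL stays human-readable and identical across clusters except for the base domain, while each cluster keeps its own secret store contents.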
Hi!
Is there a way to put the session expiry into a HTTP-Header?
I'm interested in evaluating eas for usage with Ambassador API Gateway via its AuthService construct. See below for the auth flow we are envisioning for our public users. We have another flow that we're planning to use for internal apps, but I think this would be a better place to start.
Clients include an X-Auth-Token header with API requests, and eas validates the token with Firebase. Please let me know where to begin and I'm happy to document the steps taken to contribute back to the project.
Hi Travis, it could totally be an issue on our side with some configuration, but perhaps you could validate our assumptions or spot a bug.
In EAS, we see a number of errors like this (wondering whether it is normal for the decoded token to be null; probably not):
{"service":"external-auth-server","level":"debug","message":"refresh_token \"ZwrKulkHValgKfaAl3NpsPUKu6GNoC-Zv8LE\""}
{"service":"external-auth-server","level":"debug","message":"refresh_token decoded null"}
Also, later on, we get this message:
{"message":"tokenSet not refreshed externally","level":"warn","service":"external-auth-server"}
And, finally:
{"code":"ETIMEDOUT","connect":true,"level":"error","service":"external-auth-server","message":"ETIMEDOUT","stack":"Error: ETIMEDOUT\n at Timeout._onTimeout (/home/eas/app/node_modules/request/request.js:849:19)\n at ontimeout (timers.js:436:11)\n at tryOnTimeout (timers.js:300:5)\n at listOnTimeout (timers.js:263:5)\n at Timer.processTimers (timers.js:223:10)"}
{"service":"external-auth-server","level":"info","message":"end verify pipeline with status: 401"}
So, our initial assumption is that we have intermittent connectivity issues to the identity server, so we get a timeout and then a 401 from EAS. But those messages with refresh tokens are troubling as well. Would you have any insight/ideas?
And could you confirm that the timeout is related to the OIDC provider? Based on this, and this line:
// TODO: better logic here to detect invalid_grant, etc
(external-auth-server/src/plugin/oauth/index.js, line 1105 in 14f0bc0)
I guess a bit of info is missing to figure out why the error is happening, even though I see you are already doing some specific error checks there.
And, looking through some issues with refresh_tokens that we could see in our identity server logs, I've found a potential solution where token usage is OneTimeOnly versus our current setting of ReUse.
Hope that's not too much info; maybe it's just a timeout issue and everything else is irrelevant.
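For what it's worth, a minimal sketch of what "better logic to detect invalid_grant" could mean: classify the refresh failure so a permanently dead grant (force re-authentication) is distinguished from a transient transport error like the ETIMEDOUT above. The error shapes here are illustrative, not the actual eas error objects:

```javascript
// Sketch: OAuth2 error responses carry an `error` field per RFC 6749 §5.2,
// while Node network failures carry a `code` such as ETIMEDOUT.
function classifyRefreshError(err) {
  if (err && err.error === "invalid_grant") {
    return "reauthenticate"; // grant revoked/expired: the session cannot be refreshed
  }
  if (err && (err.code === "ETIMEDOUT" || err.code === "ECONNRESET")) {
    return "retry"; // transient network issue: do not invalidate the session
  }
  return "unknown";
}

classifyRefreshError({ error: "invalid_grant" }); // "reauthenticate"
classifyRefreshError({ code: "ETIMEDOUT" });      // "retry"
```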
Hi there,
Such a nice project, something I was looking for for a long time.
I'm trying to handle basic auth in the Traefik reverse proxy and forward auth to LDAP/AD via your service.
Can you please give me some tips on how to configure all of this?
I want to use it in a single-Docker environment and dynamically forward auth via labels: https://docs.traefik.io/v2.0/middlewares/forwardauth/
The goal is to allow or deny access based on a valid LDAP user or group mapping.
Thank you!
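For reference, a forwardAuth setup in a single-Docker Traefik v2 environment might look roughly like this (container name, hostname, port, and the config token are placeholders; verify the label syntax against the Traefik docs linked above):

```yaml
# Sketch: docker-compose labels wiring a service through eas via forwardAuth
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.myapp.rule=Host(`app.example.com`)"
  - "traefik.http.routers.myapp.middlewares=eas-auth"
  - "traefik.http.middlewares.eas-auth.forwardauth.address=http://eas:8080/verify?config_token=CONFIG_TOKEN_HERE"
  - "traefik.http.middlewares.eas-auth.forwardauth.trustForwardHeader=true"
```

With an ldap-plugin config token, the allow/deny decision then happens inside eas rather than in Traefik.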
Hello,
I am using the oidc plugin and my configuration for NGINX is similar to:
location / {
auth_request /auth;
auth_request_set $saved_set_cookie $upstream_http_set_cookie;
auth_request_set $auth_redirect $upstream_http_location;
auth_request_set $auth_id_token $upstream_http_x_id_token;
auth_request_set $auth_userinfo $upstream_http_x_userinfo;
auth_request_set $auth_access_token $upstream_http_x_access_token;
error_page 401 = @error401;
# note headers will NOT show up if variable value is blank
proxy_set_header X-Id-Token $auth_id_token;
proxy_set_header X-Userinfo $auth_userinfo;
proxy_set_header X-Access-Token $auth_access_token;
proxy_pass https://httpbin.org;
}
location /auth {
internal;
# $scheme$request_method$host$request_uri"
# localhost:8000
proxy_set_header X-Forwarded-Host $http_host;
# GET
proxy_set_header X-Forwarded-Method $request_method;
# http
proxy_set_header X-Forwarded-Proto $scheme;
# /anything?foo=bar
proxy_set_header X-Forwarded-Uri $request_uri;
proxy_pass_request_body off;
proxy_set_header Content-Length "";
proxy_pass "http://127.0.0.1:8080/verify?redirect_http_code=401&config_token=token";
}
location @error401 {
if ($auth_redirect) {
add_header Set-Cookie $saved_set_cookie;
return 302 $auth_redirect;
}
}
}
But I have the same problems as #23.
Specifically, X-Forwarded-Uri is not present in the "verify request details" silly-level log, even though my configuration includes proxy_set_header X-Forwarded-Uri $request_uri;
Do you have any suggestions?
I really like the concept of eas and how easy it makes it to integrate security for the different k8s apps that we will run, so I'm trying to set it up.
I'm using GKE with Traefik as my ingress handler. I've been successful in setting up my apps, Traefik, and eas using basic auth as a first test. This setup works fine.
The final step I want to take is to add Keycloak to the setup, so I have also deployed Keycloak into my GKE cluster. I've configured a realm and a client and created a config_token.
When I now test the token I get a 503 error back from eas. Looking in the log I see this error:
code: "UNABLE_TO_VERIFY_LEAF_SIGNATURE"
level: "error"
message: "unable to verify the first certificate"
service: "external-auth-server"
stack: "Error: unable to verify the first certificate
at TLSSocket.onConnectSecure (_tls_wrap.js:1055:34)
at TLSSocket.emit (events.js:198:13)
at TLSSocket._finishInit (_tls_wrap.js:633:8)"
Traefik is configured with a TLS wildcard cert for the domain I'm using. The cert works fine in Firefox and Chrome (it's a cert issued by GoDaddy). The apps, eas, and Keycloak are all using a domain covered by the wildcard cert.
What could be going wrong here? Do I need to give a CA to eas? I don't see how to configure that.
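One thing worth checking: Node.js processes can be pointed at an extra CA bundle via the NODE_EXTRA_CA_CERTS environment variable, and the eas Helm chart appears to expose this as a nodeExtraCaCerts value (a sketch only; verify the key name against your chart version):

```yaml
# Sketch: values.yaml fragment trusting an extra CA inside the eas container.
# Node then picks the bundle up via NODE_EXTRA_CA_CERTS at startup.
nodeExtraCaCerts: |-
  -----BEGIN CERTIFICATE-----
  ...your CA or intermediate certificate...
  -----END CERTIFICATE-----
```

Note that UNABLE_TO_VERIFY_LEAF_SIGNATURE often indicates a missing intermediate certificate in the chain served by the endpoint eas is calling, so it may also be fixable by serving the full chain from Keycloak/Traefik.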
I'm trying to wrap my head around what the exact flow with eas would look like, once fully configured with something like GitHub or one of the Google auth providers. Is there an end-to-end demo deployment hosted anywhere?
Hi @travisghansen,
I'm trying to get eas working in the following architecture (each component is a container):
Here is my eas configuration:
let config_token = {
eas: {
plugins: [
{
type: "oidc",
issuer: {
discover_url: "https://auth.example.com/auth/realms/demo/.well-known/openid-configuration"
},
client: {
client_id: 'eas',
client_secret: 'xxxx',
},
scopes: ['openid', 'email', 'profile'],
custom_authorization_parameters: {},
redirect_uri: 'https://eas.example.com/oauth/callback',
features: {
cookie_expiry: false,
userinfo_query: true,
session_expiry: true,
session_expiry_refresh_window: 86400,
session_retain_id: true,
authorization_token: 'access_token',
fetch_userinfo: true,
},
assertions: {
exp: true,
nbf: true,
iss: true,
userinfo: []
},
xhr: {},
cookie: {},
custom_error_headers: {},
custom_service_headers: {},
}
]
}
}
The demo backend is declared with the following labels in Traefik:
- "traefik.enable=true"
- "traefik.http.routers.demo.rule=Host(`demo.example.com`)"
- "traefik.http.routers.demo.entryPoints=https"
- "traefik.http.routers.demo.tls=true"
- "traefik.http.routers.demo.tls.certResolver=letsencrypt"
- "traefik.http.routers.demo.middlewares=eas-gitlab@file"
- "traefik.http.services.demo.loadbalancer.server.port=8000"
- "traefik.http.services.demo.loadbalancer.server.scheme=http"
And the middleware:
http:
middlewares:
eas-gitlab:
forwardAuth:
trustForwardHeader: true
address: "https://eas.example.com/verify?config_token=xxx"
Once I'm logged in to Keycloak, I'm redirected to eas.example.com and then get a response with the following header:
location: https://eas.example.comundefined/?__eas_oauth_handler__=authorization_callback&code=xxx&session_state=yy&state=xxx
I first had a look at #23 but I must confess I'm not sure I understand the source issue. As far as I can tell, all user requests received by eas include X-Forwarded headers (but not Traefik's requests to /verify, of course):
X-Forwarded-For: xxxx
X-Forwarded-Host: eas.example.com
X-Forwarded-Port: 443
X-Forwarded-Proto: https
X-Forwarded-Server: b1dbda6b3efc
Here are some eas debug logs where undefined appears for the first time:
eas_1 | verbose: parent request info: {"uri":"https://eas.example.comundefined","parsedUri":{"scheme":"https","host":"eas.example.comundefined","path":"","reference":"absolute"},"parsedQuery":{}}
eas_1 | verbose: audMD5: 3b37aad6a3106ebb7e1bf3ff6f33e857
eas_1 | verbose: cookie name: _eas_oauth_session
Can you confirm the missing part should be the requested URI?
Thanks!
Hi. I installed the server via helm and I am receiving the error from the title after ldap login.
Version:
# helm -n auth-server list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
auth-server auth-server 3 2020-01-15 10:12:52.779433674 +0100 CET deployed external-auth-server-0.1.0 1.0
Relevant logs:
{"service":"external-auth-server","level":"verbose","message":"ldap userinfo: undefined"}
{"level":"error","service":"external-auth-server","message":"Cannot read property 'data' of undefined","stack":"TypeError: Cannot read property 'data' of undefined\n at LdapPlugin.verify (/home/eas/app/src/plugin/ldap/index.js:115:30)\n at process._tickCallback (internal/process/next_tick.js:68:7)"}
{"service":"external-auth-server","level":"info","message":"end verify pipeline with status: 503"}
Maybe the check at
external-auth-server/src/plugin/ldap/index.js
Line 112 in c4d9bec
should be userinfo != null instead of userinfo !== null?
At the moment, it is required to generate a config token before actually using the app; the token is then used with the app.
In a Docker environment, it's best practice to be able to provide everything at runtime in one step.
Maybe the config_token variable should be extracted from generate-config-token.js into a separate file, and that file could be mounted into the container at runtime. The token would then be generated in the entrypoint, just before the app itself runs.
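One possible shape for this, sketched as a docker-compose fragment (the image, file names, and entrypoint behaviour are hypothetical, not the project's actual interface):

```yaml
# Sketch: mount the raw config source read-only; a wrapper entrypoint would
# run generate-config-token.js against it and export the resulting token
# before launching the server with `npm start`.
services:
  eas:
    image: travisghansen/external-auth-server
    volumes:
      - ./config-token-source.js:/home/eas/app/config-token-source.js:ro
```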
We use this config for several ingress configurations:
# traefik ingress
ingress.kubernetes.io/auth-type: forward
ingress.kubernetes.io/auth-url: "https://eas.example.com/verify?config_token=CONFIG_TOKEN_HERE"
ingress.kubernetes.io/auth-response-headers: X-Userinfo, X-Id-Token, X-Access-Token, Authorization
Recently we got this kind of error messages from nginx and apache servers:
400 Bad Request
Request Header Or Cookie Too Large
nginx/1.17.7
It turns out that users who belong to several GitHub organisations have a lot of data in the userinfo part. From my point of view it would be great to get a limited info block containing only the login name, user id, email address, and MFA status.
I checked the headers, and the userinfo part alone is about 4k bytes for my user...
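A hedged sketch of the kind of allow-list filtering being requested, applied before the blob is serialized into the X-Userinfo header (field names are illustrative, not a confirmed eas or GitHub schema):

```javascript
// Sketch: shrink userinfo to a small allow-list of fields so the serialized
// header stays well under proxy header-size limits.
function limitUserinfo(userinfo, fields = ["login", "id", "email", "two_factor_authentication"]) {
  const limited = {};
  for (const key of fields) {
    if (key in userinfo) limited[key] = userinfo[key];
  }
  return limited;
}

const slim = limitUserinfo({
  login: "octocat",
  id: 1,
  email: "o@example.com",
  bio: "long bio",
  orgs: ["org1", "org2"], // the large org data gets dropped
});
// slim only carries the allow-listed keys
```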
Hi,
I'm using external-auth-server together with Kubernetes and Traefik v2.1. It's really simple to configure and use.
I had, actually still have, one issue which took me hours to track down.
In my case, I have Home-Assistant as the service which should be protected by external-auth-server. This application is a progressive web app (PWA), which means it uses a service-worker quite heavily.
So, each initial authentication process was working fine (clean browser or incognito tab), because there was no service worker in place. After an hour (the token expired), I get redirected to Google for another sign-in. But this time the final cookie cannot be set or read and the request ends in a 503 error.
I guess the issue behind this is that the two cookies set have the flag httpOnly. This disallows the service-worker from getting/setting the cookies. But I'm not absolutely sure whether this is the only problem. If I tick the option "Bypass for network" in the Chrome debug tools, then everything works fine.
Is there already an option to configure which flags should be set by external-auth-server? If not, would this be a huge task? I know I will lose a bit of security by not setting this flag, but I don't see any other possibility. On the other hand, I'm not very experienced with service-workers.
Thank you.
Best Danny
I use Keycloak for oidc, and Keycloak currently returns the realm roles in the access token.
I would like to be able to assert that the realm roles contain a given value.
If I can only do this for the id_token, I will have to configure an additional mapper in Keycloak for each client and map the realm roles into the id token.
I decided to open a new issue for this. It's not related to the new changes #51 and #50. The issue was already there.
It could be difficult for you to reproduce this, as it only happens on websites which have a service-worker in place. In my case it's home-assistant.
The very first authentication process in a clean browser window (empty caches or incognito) always works. If I activate the service-worker setting "Bypass for network" in Chrome (desktop), then it works every time, not only on the first attempt.
In every other case, if I browse to my endpoint ha.mydomain.com, I get redirected to Google's login page. After login, I get redirected back to auth.mydomain.com and from there to my final website ha.mydomain.com. But on this final site, I always receive a 503 coming from eas. I guess the behaviour depends on wrong or missing csrf cookies. Please find below the logs in verbose mode. I tried to remove as little information as possible; I guess there is still too much private information inside. The log abstract is from the sub-process after I hit login on Google's auth page.
{"service":"external-auth-server","level":"verbose","message":"parsed state redirect uri: {\"scheme\":\"https\",\"host\":\"ha.mydomain.com\",\"path\":\"/\",\"reference\":\"absolute\"}"}
{"service":"external-auth-server","level":"verbose","message":"parsed request uri: {\"path\":\"/oauth/callback\",\"query\":\"state=074e40bb50c3b25721f56f3b07cd7ab3b53ce383472acfe8286ddef11b7073f0892cd088c3189bba5758d1740bb39e3bd16216d23d206b7e6e5dabe1b181dbea25a63bbec292125525c46fde1914f2f722b0b126dc2fc62a99f4feccced52130bcf1805568f7523e5a305c6c988c4b152b5b65115e770753694e99f9e46743e31b33b70b22bc58fbe549dcaa447343555a8027fa332b436249bd782268a8316addf3224cd5b3b86f740ddf7551dc5a628d6d6b559fa8f54874b5e0874e2cc00068dbe46c0a4a3e0628e094f85f395d5ae7729e2c263c05edc4d25c6df88ccc6f39c40678b4f5e4830ac5a0a63ca5a795b162dde912a355e4d0ff3a82a1eb2fe29fbe43e5c0f023818f1a2b8f9b331bddade1da04c2699216ab3f6760d76401ca&code=4%2FwAEXNukyYa1KIyEmhX67QWDsimepPfAbK4AG9m6L1SZtLsB5jfoPYuHewaOIKlI65XGIii5rr02ORetzvmij47g&scope=email+profile+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.profile+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email+openid&authuser=0&prompt=none&session_state=9bf4db09808cd433cedefc5225a695fcf4fca18d..5df4\",\"reference\":\"relative\"}"}
{"service":"external-auth-server","level":"verbose","message":"parsed redirect uri: {\"scheme\":\"https\",\"host\":\"ha.mydomain.com\",\"path\":\"/\",\"query\":\"__eas_oauth_handler__=authorization_callback&authuser=0&code=4%2FwAEXNukyYa1KIyEmhX67QWDsimepPfAbK4AG9m6L1SZtLsB5jfoPYuHewaOIKlI65XGIii5rr02ORetzvmij47g&prompt=none&scope=email%20profile%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.profile%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email%20openid&session_state=9bf4db09808cd433cedefc5225a695fcf4fca18d..5df4&state=074e40bb50c3b25721f56f3b07cd7ab3b53ce383472acfe8286ddef11b7073f0892cd088c3189bba5758d1740bb39e3bd16216d23d206b7e6e5dabe1b181dbea25a63bbec292125525c46fde1914f2f722b0b126dc2fc62a99f4feccced52130bcf1805568f7523e5a305c6c988c4b152b5b65115e770753694e99f9e46743e31b33b70b22bc58fbe549dcaa447343555a8027fa332b436249bd782268a8316addf3224cd5b3b86f740ddf7551dc5a628d6d6b559fa8f54874b5e0874e2cc00068dbe46c0a4a3e0628e094f85f395d5ae7729e2c263c05edc4d25c6df88ccc6f39c40678b4f5e4830ac5a0a63ca5a795b162dde912a355e4d0ff3a82a1eb2fe29fbe43e5c0f023818f1a2b8f9b331bddade1da04c2699216ab3f6760d76401ca\",\"reference\":\"absolute\"}"}
{"service":"external-auth-server","level":"info","message":"redirecting browser to: \"https://ha.mydomain.com/?__eas_oauth_handler__=authorization_callback&authuser=0&code=4%2FwAEXNukyYa1KIyEmhX67QWDsimepPfAbK4AG9m6L1SZtLsB5jfoPYuHewaOIKlI65XGIii5rr02ORetzvmij47g&prompt=none&scope=email%20profile%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.profile%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email%20openid&session_state=9bf4db09808cd433cedefc5225a695fcf4fca18d..5df4&state=074e40bb50c3b25721f56f3b07cd7ab3b53ce383472acfe8286ddef11b7073f0892cd088c3189bba5758d1740bb39e3bd16216d23d206b7e6e5dabe1b181dbea25a63bbec292125525c46fde1914f2f722b0b126dc2fc62a99f4feccced52130bcf1805568f7523e5a305c6c988c4b152b5b65115e770753694e99f9e46743e31b33b70b22bc58fbe549dcaa447343555a8027fa332b436249bd782268a8316addf3224cd5b3b86f740ddf7551dc5a628d6d6b559fa8f54874b5e0874e2cc00068dbe46c0a4a3e0628e094f85f395d5ae7729e2c263c05edc4d25c6df88ccc6f39c40678b4f5e4830ac5a0a63ca5a795b162dde912a355e4d0ff3a82a1eb2fe29fbe43e5c0f023818f1a2b8f9b331bddade1da04c2699216ab3f6760d76401ca\""}
{"message":"starting verify pipeline","level":"info","service":"external-auth-server"}
{"service":"external-auth-server","level":"info","message":"starting verify for plugin: oidc"}
{"service":"external-auth-server","level":"verbose","message":"parent request info: {\"uri\":\"https://ha.mydomain.com/?__eas_oauth_handler__=authorization_callback&authuser=0&code=4%2FwAEXNukyYa1KIyEmhX67QWDsimepPfAbK4AG9m6L1SZtLsB5jfoPYuHewaOIKlI65XGIii5rr02ORetzvmij47g&prompt=none&scope=email%20profile%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.profile%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email%20openid&session_state=9bf4db09808cd433cedefc5225a695fcf4fca18d..5df4&state=074e40bb50c3b25721f56f3b07cd7ab3b53ce383472acfe8286ddef11b7073f0892cd088c3189bba5758d1740bb39e3bd16216d23d206b7e6e5dabe1b181dbea25a63bbec292125525c46fde1914f2f722b0b126dc2fc62a99f4feccced52130bcf1805568f7523e5a305c6c988c4b152b5b65115e770753694e99f9e46743e31b33b70b22bc58fbe549dcaa447343555a8027fa332b436249bd782268a8316addf3224cd5b3b86f740ddf7551dc5a628d6d6b559fa8f54874b5e0874e2cc00068dbe46c0a4a3e0628e094f85f395d5ae7729e2c263c05edc4d25c6df88ccc6f39c40678b4f5e4830ac5a0a63ca5a795b162dde912a355e4d0ff3a82a1eb2fe29fbe43e5c0f023818f1a2b8f9b331bddade1da04c2699216ab3f6760d76401ca\",\"parsedUri\":{\"scheme\":\"https\",\"host\":\"ha.mydomain.com\",\"path\":\"/\",\"query\":\"__eas_oauth_handler__=authorization_callback&authuser=0&code=4%2FwAEXNukyYa1KIyEmhX67QWDsimepPfAbK4AG9m6L1SZtLsB5jfoPYuHewaOIKlI65XGIii5rr02ORetzvmij47g&prompt=none&scope=email%20profile%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.profile%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email%20openid&session_state=9bf4db09808cd433cedefc5225a695fcf4fca18d..5df4&state=074e40bb50c3b25721f56f3b07cd7ab3b53ce383472acfe8286ddef11b7073f0892cd088c3189bba5758d1740bb39e3bd16216d23d206b7e6e5dabe1b181dbea25a63bbec292125525c46fde1914f2f722b0b126dc2fc62a99f4feccced52130bcf1805568f7523e5a305c6c988c4b152b5b65115e770753694e99f9e46743e31b33b70b22bc58fbe549dcaa447343555a8027fa332b436249bd782268a8316addf3224cd5b3b86f740ddf7551dc5a628d6d6b559fa8f54874b5e0874e2cc00068dbe46c0a4a3e0628e094f85f395d5ae7729e2c263c05edc4d25c6df88
ccc6f39c40678b4f5e4830ac5a0a63ca5a795b162dde912a355e4d0ff3a82a1eb2fe29fbe43e5c0f023818f1a2b8f9b331bddade1da04c2699216ab3f6760d76401ca\",\"reference\":\"absolute\"},\"parsedQuery\":{\"__eas_oauth_handler__\":\"authorization_callback\",\"authuser\":\"0\",\"code\":\"4/wAEXNukyYa1KIyEmhX67QWDsimepPfAbK4AG9m6L1SZtLsB5jfoPYuHewaOIKlI65XGIii5rr02ORetzvmij47g\",\"prompt\":\"none\",\"scope\":\"email profile https://www.googleapis.com/auth/userinfo.profile https://www.googleapis.com/auth/userinfo.email openid\",\"session_state\":\"9bf4db09808cd433cedefc5225a695fcf4fca18d..5df4\",\"state\":\"074e40bb50c3b25721f56f3b07cd7ab3b53ce383472acfe8286ddef11b7073f0892cd088c3189bba5758d1740bb39e3bd16216d23d206b7e6e5dabe1b181dbea25a63bbec292125525c46fde1914f2f722b0b126dc2fc62a99f4feccced52130bcf1805568f7523e5a305c6c988c4b152b5b65115e770753694e99f9e46743e31b33b70b22bc58fbe549dcaa447343555a8027fa332b436249bd782268a8316addf3224cd5b3b86f740ddf7551dc5a628d6d6b559fa8f54874b5e0874e2cc00068dbe46c0a4a3e0628e094f85f395d5ae7729e2c263c05edc4d25c6df88ccc6f39c40678b4f5e4830ac5a0a63ca5a795b162dde912a355e4d0ff3a82a1eb2fe29fbe43e5c0f023818f1a2b8f9b331bddade1da04c2699216ab3f6760d76401ca\"},\"method\":\"GET\"}"}
{"service":"external-auth-server","level":"verbose","message":"audMD5: 8dd2fd4e5d13b05256fd8f1cdf42904a"}
{"service":"external-auth-server","level":"verbose","message":"cookie name: _eas_oauth_session"}
{"service":"external-auth-server","level":"verbose","message":"decoded state: {\"request_uri\":\"https://ha.mydomain.com/\",\"aud\":\"8dd2fd4e5d13b05256fd8f1cdf42904a\",\"csrf\":\"667598ee-890e-4777-a164-d459e85be1a8\",\"iat\":1580754168}"}
{"service":"external-auth-server","level":"verbose","message":"audMD5: 8dd2fd4e5d13b05256fd8f1cdf42904a"}
{"service":"external-auth-server","level":"verbose","message":"cookie name: _eas_oauth_session"}
{"message":"mismatched csrf values","level":"verbose","service":"external-auth-server"}
{"service":"external-auth-server","level":"info","message":"end verify pipeline with status: 503"}
{"message":"starting verify pipeline","level":"info","service":"external-auth-server"}
{"service":"external-auth-server","level":"info","message":"starting verify for plugin: oidc"}
{"service":"external-auth-server","level":"verbose","message":"parent request info: {\"uri\":\"https://ha.mydomain.com/service_worker.js\",\"parsedUri\":{\"scheme\":\"https\",\"host\":\"ha.mydomain.com\",\"path\":\"/service_worker.js\",\"reference\":\"absolute\"},\"parsedQuery\":{},\"method\":\"GET\"}"}
{"service":"external-auth-server","level":"verbose","message":"audMD5: 8dd2fd4e5d13b05256fd8f1cdf42904a"}
{"service":"external-auth-server","level":"verbose","message":"cookie name: _eas_oauth_session"}
{"service":"external-auth-server","level":"verbose","message":"redirect_uri: https://auth.mydomain.com/oauth/callback"}
{"service":"external-auth-server","level":"verbose","message":"callback redirect_uri: https://accounts.google.com/o/oauth2/v2/auth?client_id=myclientid.apps.googleusercontent.com&scope=openid%20email%20profile&response_type=code&access_type=offline&redirect_uri=https%3A%2F%2Fauth.mydomain.com%2Foauth%2Fcallback&state=074e40bb50c3b25721f56f3b07cd7ab3b53ce383472acfe8286ddef11b7073f0892cd088c3189bba5758d1740bb39e3bd16216d23d206b7e6e5dabe1b181dbea25a63bbec292125525c46fde1914f2f704edb801be09e5658087e4c188bd15b68f74be0c0382e8ce388d18237b4ba74132668303563bc8317b08e334384b666b4851aa0a5a4aa2a2dbaefc293657a1da4f2614f4ff516f6b89e39088652be5214244ae38e1555469931f9a68fa99a4b1486cf1d183ee7d520538b9ca292f112224be5f406f5365cbba7ab6162d05ebc3e6130f52998d909c66424937f40d6f4692d844c3fc18b26e009f07d4a528e670299ba0a806bc1479b26fc7f90d0f68d1182ade3a8c5ec62dfba1fdbe994e1ae53f3af39946a2d8416f331503f32a2ed159254f7e8440dc51513f6d30f798dfc8"}
{"service":"external-auth-server","level":"info","message":"end verify pipeline with status: 302"}
To get eas working with an Angular SPA, JavaScript needs to access the HTTP response that eas gives to redirect the user in case of an invalid or expired session.
Would it be possible to enable eas to set Access-Control-Allow-Origin to "*"?
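A minimal sketch of the headers involved (the hook point inside eas is hypothetical; the header names are standard CORS):

```javascript
// Sketch: headers an eas response would need so a browser XHR from the SPA
// is allowed to read it.
function addCorsHeaders(res) {
  // allow any origin to read the response
  res.setHeader("Access-Control-Allow-Origin", "*");
  // expose the redirect target so client-side JavaScript can follow it
  res.setHeader("Access-Control-Expose-Headers", "Location");
}
```

One caveat worth noting: browsers reject a literal "*" when the request is made with credentials (cookies), so for cookie-based eas sessions a concrete origin, or echoing the request's Origin header, would likely be needed instead.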
On Kubernetes, with ingress.yaml annotations such as:
annotations:
# Enable authentication - Forward auth to external-auth-server
ingress.kubernetes.io/auth-type: forward
ingress.kubernetes.io/auth-url: http://external-auth-server.external-auth-server.svc.k8s:8080/verify?fallback_plugin=0&config_token=<URL safe config_token> # Inside cluster
ingress.kubernetes.io/auth-response-headers: X-Userinfo, X-Id-Token, X-Access-Token, Authorization
and an auth pipeline of:
when htpasswd auth fails, eas proceeds to ldap. If the LDAP server is unreachable, no response is returned until Traefik finally times out after 240s.
eas logs (the htpasswd password is 'myPassword'):
$ kubectl logs --namespace=external-auth-server external-auth-server-76b4ffc4f-7mxg6 --follow
{"service":"external-auth-server","level":"info","message":"starting verify for plugin: htpasswd"}
{"service":"external-auth-server","level":"debug","message":"plugin response {\"statusCode\":401,\"statusMessage\":\"\",\"body\":\"\",\"cookies\":[],\"clearCookies\":[],\"headers\":{\"WWW-Authenticate\":\"Basic realm=\\\"external authentication server\\\"\"},\"authenticationData\":{},\"plugin\":{\"server\":{},\"config\":{\"type\":\"htpasswd\",\"htpasswd\":\"foo:$apr1$qHDFfhPC$nITSVHgYbDAK1Y0acGRnY0\\nbar:$apr1$qHDFfhPC$nITSVHgYbDAK1Y0acGRnY0\\n\",\"pcb\":{}}}}"}
{"service":"external-auth-server","level":"info","message":"starting verify for plugin: ldap"}
{"service":"external-auth-server","level":"verbose","message":"parent request info: {\"uri\":\"https://foo.example.com/favicon.ico\",\"parsedUri\":{\"scheme\":\"https\",\"host\":\"foo.example.com\",\"path\":\"/favicon.ico\",\"reference\":\"absolute\"},\"parsedQuery\":{},\"method\":\"GET\"}"}
{"service":"external-auth-server","level":"verbose","message":"LdapAuth connection closed: undefined"}
Do you really need a full openid provider or would a GitHub oauth application also work?
Currently the docker image for external-auth-server sits at just over 1.04GB. This can be reduced using multi-stage builds. Here's an initial pass at a Dockerfile for eas:
FROM node:10 AS builder
WORKDIR /root
COPY package*.json ./
RUN npm install
FROM node:10-alpine AS release
# Run as a non-root user
RUN adduser --disabled-password eas \
&& mkdir /home/eas/app \
&& chown -R eas: /home/eas
WORKDIR /home/eas/app
USER eas
COPY --from=builder --chown=eas:eas /root/node_modules ./node_modules
COPY --chown=eas:eas . .
EXPOSE 8080
CMD [ "npm", "start" ]
Using node:10-alpine, the resulting image ends up being just under 200MB:
docker images external-auth-server:slim
REPOSITORY TAG IMAGE ID CREATED SIZE
external-auth-server slim bfc66bf5f94c 38 seconds ago 194MB
I haven't tested whether eas will work with this change, but I thought I'd at least suggest a way to reduce the current eas image by four fifths.
Hey Travis, saw your repo referenced here and it looks like this might apply to what I'm working on right now, but I'm not totally clear on the problem you're trying to solve.
I've got a docker swarm where I'm going to route all incoming traffic through Traefik, and expose KeyCloak for auth and a bunch of APIs will be protected with the forward auth Traefik supports.
Is this repo just doing the validation of the auth tokens? Or encrypting auth tokens so they're not visible in the outside world? Or supporting multiple identity providers? All the above? None?
I'm also confused about what the config token is for; can you clarify a bit?
Please don't take any of the above negatively, the features list checks all the boxes I need to check, but I'm confused on the inner workings and some of the implementation. Any additional info you could provide would be appreciated. I'll keep digging through the code to understand more.
Here are two of my repos where I've done POCs and where I think I'm implementing things right around this boundary:
https://github.com/gregberns/forward-auth
https://github.com/gregberns/identity-provider-demo
Hi,
Just wondering if token encryption is strictly necessary.
Is it so that you can safely send tokens over an http connection to the auth server, rather than forcing the use of https?
If that is the reason, could it be possible to support unencrypted tokens when using https to talk to the auth server? In my case, when using Istio, I can use mutual TLS.
Hi
I'm using the oidc auth module to authenticate against Google. My actual problem is that after an hour I have to re-authenticate against Google. I've read that Google has to be called with access_type=offline to get a refresh_token, but I haven't found any way to add this to the configuration. I also inspected the request in the Chrome dev tools, and this param is missing.
In addition, I don't use Redis. I only run a single-node k3s cluster with just one instance of Traefik v2 and external-auth-server; I guess in this case Redis is optional.
I use the following configuration, just to check whether all the other stuff is correct:
let config_token = {
aud: "mydomain.io",
eas: {
plugins: [{
type: "oidc",
issuer: {
discover_url: "https://accounts.google.com/.well-known/openid-configuration",
},
client: {
client_id: "myid.apps.googleusercontent.com",
client_secret: "mysecret"
},
scopes: ["openid", "email", "profile"], // must include openid
redirect_uri: "https://auth.domain.io/oauth/callback",
features: {
cookie_expiry: true,
userinfo_expiry: true,
session_expiry: true,
session_expiry_refresh_window: 60 * 30, // Google's access_token expires within 60min
session_retain_id: true,
refresh_access_token: true,
fetch_userinfo: true,
introspect_access_token: false, // Not supported by Google
authorization_token: "access_token"
},
assertions: {
exp: true,
nbf: true,
iss: true,
userinfo: [ {
query_engine: "jp",
query: "$.email",
rule: {
method: "in",
value: ["[email protected]"],
case_insensitive: false
}
} ]
},
cookie: {
domain: "mydomain.io",
},
headers: {},
}]
}
};
Google's access_token has a validity of 60 minutes.
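Regarding the missing parameter: if eas supports extra authorization-request parameters (a custom_authorization_parameters option appears in another config token in this thread), a hedged sketch of what could be added to the oidc plugin config (verify the key name against the eas docs for your version):

```javascript
// Hypothetical fragment for the oidc plugin config:
const oidcPluginFragment = {
  custom_authorization_parameters: {
    access_type: "offline", // Google only returns a refresh_token for offline access
    prompt: "consent",      // Google re-issues the refresh_token on a consent prompt
  },
};
```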
Thank you.
Best Danny
I am trying to get k3d/traefik/dex/eas working. Thus far I was able to set up a working demonstrator with dex on http (80), but now I'm struggling to get the same setup working with dex on https (443) with a self-signed cert. I'm getting "unable to verify the first certificate" from the oidc plugin.
k3d version v1.7.0
k3s version v1.17.3-k3s1
helm v3.1.2
stable/dex 2.21.0 (installed from helm)
external-auth-server (git clone sha 750875c)
I'm using external-auth-server/charts/external-auth-server/ along with a values.yaml that embeds what looks like a way to inject a self-signed cert CA:
nodeExtraCaCerts: |-
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
With kubectl exec into the external-auth-server pod, I can see my cert populated in /tmp/certs/node-extra-ca-certs.crt (and verified it is the self-signed cert used in my dex deployment), as well as an environment variable populated:
eas@external-auth-server-84bf99f9fc-r4nqn:~$ printenv | grep -i node-extra
NODE_EXTRA_CA_CERTS=/tmp/certs/node-extra-ca-certs.crt
But the eas oidc plugin log still gives:
{"code":"UNABLE_TO_VERIFY_LEAF_SIGNATURE","level":"error","service":"external-auth-server","message":"unable to verify the first certificate","stack":"Error: unable to verify the first certificate
at TLSSocket.onConnectSecure (_tls_wrap.js:1058:34)
at TLSSocket.emit (events.js:198:13)
at TLSSocket._finishInit (_tls_wrap.js:636:8)"} {"service":"external-auth-server","level":"info","message":"end verify pipeline with status: 503"}
What is the correct way to get eas with oidc plugin to trust a dex service with a self-signed cert?
Hi,
We would like to implement EAS with OIDC because of its flexibility. In our use case we have web and desktop applications using web services authenticated by EAS.
Some desktop applications don't support the Authorization Code Flow, only Basic Authentication. That's why we would like to have the "Password Credentials Flow" supported.
I would like to hear your thoughts on this.
If you're positive about it, I'll give it a try to implement it and create a PR.
Currently X-Userinfo can be passed through to applications so that they have data about the logged-in user. This data is a JSON-encoded blob.
However, I have some applications (e.g. Grafana) that I can only configure to read a header (I can choose the name) containing either a username or an e-mail address; I cannot configure them to do any JSON parsing and unwrapping.
Would it be possible to add configuration that allows some Userinfo properties to be unwrapped into their own headers?
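Something along these lines might work as a configuration shape. This is purely a sketch of a proposed syntax: the source/query_engine/query property names are borrowed from the userinfo assertion syntax elsewhere in this thread, not a confirmed header API.

```javascript
headers: {
  // hypothetical: copy userinfo.email into its own plain-text header
  "X-WEBAUTH-USER": {
    source: "userinfo",
    query_engine: "jp",
    query: "$.email"
  }
}
```

Grafana's auth proxy could then be pointed at X-WEBAUTH-USER directly, with no JSON unwrapping on its side.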
Right now I use GitHub with oauth. In case of an internet outage or GitHub problem I would like to have a fallback to htpasswd or IP based auth.
Is this already possible?
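Judging from the fallback_plugin=0 query parameter that appears in the Traefik annotation examples later in this thread, something like the following might already be close. This is only a sketch under assumptions: plugin order plus the fallback_plugin index would decide whose response is used, and the htpasswd option name shown is a placeholder, not a confirmed setting.

```javascript
plugins: [
  {
    type: "oauth2", // the normal GitHub-backed plugin
    // ...
  },
  {
    type: "htpasswd",
    htpasswd: "/path/to/htpasswd" // hypothetical option name
  }
]
// verify URL sketch:
// https://eas.example.com/verify?fallback_plugin=1&config_token=...
```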
Correct me if I'm wrong, or if this belongs somewhere completely else; feel free to remove this request altogether.
As far as I understand, in order to sign out from Keycloak you will need:
I'm a bit reluctant to start hard coding these variables into my back-end, which ideally shouldn't know anything about the whole SSO circus.
Instead, I would suggest (since EAS already gets all this information in the config-token supplied by each request) the creation of either:
I'm happy to hear your thoughts on this one :)
Unlike Pomerium and Authelia, this project provides no UI. It is to be combined with an external auth provider (e.g. Keycloak for self-hosting) and a reverse proxy like Traefik or Nginx. Upon successful authentication, it then forwards the credentials via headers (I'm not sure if this differs from others providing OAuth2/OIDC redirect flows with an auth subdomain and UI, or a third-party SSO provider).
Is that all correct? I haven't dug into the assertions feature or what I think you've called pipelined auth(mixing OAuth with JWT for example and reacting based on what was successfully used/discovered), putting those aside, what are similar projects and are those features what help set EAS apart?
While the project has support for LDAP and OAuth/OIDC, I assume it doesn't help much with projects like BookStack which allow for third-party auth or LDAP to manage users/login accounts to their service. Is EAS able to be of value here? Do I need to request/contribute support to BookStack project to be compatible with the way EAS works? If so what is required?
For a service to work with EAS, if I understand correctly, it must support receiving user auth via header values? Grafana seems to refer to this as an Auth Proxy, as does the support with Discourse. So: Reverse Proxy (redirect) => EAS (forward auth headers on success from auth provider) => Service (autologin)? I think you have referred to this before as the Auth Code Flow? (Still not familiar with the different auth flows yet.)
In my case, I am looking at moving from services that manage their own individual accounts, each requiring a separate login (or linking to a user's preferred SSO provider), to a single auth gateway, where a user can link to an external SSO provider (assuming that's compatible with what I'd like), or have all services share a self-hosted auth provider (e.g. Keycloak), which for services that support it can all share account data via LDAP?
A user joining the community should only have to sign up/register at one location, and only link their account (e.g. to Google) once, not manually link each service to eventually get a single/seamless login experience across all services provided to the community. I'm not sure how achievable this is, or if EAS is suitable to achieve it. I'm still trying to grasp a good enough understanding of the auth and user-management technologies available, and what is required of the third-party services to support this (Grafana and Discourse should be OK from the looks of it, but I'm uncertain about BookStack).
Single server with Docker containers for each service currently. nginx-proxy is used, but it will need to be customized, or I'll need to switch to Traefik for forward-auth support, as with nginx it seems to require a module added at build time (which the nginx-proxy image does not have).
EDIT: Seems I misunderstood the Discourse auth proxy, that's the wrong direction I was asking about, as it's redirecting users to login via Discourse SSO to the intended service url. I had thought originally that it was support for passing auth details to Discourse to login from an external auth service.
Hello.
I'm trying to incorporate this into Traefik v2.0 in combination with Keycloak.
Traefik v2.0 allows configuring a separate forward-authentication middleware, which can then be attached on a per-container basis.
I could then attach this forward-authentication middleware to each container, allowing for single sign-on to all my microservices.
However, in Keycloak I have to configure a Base URL, Root URL, etc.
Would this mean I could only use one URL?
What should my client configuration in my Keycloak realm look like in order to make this work?
Do I need to proxy the EAS port 8080 and configure that as a Root URL?
Can anyone point me in the right direction?
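For reference, a minimal sketch of what the Keycloak client might look like, assuming all services share eas's single static /oauth/callback (as the redirect_uri examples elsewhere in this thread do). Every value below is a placeholder, not a confirmed setting:

```
Client ID:           eas                  (must match client_id in the config_token)
Access Type:         confidential
Root URL:            (can be left empty)
Valid Redirect URIs: https://eas.example.com/oauth/callback
```

If I read the static redirect_uri examples correctly, the individual protected services never redirect to Keycloak themselves, so only the eas callback needs to be registered, and the per-container URLs stay out of the Keycloak client config.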
For the most part the redirects seem to be working. It fails when the external auth server retrieves tokens.
{"service":"external-auth-server","level":"verbose","message":"audMD5: 8708765772d4ac341e8f85effde0c452"}
{"service":"external-auth-server","level":"verbose","message":"cookie name: _eas_oauth_session"}
{"service":"external-auth-server","level":"verbose","message":"decoded state: {\"request_uri\":\"http://nexus.sandbox.xxxx.com/\",\"aud\":\"8708765772d4ac341e8f85effde0c452\",\"csrf\":\"c8f2c9ca-27b5-4818-a2ed-9692d32b2133\",\"req\":{\"headers\":{}},\"request_is_xhr\":false,\"iat\":1583963627}"}
{"service":"external-auth-server","level":"verbose","message":"audMD5: 8708765772d4ac341e8f85effde0c452"}
{"service":"external-auth-server","level":"verbose","message":"cookie name: _eas_oauth_session"}
{"message":"begin token fetch with authorization code","level":"verbose","service":"external-auth-server"}
{"service":"external-auth-server","level":"verbose","message":"compare_redirect_uri: https://eas.sandbox.xxxxx.com/oauth/callback"}
{"message":"failed to retrieve tokens","level":"verbose","service":"external-auth-server"}
{"level":"error","service":"external-auth-server","message":"Unexpected token P in JSON at position 0","stack":"SyntaxError: Unexpected token P in JSON at position 0\n at JSON.parse (<anonymous>)\n at authenticatedPost.then.then.response (/home/eas/app/node_modules/openid-client/lib/client.js:801:43)\n at process._tickCallback (internal/process/next_tick.js:68:7)"}
Here's the config_token:
{
  type: "oidc",
  issuer: {
    discover_url: "http://keycloak.xxxx.com/auth/realms/master/.well-known/uma2-configuration",
  },
  client: {
    client_id: "eas",
    client_secret: "xxxx"
  },
  scopes: ["openid", "email", "profile"], // must include openid
  redirect_uri: "https://eas.xxxx.com/oauth/callback",
  features: {
    cookie_expiry: false,
    userinfo_expiry: true,
    session_expiry: true,
    session_expiry_refresh_window: 86400,
    session_retain_id: true,
    refresh_access_token: true,
    fetch_userinfo: true,
    introspect_access_token: false,
    authorization_token: "access_token"
  },
  assertions: {
    exp: true,
    nbf: true,
    iss: true,
  },
  cookie: {
    domain: "xxxx.com", // defaults to request domain, could do sso with more generic domain
  },
  headers: {},
}
I have set up Traefik and eas based on the howto, and have authentication with Google OAuth working. The request comes back to the eas server, which then forwards to the original URL, but that fails with a 503 code.
The original URL's webserver pod never gets the request; for some reason the 503 is generated by the eas server. It looks like the successful authentication is not recognized.
{"message":"starting verify pipeline","level":"info","service":"external-auth-server"}
{"service":"external-auth-server","level":"info","message":"starting verify for plugin: oauth2"}
{"service":"external-auth-server","level":"info","message":"redirecting to original resource: https://<original-URL>/"}
{"service":"external-auth-server","level":"info","message":"end verify pipeline with status: 302"}
{"message":"starting verify pipeline","level":"info","service":"external-auth-server"}
{"service":"external-auth-server","level":"info","message":"starting verify for plugin: oauth2"}
{"level":"error","service":"external-auth-server","message":"Cannot read property 'data' of undefined","stack":"TypeError: Cannot read property 'data' of undefined\n at OauthPlugin.prepare_token_headers (/home/eas/app/src/plugin/oauth/index.js:790:30)\n at OauthPlugin.verify (/home/eas/app/src/plugin/oauth/index.js:670:24)\n at process._tickCallback (internal/process/next_tick.js:68:7)"}
{"service":"external-auth-server","level":"info","message":"end verify pipeline with status: 503"}
How can I debug the error and figure out the solution?
Reference #16 (comment)
Here are my eas logs (basically empty):
kubectl logs eas-external-auth-server-7479497c4c-lfr2q
> [email protected] start /home/eas/app
> node --nouse-idle-notification --expose-gc --max-old-space-size=8192 src/server.js
{"service":"external-auth-server","level":"debug","message":"cache opts: {\"store\":\"memory\",\"max\":0,\"ttl\":0}"}
{"service":"external-auth-server","level":"info","message":"revoked JTIs: []"}
{"service":"external-auth-server","level":"info","message":"starting server on port 8080"}
Could it be that the token is wrong in some way, and that's why Ambassador can't talk to the eas service?
Here is my token config:
const jwt = require("jsonwebtoken");
const utils = require("../src/utils");

const config_token_sign_secret =
  process.env.EAS_CONFIG_TOKEN_SIGN_SECRET ||
  utils.exit_failure("missing EAS_CONFIG_TOKEN_SIGN_SECRET env variable");
const config_token_encrypt_secret =
  process.env.EAS_CONFIG_TOKEN_ENCRYPT_SECRET ||
  utils.exit_failure("missing EAS_CONFIG_TOKEN_ENCRYPT_SECRET env variable");

let config_token = {
  /**
   * future feature: allow blocking certain token IDs
   */
  //jti: <some known value>

  /**
   * using the same aud for multiple tokens allows sso for all services sharing the aud
   */
  //aud: "some application id", //should be unique to prevent cookie/session hijacking, defaults to a hash unique to the whole config

  eas: {
    plugins: [
      {
        type: "oidc",
        issuer: {
          /**
           * via discovery (takes preference)
           */
          discover_url: "https://dev.hal24k.nl/.well-known/openid-configuration",

          /**
           * via manual definition
           */
          //issuer: 'https://accounts.google.com',
          //authorization_endpoint: 'https://accounts.google.com/o/oauth2/v2/auth',
          //token_endpoint: 'https://www.googleapis.com/oauth2/v4/token',
          //userinfo_endpoint: 'https://www.googleapis.com/oauth2/v3/userinfo',
          //jwks_uri: 'https://www.googleapis.com/oauth2/v3/certs',
        },
        client: {
          /**
           * manually defined (preferred)
           */
          client_id: "k8s_ambassador",
          client_secret: "secretsecret"

          /**
           * via client registration
           */
          //registration_client_uri: "",
          //registration_access_token: "",
        },
        scopes: ["openid", "email", "profile"], // must include openid
        /**
         * static redirect URI
         * if your oauth provider does not support wildcards place the URL configured in the provider (that will return to this proper service) here
         */
        redirect_uri: "https://eas.hal24k.nl:8443/oauth/callback",
        features: {
          /**
           * how to expire the cookie
           * true = cookies will expire with tokens
           * false = cookies will be 'session' cookies
           * num seconds = expire after given number of seconds
           */
          cookie_expiry: false,

          /**
           * how frequently to refresh userinfo data
           * true = refresh with tokens (assuming they expire)
           * false = never refresh
           * num seconds = expire after given number of seconds
           */
          userinfo_expiry: true,

          /**
           * how long to keep a session (server side) around
           * true = expire with tokenSet (if applicable)
           * false = never expire
           * num seconds = expire after given number of seconds (enables sliding window)
           *
           * sessions become a floating window *if*
           * - tokens are being refreshed
           * or
           * - userinfo being refreshed
           * or
           * - session_expiry_refresh_window is a positive number
           */
          session_expiry: true,

          /**
           * window to update the session window based on activity if
           * nothing else has updated it (ie: refreshing tokens or userinfo)
           *
           * should be a positive number less than session_expiry
           *
           * For example, if session_expiry is set to 60 seconds and session_expiry_refresh_window value is set to 20
           * then activity in the last 20 seconds (40-60) of the window will 'slide' the window
           * out session_expiry time from whenever the activity occurred
           */
          session_expiry_refresh_window: 86400,

          /**
           * will re-use the same id (ie: same cookie) for a particular client if a session has expired
           */
          session_retain_id: true,

          /**
           * if the access token is expired and a refresh token is available, refresh
           */
          refresh_access_token: true,

          /**
           * fetch userinfo and include as X-Userinfo header to backing service
           */
          fetch_userinfo: true,

          /**
           * check token validity with provider during assertion process
           */
          introspect_access_token: false,

          /**
           * which token (if any) to send back to the proxy as the Authorization Bearer value
           * note the proxy must allow the token to be passed to the backend if desired
           *
           * possible values are id_token, access_token, or refresh_token
           */
          authorization_token: "id_token"
        },
        assertions: {
          /**
           * assert the token(s) has not expired
           */
          exp: true,

          /**
           * assert the 'not before' attribute of the token(s)
           */
          nbf: true,

          /**
           * assert the correct issuer of the token(s)
           */
          iss: true,

          /**
           * custom userinfo assertions
           */
          userinfo: [
            // {
            //   ...
            //   see ASSERTIONS.md for details
            // },
          ],

          /**
           * custom id_token assertions
           */
          id_token: [
            // {
            //   ...
            //   see ASSERTIONS.md for details
            // },
          ]
        },
        cookie: {
          //name: "_my_company_session", //default is _oeas_oauth_session
          //domain: "example.com", //defaults to request domain, could do sso with more generic domain
          //path: "/",
        },
        // see HEADERS.md for details
        headers: {},
      },
    ], // list of plugin definitions, refer to PLUGINS.md for details
  }
};

config_token = jwt.sign(config_token, config_token_sign_secret);
const config_token_encrypted = utils.encrypt(
  config_token_encrypt_secret,
  config_token
);

//console.log("token: %s", config_token);
//console.log("");
console.log("encrypted token (for server-side usage): %s", config_token_encrypted);
console.log("");
console.log(
  "URL safe config_token: %s",
  encodeURIComponent(config_token_encrypted)
);
console.log("");
Currently token generation requires editing source files.
One idea I may suggest is to add an additional entrypoint to the Docker container for token generation: you write the config to stdin and read the generated token from stdout.
I'm looking to use this project in a kubernetes cluster with istio, and I think writing a kustomize generator would be a good integration point for token generation - so calling into the docker container from the kustomize plugin seems like a good option
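To make the idea concrete, a hypothetical invocation might look like the following. The generate-token entrypoint name is invented for illustration (no such entrypoint exists yet), and the secrets are placeholders:

```shell
# Hypothetical sketch: pipe the config in, read the token out.
docker run -i --rm \
  -e EAS_CONFIG_TOKEN_SIGN_SECRET="$SIGN_SECRET" \
  -e EAS_CONFIG_TOKEN_ENCRYPT_SECRET="$ENCRYPT_SECRET" \
  travisghansen/external-auth-server generate-token \
  < config-token.js > config-token.txt
```

A kustomize generator plugin could then shell out to exactly this command and capture stdout.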
The title is a bit overly broad, but over the last few days I've been trying to track down a long-standing issue where clients need to log in again after about an hour or so. It's not clear to me if my setup is just misconfigured or if there's a real bug here, but I wanted to open a thread to investigate. At the very least, I think the documentation about refresh tokens needs to better spell out how to configure things.
My current config is (minus some unrelated stuff):
{
  /**
   * how to expire the cookie
   * true = cookies will expire with tokens
   * false = cookies will be 'session' cookies
   * num seconds = expire after given number of seconds
   */
  cookie_expiry: 60 * 60 * 24 * 30,

  /**
   * how frequently to refresh userinfo data
   * true = refresh with tokens (assuming they expire)
   * false = never refresh
   * num seconds = expire after given number of seconds
   */
  userinfo_expiry: false,

  /**
   * how long to keep a session (server side) around
   * true = expire with tokenSet (if applicable)
   * false = never expire
   * num seconds = expire after given number of seconds (enables sliding window)
   *
   * sessions become a floating window *if*
   * - tokens are being refreshed
   * or
   * - userinfo being refreshed
   * or
   * - session_expiry_refresh_window is a positive number
   */
  session_expiry: 60 * 60 * 24 * 30,

  /**
   * will re-use the same id (ie: same cookie) for a particular client if a session has expired
   */
  session_retain_id: true,

  /**
   * if the access token is expired and a refresh token is available, refresh
   */
  refresh_access_token: true,

  /**
   * which token (if any) to send back to the proxy as the Authorization Bearer value
   * note the proxy must allow the token to be passed to the backend if desired
   *
   * possible values are access_token, or refresh_token
   */
  authorization_token: "access_token",

  /**
   * fetch userinfo and include as X-Userinfo header to backing service
   * only helpful if your specific provider has been implemented
   */
  fetch_userinfo: true,
}
The reason cookie_expiry and session_expiry are set to the same value is due to #43, but I think you'd basically always want that anyway; it doesn't really make sense to have the cookie expire sooner than the session, at least if I understand correctly.
The expiry is 30 days because that's how long the refresh token I'm provided lasts. The access_token I'm provided is good for 60 minutes.
With all of this configured I'm still seeing users needing to login again after a couple of hours. I am using redis to persist session data so it should tolerate eas being restarted and what not.
My current theory is that refresh works only once. So basically, an hour after the initial login the refresh token gets used to get a new access_token, but then something isn't persisted properly, and an hour after that login fails again. I haven't proven this is the case yet, though.
Hello,
First of all let me thank you for all your work and good product!
Do you have any plans to add an option to not use encryption on server-side tokens?
We deploy our clusters quite frequently, and encryption for tokens isn't always working for us. We're trying to avoid node scripts (for our own reasons); we used an openssl script at first and currently a python script. Occasionally we get "digital envelope routines" errors after token creation, hence the need to regenerate tokens.
We use only server-side tokens. Don't get me wrong, but I don't see much sense in using encryption on server-side tokens.
I know I am keeping you busy :) Here's one more use case: doing EAS-protected API testing, or accessing the API from code: https://auth0.com/docs/flows/concepts/client-credentials (auth0 is just an example provider).
For now we are using the JWT plugin for something like that, but users have to get an access_token from somewhere before they can call the API behind EAS; I think the client credentials flow is a more proper way to do it.
Hi,
So my setup is:
Ambassador v: 0.72.0
Keycloak v: 5.0.0
And external-auth-server as middleware providing openid-connect authentication to a webservice.
I followed the setup first for the Traefik example, substituting config where it was needed.
My configuration for the external-auth-server values.yaml looks as follows:
configTokenSignSecret: ZQTJMHgsRUYD4vaDJnmutYMU
configTokenEncryptSecret: MAuZtmwxfvcJCabSmhrvcAjv
issuerSignSecret: mEAYPr9bcZk8dFda8T6dBmzK
issuerEncryptSecret: QjjgXyVvVyVyUfBW6ZhFTG3w
cookieSignSecret: 5spCwm2jCtekwEb3G6Hcnav8
cookieEncryptSecret: YwX3gzTZmPNUL9v5RMhwsnZq
sessionEncryptSecret: jMV5WtUCfSme4xx9Qkmu2Jv2
logLevel: "info"
redis-ha:
  enabled: false
In addition I made a couple of changes to the service.yaml to accommodate Ambassador's annotation-based configuration. They look like this:
annotations:
  getambassador.io/config: |
    ---
    apiVersion: ambassador/v1
    kind: AuthService
    name: authentication
    auth_service: {{ include "external-auth-server.fullname" . }}
    proto: http
    allowed_request_headers:
      - authorization
    include_body:
      max_bytes: 4096
      allow_partial: true
    ---
    apiVersion: ambassador/v1
    kind: Mapping
    name: eas_mapping
    prefix: /eas/
    bypass_auth: true
    service: {{ include "external-auth-server.fullname" . }}
I managed to create a CONFIG_TOKEN for Keycloak.
The configuration for the plugin added to generate-config-token.js that I used was:
{
  type: "oidc",
  issuer: {
    /**
     * via discovery (takes preference)
     */
    discover_url: "http://keycloak.default.svc.cluster.local/auth/realms/master/.well-known/openid-configuration",
  },
  client: {
    /**
     * manually defined (preferred)
     */
    client_id: "lightningbadger",
    client_secret: "4dbd29da-5b81-417e-a230-abad914de57e"

    /**
     * via client registration
     */
    //registration_client_uri: "",
    //registration_access_token: "",
  },
  scopes: ["openid", "email", "profile"], // must include openid
  /**
   * static redirect URI
   * if your oauth provider does not support wildcards place the URL configured in the provider (that will return to this proper service) here
   */
  redirect_uri: "http://api.lightningbadger.io/eas/oauth/callback",
  features: {
    /**
     * how to expire the cookie
     * true = cookies will expire with tokens
     * false = cookies will be 'session' cookies
     * num seconds = expire after given number of seconds
     */
    cookie_expiry: true,

    /**
     * how frequently to refresh userinfo data
     * true = refresh with tokens (assuming they expire)
     * false = never refresh
     * num seconds = expire after given number of seconds
     */
    userinfo_expiry: true,

    /**
     * how long to keep a session (server side) around
     * true = expire with tokenSet (if applicable)
     * false = never expire
     * num seconds = expire after given number of seconds (enables sliding window)
     *
     * sessions become a floating window *if*
     * - tokens are being refreshed
     * or
     * - userinfo being refreshed
     * or
     * - session_expiry_refresh_window is a positive number
     */
    session_expiry: true,

    /**
     * window to update the session window based on activity if
     * nothing else has updated it (ie: refreshing tokens or userinfo)
     *
     * should be a positive number less than session_expiry
     *
     * For example, if session_expiry is set to 60 seconds and session_expiry_refresh_window value is set to 20
     * then activity in the last 20 seconds (40-60) of the window will 'slide' the window
     * out session_expiry time from whenever the activity occurred
     */
    session_expiry_refresh_window: 86400,

    /**
     * will re-use the same id (ie: same cookie) for a particular client if a session has expired
     */
    session_retain_id: true,

    /**
     * if the access token is expired and a refresh token is available, refresh
     */
    refresh_access_token: true,

    /**
     * fetch userinfo and include as X-Userinfo header to backing service
     */
    fetch_userinfo: true,

    /**
     * check token validity with provider during assertion process
     */
    introspect_access_token: false,

    /**
     * which token (if any) to send back to the proxy as the Authorization Bearer value
     * note the proxy must allow the token to be passed to the backend if desired
     *
     * possible values are id_token, access_token, or refresh_token
     */
    authorization_token: "access_token"
  },
  assertions: {
    /**
     * assert the token(s) has not expired
     */
    exp: true,

    /**
     * assert the 'not before' attribute of the token(s)
     */
    nbf: true,

    /**
     * assert the correct issuer of the token(s)
     */
    iss: true,

    /**
     * custom userinfo assertions
     */
    userinfo: [],

    /**
     * custom id_token assertions
     */
    id_token: []
  },
  cookie: {
    //name: "_my_company_session", //default is _oeas_oauth_session
    //domain: "example.com", //defaults to request domain, could do sso with more generic domain
    //path: "/",
  },
  // see HEADERS.md for details
  headers: {},
}
However, here comes the part where I got stuck. The examples provided in this repo use Traefik configuration and add this annotation:
ingress.kubernetes.io/auth-url: https://eas.example.com/verify?fallback_plugin=0&config_token=PLACE_CONFIG_TOKEN_OUTPUT_HERE
However, since we are using Ambassador this is not an option, so I expect I need to mount the configuration into a secret with the env adapter, which I tried without any luck.
So i tried adding the following to my values.yaml
configTokenStores:
  primary:
    adapter: env
    options:
      cache_ttl: 3600
      var: < I HAVE NO IDEA WHAT TO ADD HERE>
configTokens:
  1: <MY CONFIG TOKEN NOT URL ENCODED >
Can you maybe help me out with this configuration :)
Then I can maybe help with writing some documentation :P
This issue is about logging in with OIDC. It seems that EAS gets confused if the login URL (application URL) contains a hash (#), e.g.:
https://myserver.dummy.org/client#/overzicht?license=cafe&place=amsterdam
If you open that URL, EAS will redirect the browser to the authorization endpoint of the OpenID provider. The redirect URL contains, amongst other things, a "redirect_uri" parameter for the callback and an encoded "state" parameter.
I suspect that the state parameter contains a truncated version of the application URL: it doesn't include the part after the hash (i.e. "#/overzicht?license=cafe&place=amsterdam").
Upon receiving the redirect, the browser opens the redirect URL but then also appends the fragment part to it:
redirect-URL + "#/overzicht?license=cafe&place=amsterdam"
Once logged in, the browser will then be redirected to the truncated application URL:
https://myserver.dummy.org/client
I hope someone can confirm that the hash is indeed causing the problem, and whether there is any workaround or fix for it.
Trying to make eas work with istio; almost got it working (I will share docs later), but I'm getting "undefined" appended to the redirect URI for some reason. Any idea what that may be?
Logs from eas (notice bookinfo.dev.k8s.hal24k.nlundefined):
{"service":"external-auth-server","level":"verbose","message":"parsed state redirect uri: {\"scheme\":\"http\",\"host\":\"bookinfo.dev.k8s.hal24k.nlundefined\",\"path\":\"\",\"reference\":\"absolute\"}"}
{"service":"external-auth-server","level":"verbose","message":"parsed request uri: {\"path\":\"/oauth/callback\",\"query\":\"__eas_oauth_handler__=authorization_callback&code=b06e7a405446796a4ea28f2ab24bf306391745767af1a809215a1017eed9da57&scope=openid%20email%20profile&state=05ee47f92a8f54c2bbdacefeab9691d14b5b2c1b80d2e8ddeeac5f0267f9ad62e4551aa461587aeaf85b295d0d585170745e083ab90d4e39e53bf5e93a456feb204a9af6bb0cea606164fdcab94bc54&session_state=5iumlRGmL2bNBJjetsd9lSLjPHrjZeT5Uwor_TfkGaM.6b769f0d8d051e6b9c81221e413f82a1\",\"reference\":\"relative\"}"}
{"service":"external-auth-server","level":"verbose","message":"parsed redirect uri: {\"scheme\":\"http\",\"host\":\"bookinfo.dev.k8s.hal24k.nlundefined\",\"path\":\"\",\"query\":\"__eas_oauth_handler__=authorization_callback&code=b06e7a405446796a4ea28f2ab24bf306391745767af1a809215a1017eed9da57&scope=openid%20email%20profile&state=05ee47f92a8f54c2bbdacefeab9691d14b5b2c1b80d2e8ddeeac5f0267f9ad62e4551aa461587aeaf85b295cd8cd6b63e8af7e76060fb34f00f21efc08d5dd4995635828e53bf5e93a456feb204a9af6bb0cea606164fdcab94bc54&session_state=5iumlRGmL2bNBJjetsd9lSLjPHrjZeT5Uwor_TfkGaM.6b769f0d8d051e6b9c81221e413f82a1\",\"reference\":\"absolute\"}"}
{"service":"external-auth-server","level":"info","message":"redirecting browser to: \"http://bookinfo.dev.k8s.hal24k.nlundefined/?__eas_oauth_handler__=authorization_callback&
The problem is somewhere around here, but I'm not sure exactly what it is.
Hi,
I am using your external-auth-server (OIDC plugin only) with NGINX and an IdP that, after login, executes a POST to: https://my.domain.com/oauth/callback?state=abc123&session_state=def567&code=my-code
This call fails with error 400 and the message "Cannot POST /oauth/callback" (it looks like the IdP is using the form_post response mode, while the callback endpoint only handles GET).
How can I solve this?
First off, thanks for creating this!
Would it be possible to publish the helm chart in a helm repo, so that a regular helm install can be used without having to download and manage files locally?
I know it's a bit of work to get charts into the stable repo, but you can self-host a helm repo right here using GitHub pages actually. Here is a tutorial on how to do it: https://medium.com/@mattiaperi/create-a-public-helm-chart-repository-with-github-pages-49b180dbb417
Is there any sample of how to configure GitLab as a provider?
It looks like GitLab also uses team ids; it would be great if we could create a sample like this one for GitLab:
https://github.com/travisghansen/external-auth-server/blob/master/contrib/generate-config-helm-traefik-github.js
Thanks for this project! It's super useful.
We got really stuck for a little while in an infinite auth loop when trying to use this from an envoy ext_authz filter (similar to your examples: istio example, #23 (comment) ).
The ultimate fix ended up being to set pathPrefix to /envoy/verify-params-header/anythingherewillwork. That makes it correctly match the express route in external-auth-server/src/server.js, line 425 (at 154218a): anything after the / on the path will work.
I'm not sure what made those previous examples work with just /envoy/verify-params-header; maybe something automatically appended to the paths?
I think a good change to make this easier to use would be to update the route to just /envoy/verify-params-header, but I'm not sure if that's how you intended it to be used.
Here's the full filter we ended up with, in case it's useful:
http_filters:
  - name: envoy.ext_authz
    config:
      failure_mode_allow: false
      http_service:
        path_prefix: /envoy/verify-params-header/anythingherewillwork
        authorization_request:
          allowed_headers:
            patterns:
              - exact: cookie
              - exact: X-Forwarded-Host
              - exact: X-Forwarded-Method
              - exact: X-Forwarded-Proto
              - exact: X-Forwarded-Uri
          headers_to_add:
            - key: "x-eas-verify-params"
              value: '{"config_token_store_id": "env_token_store", "config_token_id": "token_id_1"}'
        server_uri:
          uri: http://external-auth-server.internal-service.svc.cluster.local
          cluster: ext-authz
          timeout: 10s
      status_on_error:
        code: Forbidden
      with_request_body:
        allow_partial_message: true
        max_request_bytes: 4096
  - name: envoy.router
    typed_config: {}
This works excellently for us with OIDC and GCP Identity Platform!
Hi Travis,
I would like to use the oidc access-token introspection feature for a project, but it is not working when using a discover_url. In the eas log I see this error message:
{"message":"issuer does not support introspection","level":"error","service":"external-auth-server"}
In our oidc metadata an "introspection_endpoint" is provided, which is correct according to https://tools.ietf.org/html/rfc8414.
I assume it is caused by this line:
https://github.com/travisghansen/external-auth-server/blob/master/src/plugin/oauth/index.js#L1254
I guess it should be
if (!issuer.metadata.introspection_endpoint) {
Instead of:
if (!issuer.metadata.token_introspection_endpoint) {
Creating this issue to simply provide a forum for discussing the initial setup.
It's been a long time since the last issue =D
We did another hack to your source code and want to discuss whether something similar to our hack may be turned into a feature. I will try to explain and would like to hear your thoughts; I know it may be too specific to our use case, but let's see.
Last successful setup:
- a token_id per tenant which has to be retrieved from redis. Each filter is applied to the istio sidecar, so to a specific app, not in general to all publicly exposed services behind the ingress.
- a single redirect_url for all tokens
Ignoring the why, we thought it would be great to retrieve the token dynamically from the request subdomain (or URL path, for that matter). Then we could use a single filter at the ingress level to handle different tokens (token name = tenant name), instead of one on each specific sidecar of a service.
So, my colleague added this piece of code to server.js
:
if (easVerifyParams.config_token_regex) {
  // Derive the config_token_id from the request's Host header:
  // the first capture group of the supplied regex is the token name.
  let matches = req.get("host").match(new RegExp(easVerifyParams.config_token_regex));
  if (matches && matches[1]) {
    easVerifyParams.config_token_id = matches[1];
  } else {
    externalAuthServer.logger.error("config_token_regex: unexpected number of matches (%j)", matches);
  }
}
and then we changed the envoyfilter config to this:
headers_to_add:
  - key: x-eas-verify-params
    value: '{"config_token_store_id":"primary", "config_token_regex": ".*\\.(.*)\\.k8s.*"}'
Basically, we use the regexp to extract the token name from the URL. You can then split applications (or tenants, or users) with a single filter, using the matching part of the URL as the config_token_id.
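To illustrate, applying the pattern from the filter config to one of our tenant hosts extracts the tenant name:

```javascript
// The regex from the x-eas-verify-params value, as a JS string
// (the JSON double-escaping reduces to a single backslash here).
const configTokenRegex = ".*\\.(.*)\\.k8s.*";
const host = "jupyterhub.tenant-354.k8s.dimension.ws";

// The first capture group yields the tenant name, which then
// serves as the config_token_id.
const matches = host.match(new RegExp(configTokenRegex));
console.log(matches[1]); // "tenant-354"
```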
The last important bit is that we had a single redirect URL for all tokens, like https://istio-eas.hal24k.nl/oauth/callback. That caused a problem: on the initial request the "starting verify pipeline" logic worked fine, because the request host was https://jupyterhub.tenant-354.k8s.dimension.ws/hub/spawn and we could capture the regexp group. On the OAuth callback, however, the host was https://istio-eas.hal24k.nl/oauth/callback, so "starting verify pipeline" could not extract a config_token_id and the rest would fail. So, while configuring each token, we use a specific redirect URL per tenant and point all of them to the eas service. The only reason for that is to get the tenant name into the host header.
So, the current working setup:
- a token_id_regexp
- a redirect_url per tenant that includes the tenant name in the host (but it could be the url path as well)
Please let us know if you find this interesting, and maybe you have a better idea of how to set this up. So this could be another case for the config token logic: using a regexp.
I am trying to set up a new EAS on k8s. On this cluster I already have redis up and running. When I run the helm install with:
--set redis-ha.enabled=true \
--set redis-ha.auth=true \
--set redis-ha.redisPassword=[...] \
--set storeOpts.store=ioredis \
--set storeOpts.password=[...] \
--set storeOpts.name=mymaster \
--set storeOpts.sentinels[0].host=redis \
--set storeOpts.sentinels[0].port=26379 \
--set storeOpts.keyPrefix="eas:" \
Redis is installed as part of the Helm chart. How do I point EAS to my existing redis without installing Redis together with EAS?
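A common pattern with umbrella charts, offered only as a sketch (I have not verified the eas chart's value names beyond those shown in the question): disable the bundled redis-ha dependency via its enabled flag and keep storeOpts pointed at the pre-existing Redis/Sentinel service.

```shell
# Sketch only: assumes the chart treats redis-ha.enabled as the standard
# Helm dependency condition; release and chart names are placeholders.
helm install eas ./chart \
  --set redis-ha.enabled=false \
  --set storeOpts.store=ioredis \
  --set storeOpts.password=[...] \
  --set storeOpts.name=mymaster \
  --set storeOpts.sentinels[0].host=redis \
  --set storeOpts.sentinels[0].port=26379 \
  --set storeOpts.keyPrefix="eas:"
```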