
GCE Metadata Server Emulator

Background

This script acts as GCE's internal metadata server.

It returns a live access_token that can be used directly by Application Default Credentials transparently.

For example, you can use ADC with metadata or ComputeCredentials on your laptop:

#!/usr/bin/python

from google.cloud import storage
import google.auth

import google.auth.compute_engine
import google.auth.transport.requests

## with ADC
credentials, project = google.auth.default()    
client = storage.Client(credentials=credentials)
buckets = client.list_buckets()
for bkt in buckets:
  print(bkt)

## direct
creds = google.auth.compute_engine.Credentials()
session = google.auth.transport.requests.AuthorizedSession(creds)
r = session.get('https://www.googleapis.com/userinfo/v2/me').json()
print(str(r))

This is useful for testing any script or code locally that may need to contact GCE's metadata server for custom, user-defined variables or access_tokens.

Another use case is to verify how Application Default Credentials behave while running in a local docker container. A locally running docker container will not have access to GCE's metadata server, but by bridging your container to the emulator you are basically allowing GCP API access directly from within a container on your local workstation (vs. running the code comprising the container directly on the workstation and relying on gcloud credentials rather than metadata).

You can also run this as a service inside a kubernetes cluster and give any other pod virtual access to the GCP metadata server without even running in GCP.

The metadata server supports additional endpoints that simulate other instance attributes normally only visible inside a GCE instance like instance_id, disks, network-interfaces and so on.


This is not an officially supported Google product


The script performs the following:

  • returns the access_token provided by either
    • the serviceAccount JSON file you specify
    • a workload identity federation configuration
    • service account impersonation
    • a static value from a provided environment variable
    • a service account RSA key on an HSM or Trusted Platform Module (TPM)
  • returns an id_token
  • returns project attributes (project-id, numeric-project-id)
  • returns instance attributes (instance-id, tags, network-interfaces, disks)

The endpoints that are exposed are:

r.Handle("/computeMetadata/v1/project/project-id")
r.Handle("/computeMetadata/v1/project/numeric-project-id")
r.Handle("/computeMetadata/v1/project/attributes/{key}")

r.Handle("/computeMetadata/v1/instance/service-accounts/")
r.Handle("/computeMetadata/v1/instance/service-accounts/{acct}/")
r.Handle("/computeMetadata/v1/instance/service-accounts/{acct}/{key}")
r.Handle("/computeMetadata/v1/instance/network-interfaces/{index}/access-configs/{index2}/{key}")
r.Handle("/computeMetadata/v1/instance/attributes/{key}")
r.Handle("/computeMetadata/v1/instance/{key}")
r.Handle("/")


Note, the real metadata server supports some additional query parameters which are either partially implemented or not implemented here.

You are free to expand on the endpoints surfaced here; please feel free to file a PR!
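For quick manual testing, the routes above can be exercised by building request URLs and the required header in code. A minimal sketch (the localhost:8080 host:port is an assumption matching the emulator's default port):

```python
# Build metadata request URLs for the routes listed above.
# NOTE: localhost:8080 is an assumption matching the emulator's default port.
METADATA_HOST = "localhost:8080"
BASE = f"http://{METADATA_HOST}/computeMetadata/v1"
HEADERS = {"Metadata-Flavor": "Google"}   # required by the real server and the emulator

def metadata_url(path: str) -> str:
    """Return the full URL for a relative metadata path like 'project/project-id'."""
    return f"{BASE}/{path.lstrip('/')}"

print(metadata_url("project/project-id"))
# -> http://localhost:8080/computeMetadata/v1/project/project-id
```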

  • images/metadata_proxy.png

Usage

This script runs a basic webserver that responds as the Google Compute Engine metadata server would. A local webserver runs on a non-privileged port (default: 8080) and uses a serviceAccountFile, service account impersonation or GCP workload federation to return a GCP access_token, an id_token and, optionally, live project user-defined metadata.

You can run the emulator:

  1. directly on your laptop
  2. within a docker container running locally.
  3. as a kubernetes service
  4. and with some difficulty, using a link-local address (169.254.169.254)

Configuration

The metadata server reads a configuration file for static values and uses a service account to dynamically get an access_token and id_token.

The basic config file format roughly maps the URI path of the actual metadata server, and the emulator uses these values to populate responses.

For example, the instance id, project id, service account email and other values are read from the config file; see config.json:

{
  "computeMetadata": {
    "v1": {
      "instance": {
        "id": 5775171277418378000,
        "serviceAccounts": {
          "default": {
            "aliases": [
              "default"
            ],
            "email": "[email protected]",
            "scopes": [
              "https://www.googleapis.com/auth/cloud-platform",
              "https://www.googleapis.com/auth/userinfo.email"
            ]
          }
        }
      },
      "oslogin": {},
      "project": {
        "numericProjectId": 708288290784,
        "projectId": "your-project"
      }
    }
  }
}

The fields are basically a JSON representation of what the real metadata server returns recursively:

$ curl -v -H 'Metadata-Flavor: Google' http://metadata/computeMetadata/v1/?recursive=true | jq '.'

Any requests for an access_token or an id_token are dynamically generated using the credential provided. The scopes for any token use the values set in the config file.
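To see how the nested JSON corresponds to endpoint paths, note that camelCase keys in the config map to hyphenated path segments (projectId → project-id). A rough sketch of that mapping (the hyphenation rule is inferred from the examples above, not taken from the server source):

```python
import re

def to_segment(key: str) -> str:
    """camelCase config keys map to hyphenated path segments (projectId -> project-id)."""
    return re.sub(r"(?<!^)(?=[A-Z])", "-", key).lower()

def flatten(node, prefix=""):
    """Yield (path, value) pairs, mirroring how ?recursive=true output maps to endpoints."""
    if isinstance(node, dict):
        for k, v in node.items():
            yield from flatten(v, f"{prefix}/{to_segment(k)}")
    else:
        yield prefix.lstrip("/"), node

cfg = {"project": {"projectId": "your-project", "numericProjectId": 708288290784}}
print(dict(flatten(cfg)))
# -> {'project/project-id': 'your-project', 'project/numeric-project-id': 708288290784}
```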

Usage

The following steps detail how you can run the emulator on your laptop.

Option                      Description
-configFile                 configuration file (default: config.json)
-port                       port to listen on (default: :8080)
-serviceAccountFile         path to the serviceAccount JSON key file
-impersonate                use impersonation
-federate                   use workload identity federation
-tpm                        use a TPM
-persistentHandle           TPM persistentHandle
-domainsocket               listen on a unix domain socket
GCE_METADATA_HOST           environment variable for SDK libraries to point to the metadata server (as host:port)
GOOGLE_PROJECT_ID           static environment variable for the PROJECT_ID to return
GOOGLE_NUMERIC_PROJECT_ID   static environment variable for the numeric project id to return
GOOGLE_ACCESS_TOKEN         static environment variable for the access_token to return
GOOGLE_ID_TOKEN             static environment variable for the id_token to return

With JSON ServiceAccount file

Create a GCP Service Account JSON file (you should strongly prefer using impersonation; see below):

gcloud iam service-accounts create metadata-sa

You can either create a key that represents this service account and download it locally

gcloud iam service-accounts keys create metadata-sa.json --iam-account=metadata-sa@$GOOGLE_PROJECT_ID.iam.gserviceaccount.com

or, preferably, assign your user impersonation capabilities on it (see the section below).

You can now assign IAM permissions to the service account for whatever resources it may need to access:

mkdir certs/
mv metadata-sa.json certs

go run server.go -logtostderr --configFile=config.json \
  -alsologtostderr -v 5 \
  -port :8080 \
  --serviceAccountFile certs/metadata-sa.json 

With Impersonation

If you use impersonation, the serviceAccountEmail and scopes are taken from the config file's default service account.

First setup impersonation for your user account:

gcloud iam service-accounts \
  add-iam-policy-binding metadata-sa@$GOOGLE_PROJECT_ID.iam.gserviceaccount.com \
  --member=user:`gcloud config get-value core/account` \
  --role=roles/iam.serviceAccountTokenCreator

then,

 go run server.go -logtostderr \
     -alsologtostderr -v 5  -port :8080 \
     --impersonate --configFile=config.json

With Workload Federation

For workload identity federation, you reference the generated credentials file as usual; just set the default env-var and run:

export GOOGLE_APPLICATION_CREDENTIALS=`pwd`/sts-creds.json
go run server.go -logtostderr --configFile=config.json \
  -alsologtostderr -v 5 \
  -port :8080 --federate 

To use this mode, you must first set up the Federation and then set the environment variable pointing to the ADC file, where the sts-creds.json file is the one you generated (for example, by following an OIDC federation tutorial).

For example, if the workload federation user is mapped to

principal://iam.googleapis.com/projects/1071284184436/locations/global/workloadIdentityPools/oidc-pool-1/subject/[email protected]

then that identity should have the binding to use the metadata service account:

# enable federation for principal://
gcloud iam service-accounts add-iam-policy-binding metadata-sa@$PROJECT_ID.iam.gserviceaccount.com \
    --role roles/iam.workloadIdentityUser \
    --member "principal://iam.googleapis.com/projects/$GOOGLE_NUMERIC_PROJECT_ID/locations/global/workloadIdentityPools/oidc-pool-1/subject/[email protected]"

Ultimately, the sts-creds.json will look like this (note: the service_account_impersonation_url value is not present):

{
  "type": "external_account",
  "audience": "//iam.googleapis.com/projects/1071284184436/locations/global/workloadIdentityPools/oidc-pool-1/providers/oidc-provider-1",
  "subject_token_type": "urn:ietf:params:oauth:token-type:jwt",
  "token_url": "https://sts.googleapis.com/v1/token",
  "credential_source": {
    "file": "/tmp/oidccred.txt"
  }
}

where /tmp/oidccred.txt contains the original OIDC token.

With Trusted Platform Module (TPM)

If the service account private key is bound inside a Trusted Platform Module (TPM), the metadata server can use that key to issue an access_token or an id_token.

Note: not all platforms support this mode. The underlying go-tpm library is only supported on a few targets (linux/darwin + amd64, arm64). If you need support for other platforms, one option is to comment out the TPM sections, remove the library bindings and compile.

Before using this mode, the key must be sealed into the TPM and surfaced as a persistentHandle. This can be done in a number of ways. Basically, you can:

  • A. download a Google ServiceAccount's JSON file and embed the private part into the TPM, or
  • B. generate a key on the TPM and then import the public part to GCP, or
  • C. remotely seal the service account's RSA private key, encrypt it with the remote TPM's Endorsement Key, and load it.

B is the most secure, but C allows multiple TPMs to use the same key.

Anyway, once the RSA key is present as a handle, start the metadata server using the --tpm flag and set the --persistentHandle= value.

You will also need to set a number of other variables similar to the service account JSON file:

go run server.go -logtostderr --configFile=config.json \
  -alsologtostderr -v 5 \
  -port :8080 \
  --tpm --persistentHandle=0x81008000 

We're using a persistentHandle to save/load the key; a TODO is to load the key from files via the TPM's context tree.

Final note: if you run kubernetes on-prem or outside of GCP managed environments, you can also use a sealed key for GCP access.

While not included in this repo, if you provision a service account's key into the k8s node, you can start the metadata server as shown at the end of this repo; critically, the key it uses can be derived from the TPM itself.

To do this, you would use a combination of the samples shown here: after attestation, you seal an RSA key and then run the metadata server as a pod as described in the section titled Running as Kubernetes Service.

Startup

On startup, you will see something like:

  • images/setup_2.png

Test access to the metadata server

In a new window, run

curl -v -H 'Metadata-Flavor: Google' --connect-to metadata.google.internal:80:127.0.0.1:8080 \
   http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token

>
< HTTP/1.1 200 OK
< Content-Type: application/json
< Metadata-Flavor: Google
< Server: Metadata Server for VM
< X-Frame-Options: 0
< X-Xss-Protection: 0
< Date: Mon, 26 Aug 2019 21:50:09 GMT
< Content-Length: 190
<
{"access_token":"ya29.c.EltxByD8vfv2ACageADlorFHWd2ZUIgGdU-redacted","expires_in":3600,"token_type":"Bearer"}

Please note the scopes used for this token are read in from the declared values in the config file.
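A client consuming this response would typically cache the token until shortly before expires_in elapses. A minimal sketch (the 60-second early-refresh margin is an illustrative choice, not something the server mandates):

```python
import json, time

# Sample response shape from the /token endpoint above (token value redacted).
resp = '{"access_token":"ya29.REDACTED","expires_in":3600,"token_type":"Bearer"}'
tok = json.loads(resp)

# Refresh 60s early so in-flight requests never carry an expired token.
refresh_at = time.time() + tok["expires_in"] - 60

print(tok["token_type"], tok["expires_in"])
# -> Bearer 3600
```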

Using Google Auth clients

GCP Auth libraries support overriding the host/port for the metadata server.

Each language library has its own nuances, so please read the sections below.

These overrides are not documented, but you can generally just set the value of GCE_METADATA_HOST.

If you intend to use the samples in the examples/ folder, add some viewer permission to list GCS buckets (because that is what all the samples in the examples/ folder demonstrate):

# note roles/storage.admin is over-permissioned...we only need storage.buckets.list on the project...
gcloud projects add-iam-policy-binding $GOOGLE_PROJECT_ID  \
     --member="serviceAccount:metadata-sa@$GOOGLE_PROJECT_ID.iam.gserviceaccount.com"  \
     --role=roles/storage.admin

then usually just,

export GCE_METADATA_HOST=localhost:8080

and use this emulator. The examples/ folder shows several clients taken from gcpsamples.

Remember to run gcloud auth application-default revoke in any new client library test to make sure your residual creds are not used.
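The override pattern the SDKs implement looks roughly like this (a sketch; each SDK's exact variable handling differs slightly):

```python
import os

def metadata_token_url() -> str:
    """Honor GCE_METADATA_HOST, falling back to the real metadata hostname."""
    host = os.environ.get("GCE_METADATA_HOST", "metadata.google.internal")
    return (f"http://{host}/computeMetadata/v1/instance/"
            "service-accounts/default/token")

os.environ["GCE_METADATA_HOST"] = "localhost:8080"
print(metadata_token_url())
# -> http://localhost:8080/computeMetadata/v1/instance/service-accounts/default/token
```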

Python

  export GCE_METADATA_HOST=localhost:8080
  export GCE_METADATA_IP=127.0.0.1:8080

  virtualenv env
  source env/bin/activate
  pip3 install -r requirements.txt

  python3 main.py

Java

  export GCE_METADATA_HOST=localhost:8080

  mvn clean install exec:java -q

Go

  export GCE_METADATA_HOST=localhost:8080

  go run main.go

Node

  export GCE_METADATA_HOST=localhost:8080

  npm i
  node app.js

Dotnet

  export GCE_METADATA_HOST=localhost:8080

  dotnet restore
  dotnet run

Note, Google.Api.Gax.Platform.Instance().ProjectId requests the full recursive path

  • images/setup_5.png

gcloud

export GCE_METADATA_ROOT=localhost:8080

$ gcloud config list
[component_manager]
disable_update_check = True
[core]
account = [email protected]
project = mineral-minutia-820

gcloud uses a different env-var (GCE_METADATA_ROOT), but if you want to use gcloud auth application-default print-access-token, you need to also set GCE_METADATA_HOST and GCE_METADATA_IP.

IDToken

The following endpoint shows how to acquire an id_token:

curl -H "Metadata-Flavor: Google" --connect-to metadata.google.internal:80:127.0.0.1:8080 \
'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=https://foo.bar'

The id_token will be signed by Google and issued on behalf of the service account you used:

{
  "alg": "RS256",
  "kid": "178ab1dc5913d929d37c23dcaa961872f8d70b68",
  "typ": "JWT"
}.
{
  "aud": "https://foo.bar",
  "azp": "metadata-sa@$PROJECT.iam.gserviceaccount.com",
  "email": "[email protected]",
  "email_verified": true,
  "exp": 1603550806,
  "iat": 1603547206,
  "iss": "https://accounts.google.com",
  "sub": "117605711420724299222"
}

Unlike the real GCE metadata server, this will NOT return the full identity document or license info (&format=[FORMAT]&licenses=[LICENSES]).
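To inspect claims like those above during local testing, you can decode the JWT payload without verifying it (a sketch only; in production always verify the signature):

```python
import base64, json

def unverified_claims(jwt: str) -> dict:
    """Decode (WITHOUT verifying) the payload segment of a JWT."""
    payload = jwt.split(".")[1]
    payload += "=" * (-len(payload) % 4)   # restore stripped base64url padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Demo with a locally built, unsigned token -- for illustration only.
seg = lambda b: base64.urlsafe_b64encode(b).rstrip(b"=").decode()
header = seg(b'{"alg":"RS256"}')
body = seg(b'{"aud":"https://foo.bar"}')
demo = header + "." + body + ".sig"
print(unverified_claims(demo)["aud"])
# -> https://foo.bar
```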

Other Runtimes

Run with containers

To access the local emulator from containers:

cd examples/container
docker build -t myapp .
docker run -t --net=host -e GCE_METADATA_HOST=localhost:8080  myapp

You can also run the server itself directly as a container:

docker run \
  -v `pwd`/certs/:/certs/ \
  -p 8080:8080 \
  -t salrashid123/gcemetadataserver \
  -serviceAccountFile /certs/metadata-sa.json \
  -logtostderr -alsologtostderr -v 5 \
  -port :8080 

Running as Kubernetes Service

You can run the emulator as a kubernetes Service and reference it from other pods by injecting the GCE_METADATA_HOST environment variable into the containers:

If you want to test this with minikube locally:

## first create the base64-encoded form of the service account key
cat certs/metadata-sa.json | base64  --wrap=0 -
cd examples/kubernetes

then edit metadata.yaml and replace the values:

---
apiVersion: v1
kind: Secret
metadata:
  name: gcp-svc-account
type: Opaque
data:
  metadata-sa.json: "replace with contents of cat certs/metadata-sa.json | base64  --wrap=0 -"

Finally test

minikube start
kubectl apply -f .
minikube dashboard --url
minikube service app-service --url

$ curl -s `minikube service app-service --url`

Number of Buckets: 62

Needless to say, the metadata Service should be accessed only from authorized pods.

Static environment variables

If you do not have access to a certificate file or would like to specify static token values via env-vars, the metadata server supports the following environment variables as substitutions. Once you set these environment variables, the service will not look up anything using the service account JSON file (even if specified).

export GOOGLE_PROJECT_ID=`gcloud config get-value core/project`
export GOOGLE_NUMERIC_PROJECT_ID=`gcloud projects describe $GOOGLE_PROJECT_ID --format="value(projectNumber)"`
export GOOGLE_ACCESS_TOKEN="some_static_token"
export GOOGLE_ID_TOKEN="some_id_token"

for example,

go run server.go -logtostderr  \
   -alsologtostderr -v 5 \
   -port :8080

or

docker run \
  -p 8080:8080 \
  -e GOOGLE_ACCESS_TOKEN=$GOOGLE_ACCESS_TOKEN \
  -e GOOGLE_NUMERIC_PROJECT_ID=$GOOGLE_NUMERIC_PROJECT_ID \
  -e GOOGLE_PROJECT_ID=$GOOGLE_PROJECT_ID \
  -e GOOGLE_ACCOUNT_EMAIL=$GOOGLE_ACCOUNT_EMAIL \
  -e GOOGLE_ID_TOKEN=$GOOGLE_ID_TOKEN \
  -t salrashid123/gcemetadataserver \
  -port :8080 -logtostderr -alsologtostderr -v 5

curl -v -H "Metadata-Flavor: Google" \
  --connect-to metadata.google.internal:80:127.0.0.1:8080 \
  http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token

some_static_token
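The precedence described above (static env-vars win over any key file) can be sketched as follows (illustrative logic only, not the server's actual Go source):

```python
def pick_token_source(env: dict) -> str:
    """A static GOOGLE_ACCESS_TOKEN takes precedence; otherwise fall back to
    the service account JSON file (illustrates the precedence described above)."""
    if env.get("GOOGLE_ACCESS_TOKEN"):
        return "static-env"
    return "service-account-file"

print(pick_token_source({"GOOGLE_ACCESS_TOKEN": "some_static_token"}))  # -> static-env
print(pick_token_source({}))                                            # -> service-account-file
```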

Extending the sample

You can extend this sample for any arbitrary metadata you are interested in emulating (e.g., disks, hostname, etc). Simply add the routes to the webserver and handle the responses accordingly. It is recommended to view the request-response format directly on the real metadata server to compare against.
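The server itself is Go, but the "register a route, return what the real metadata server would" pattern can be sketched in Python for illustration (the hostname route and value below are made up for the demo):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading, urllib.request

# A made-up route and value; the real emulator would return whatever the
# actual metadata server does for the path you add.
ROUTES = {"/computeMetadata/v1/instance/hostname": "myhost.c.your-project.internal"}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.headers.get("Metadata-Flavor") != "Google":
            self.send_error(403)
            return
        body = ROUTES.get(self.path)
        if body is None:
            self.send_error(404)
            return
        data = body.encode()
        self.send_response(200)
        self.send_header("Metadata-Flavor", "Google")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

    def log_message(self, *args):   # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/computeMetadata/v1/instance/hostname",
    headers={"Metadata-Flavor": "Google"})
hostname = urllib.request.urlopen(req).read().decode()
server.shutdown()
print(hostname)
# -> myhost.c.your-project.internal
```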

Building with Kaniko

The container image is built using kaniko with the --reproducible flag enabled:

export TAG=...
docker run    -v `pwd`:/workspace -v $HOME/.docker/config.json:/kaniko/.docker/config.json:ro    -v /var/run/docker.sock:/var/run/docker.sock   \
      gcr.io/kaniko-project/executor@sha256:034f15e6fe235490e64a4173d02d0a41f61382450c314fffed9b8ca96dff66b2  \
      --dockerfile=Dockerfile \
      --reproducible \
      --destination "docker.io/salrashid123/gcemetadataserver:$TAG" \
      --context dir:///workspace/

syft packages docker.io/salrashid123/gcemetadataserver:$TAG
skopeo copy  --preserve-digests  docker://docker.io/salrashid123/gcemetadataserver:$TAG docker://docker.io/salrashid123/gcemetadataserver:latest

Using Link-Local address

GCE's metadata server's IP address on GCE is a special link-local address: 169.254.169.254. Certain application default credential libraries for google cloud may reference the metadata server by this IP address, so the emulator supports it as well.

If you use the link-local address, do not set GCE_METADATA_HOST

If you really want to use the link-local address, you have two options: use iptables or socat. Both require some setup as root.

First add an entry to /etc/hosts:

169.254.169.254       metadata metadata.google.internal

for socat

create an IP alias:

sudo ifconfig lo:0 169.254.169.254 up

relay using socat:

sudo apt-get install socat

sudo socat TCP4-LISTEN:80,fork TCP4:127.0.0.1:8080

for iptables

configure iptables:

iptables -t nat -A OUTPUT -p tcp -d 169.254.169.254 --dport 80 -j DNAT --to-destination 127.0.0.1:8080

Finally, access the endpoint via IP or alias over port :80

curl -v -H 'Metadata-Flavor: Google' \
     http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token

If you don't mind running the program on port :80 directly, you can skip the socat and iptables steps and simply start the emulator on the default http port after adding the /etc/hosts entry.

Using Domain Sockets

You can also start the metadata server to listen on a unix domain socket.

To do this, simply specify --domainsocket= flag pointing to some file (eg --domainsocket=/tmp/metadata.sock). Once you do this, all tcp listeners will be disabled.

To access it using curl, use its --unix-socket flag:

curl -v --unix-socket /tmp/metadata.sock \
 -H 'Metadata-Flavor: Google' \
   http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token

While this works fine with things like curl, the main issue with using domain sockets is that the GCE_METADATA_HOST variable assumes a tcp listener, and it's awkward to do all the overrides for a GCP SDK to "just use" a domain socket.

If you really want to use unix sockets, you can find an example of how to do this in the examples/goapp_unix folder.

Anyway, just for fun, you can relay a tcp socket to the domain socket using socat (or vice versa), but then you're back to where you started with a tcp listener:

socat TCP-LISTEN:8080,fork,reuseaddr UNIX-CONNECT:/tmp/metadata.sock
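For completeness, this is what HTTP over a unix domain socket looks like from stdlib Python (the server here is a tiny stand-in, not the emulator, and the canned response is fake):

```python
import os, socket, socketserver, tempfile, threading

SOCK = os.path.join(tempfile.mkdtemp(), "metadata.sock")

class Handler(socketserver.StreamRequestHandler):
    def handle(self):
        # Consume the request line and headers, then send a canned (fake) reply.
        while self.rfile.readline().strip():
            pass
        body = b'{"access_token":"fake"}'
        self.wfile.write(b"HTTP/1.0 200 OK\r\nContent-Length: "
                         + str(len(body)).encode() + b"\r\n\r\n" + body)

server = socketserver.UnixStreamServer(SOCK, Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

client = socket.socket(socket.AF_UNIX)
client.connect(SOCK)
client.sendall(b"GET /computeMetadata/v1/instance/service-accounts/default/token "
               b"HTTP/1.0\r\nMetadata-Flavor: Google\r\n\r\n")
reply = b""
while chunk := client.recv(4096):
    reply += chunk
client.close()
server.shutdown()

print(reply.split(b"\r\n\r\n", 1)[1].decode())
# -> {"access_token":"fake"}
```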

Bazel Build

If you want to build the server using bazel (e.g., for deterministic builds):

## generate dependencies
bazel run :gazelle -- update-repos -from_file=go.mod -prune=true -to_macro=repositories.bzl%go_repositories

## run
bazel run :main -- --configFile=`pwd`/config.json -alsologtostderr -v 5 -port :8080 --serviceAccountFile=`pwd`/certs/metadata-sa.json

## to build the image
bazel   build  :tar-oci-index
  ## oci image at bazel-bin/tar-oci-index/tarball.tar

Testing

A lot to do here, right... that's just life:

$ go test -v 
=== RUN   TestBasePathRedirectHandler
--- PASS: TestBasePathRedirectHandler (0.00s)
=== RUN   TestProjectIDHandler
--- PASS: TestProjectIDHandler (0.00s)
=== RUN   TestAccessTokenHandler
--- PASS: TestAccessTokenHandler (0.00s)
PASS
ok  	github.com/salrashid123/gce_metadata_server	0.045s

Contributors

bmenasha, mike-m-hsbc, mikemoore63, salrashid123, smaftoul

Issues

Support curl --data-urlencode

When working with your metadata service, I've noticed that when using curl with the option --data-urlencode, it doesn't behave correctly and your mock responds:

< HTTP/1.1 400 Bad Request
< Content-Type: text/plain; charset=utf-8
< Metadata-Flavor: Google
< Server: Metadata Server for VM
< X-Content-Type-Options: nosniff
< X-Frame-Options: 0
< X-Xss-Protection: 0
< Date: Sat, 27 Mar 2021 00:06:19 GMT
< Content-Length: 49
< 
Bad Request
* Connection #0 to host 127.0.0.1 left intact
non-empty audience parameter required

Whereas when I'm talking directly to the google metadata service I get a proper response. My command line is as follows: curl -X GET --connect-to metadata.google.internal:80:127.0.0.1:8080 -v -H "Metadata-Flavor: Google" --data-urlencode "audience=https://vault/my-role" --data-urlencode "format=full" http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity

I receive a proper JWT token when using this command (minus the --connect-to portion because I don't need to override)

Using your mock, I have to actually append it to the URL: curl -X GET --connect-to metadata.google.internal:80:127.0.0.1:8080 -v -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity\?audience\=https://vault/my-role\&format\=full

The escapes are there because the CLI needs to escape shell characters.

Idea: Support for Workload Identity Federation

I wonder if this server could be combined with the federated TokenSource providers in salrashid123/oauth2, to enable workloads running outside of Google Cloud to authenticate with a federated identity while using the ordinary Google Cloud libraries, as described in Accessing Resources from AWS.

The idea would be to have the metadata server serve up the access token produced by a configured TokenSource. For example, this would allow a pod running in EKS (with this server as a sidecar) to use its KSA to assume an AWS IAM role (as described here), and then impersonate a GSA, without any code changes. Does this seem reasonable?

Implement 301 redirect for paths missing trailing /

We see a 301 redirect if we request valid paths missing a trailing '/' as shown below:

curl  -v -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/service-accounts
* Hostname was NOT found in DNS cache
*   Trying 169.254.169.254...
* Connected to metadata.google.internal (169.254.169.254) port 80 (#0)
> GET /computeMetadata/v1/instance/service-accounts HTTP/1.1
> User-Agent: curl/7.38.0
> Host: metadata.google.internal
> Accept: */*
> Metadata-Flavor: Google
> 
< HTTP/1.1 301 Moved Permanently
< Metadata-Flavor: Google
< Location: http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/
< Date: Mon, 09 Jan 2017 19:19:20 GMT
< Content-Type: text/html
* Server Metadata Server for VM is not blacklisted
< Server: Metadata Server for VM
< Content-Length: 47
< X-XSS-Protection: 1; mode=block
< X-Frame-Options: SAMEORIGIN
< 
/computeMetadata/v1/instance/service-accounts/

Container crash "Unable to verify OIDC token oidc: expected audience..."

I have two separate Cloud Run instances that I'm testing (not at exactly the same time per se). When I start up the metadata server container for the first time, I'm able to fetch an OAuth token (to authenticate service-to-service). But when I issue a new request with a different audience the apogiatzis/livereloading container crashes.

For example, I have two cloud run URL's (anonymizing for proprietary reasons):

Steps to reproduce

In docker-compose.yml I've defined:

version: '3'
services:
  proxy:
    build: nginx:mainline
    restart: always

  metadata.google.internal:
    image: salrashid123/gcemetadataserver
    container_name: metadata
    command: "-port :80
              --serviceAccountFile /conf/sa.json
              --tokenScopes https://www.googleapis.com/auth/cloud-platform
             "
    volumes:
      - .secret/:/conf/

In one window, run both containers, i.e. docker-compose up then in a new window, SSH into the proxy container via docker-compose exec proxy bash this is what happens:

root@301894f65b61:/# curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=https://app1-abcde12345.a.run.app
[LONG KEY OUTPUT HERE]

root@301894f65b61:/# curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=https://app2-vwxyz98765.a.run.app
curl: (52) Empty reply from server

Error output in the main docker-compose up window:

2020/06/18 03:39:48 salrashid123/oauth2/google: Unable to verify OIDC token oidc: expected audience "https://app1-abcde12345.a.run.app" got ["https://app2-vwxyz98765.a.run.app"]

Support generating required material to operate

In your documentation, you reference going to Google Cloud Platform and generating an actual service account. Not all situations require this kind of effort, nor would I want to actually use valid generated material in my mock.

It would be better if the service, when not provided ENV VARS or a service account json, to simply generate random material, or have a method to generate and run with random material.

I have tried supplying the ENV VARs with completely random material, however I got HTTP/500 errors so I'm presuming there's some kind of validation inside your mock.

Given the following format:

{
  "type": "service_account",
  "project_id": "[PROJECT]",
  "private_key_id": "[PRIVATE_KEY_ID]",
  "private_key": "-----BEGIN PRIVATE KEY-----\n[MATERIAL]\n...\n[MATERIAL]=\n-----END PRIVATE KEY-----\n",
  "client_email": "[CLIENT_NAME]@[PROJECT].iam.gserviceaccount.com",
  "client_id": "[CLIENT_ID]",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/[CLIENT_NAME]%[PROJECT].iam.gserviceaccount.com"
}

Surely something could be automatically generated so that we wouldn't require users to generate valid credential material right? The only other thing you'd need is a valid project id, and we'd be able to start the mock service. Is there a reason why this is required at all -- would think we would be able to run the mock without any of this period?

static string after refactor

it's not clear how to pass a static string as the service token after the go refactor. is this something that is already implemented in the configuration of the google cloud go library's JSON file?

Token refresh causes concurrent access crash

Noticed that if the access_token in the local gcloud is stale and needs refreshing, a call to the

/computeMetadata/v1/instance/service-accounts/default/token

endpoint also causes the local gcloud instance to also ask for

/computeMetadata/v1/project/numeric-project-id

In the trace below, I only asked for the token, which in turn called gcloud... which seems to itself ask for the numeric-project-id almost at precisely the same time (see socat output). Since the local gcloud already thinks it's running on GCE, it asks the metadata emulator itself again... (or at least that's what I think is going on)

The concurrent request somehow blocks and causes a server crash.

One temp workaround is to launch via gunicorn (two workers default) and that seems to work much better.

TODO: figure out what is actually happening...

python gce_metadata_server.py 
2016-08-17 18:23:36,478 - INFO -  * Running on http://0.0.0.0:18080/
2016-08-17 18:23:40,601 - INFO - Requesting ServiceAccount : default/token
2016-08-17 18:23:40,602 - INFO - token not found in cache, refreshing..
2016-08-17 18:23:41,636 - INFO - Refreshing access_token
2016-08-17 18:23:41,861 - INFO - Display format "json()".
2016-08-17 18:23:41,865 - INFO - access_token: <<REDACTED>
2016-08-17 18:23:41,971 - INFO - 127.0.0.1 - - [17/Aug/2016 18:23:41] "GET /computeMetadata/v1/instance/service-accounts/default/token HTTP/1.1" 200 -
2016-08-17 18:23:41,974 - INFO - Requesting numeric project_id: 
2016-08-17 18:23:41,974 - INFO - numeric-project-id not found, refreshing..
Your active configuration is: [default]

2016-08-17 18:23:41,988 - INFO - Display format "config json()".
2016-08-17 18:23:42,046 - INFO - Display format "
          table(
            projectId:sort=101,
            name,
            projectNumber
          )
         value(projectNumber)".
2016-08-17 18:23:43,255 - INFO - Returning numeric project_id:REDACTED

2016-08-17 18:23:43,256 - INFO - 127.0.0.1 - - [17/Aug/2016 18:23:43] "GET /computeMetadata/v1/project/numeric-project-id HTTP/1.1" 200 -
----------------------------------------
Exception happened during processing of request from ('127.0.0.1', 47882)
Traceback (most recent call last):
  File "/usr/lib/python2.7/SocketServer.py", line 295, in _handle_request_noblock
    self.process_request(request, client_address)
  File "/usr/lib/python2.7/SocketServer.py", line 321, in process_request
    self.finish_request(request, client_address)
  File "/usr/lib/python2.7/SocketServer.py", line 334, in finish_request
    self.RequestHandlerClass(request, client_address, self)
  File "/usr/lib/python2.7/SocketServer.py", line 651, in __init__
    self.finish()
  File "/usr/lib/python2.7/SocketServer.py", line 710, in finish
    self.wfile.close()
  File "/usr/lib/python2.7/socket.py", line 279, in close
    self.flush()
  File "/usr/lib/python2.7/socket.py", line 303, in flush
    self._sock.sendall(view[write_offset:write_offset+buffer_size])
error: [Errno 32] Broken pipe
----------------------------------------

socat output

$ sudo socat -vvv TCP4-LISTEN:80,fork TCP4:127.0.0.1:18080

> 2016/08/17 18:23:40.598025  length=171 from=0 to=170
GET /computeMetadata/v1/instance/service-accounts/default/token HTTP/1.1\r
User-Agent: curl/7.35.0\r
Host: metadata.google.internal\r
Accept: */*\r
Metadata-Flavor: Google\r
\r

> 2016/08/17 18:23:40.614052  length=197 from=0 to=196
GET /computeMetadata/v1/project/numeric-project-id HTTP/1.1\r
Accept-Encoding: identity\r
Host: metadata.google.internal\r
Metadata-Flavor: Google\r
Connection: close\r
User-Agent: Python-urllib/2.7\r
\r
< 2016/08/17 18:23:41.972111  length=17 from=0 to=16
HTTP/1.0 200 OK\r
< 2016/08/17 18:23:41.972352  length=53 from=17 to=69
Content-Type: application/json\r
Content-Length: 144\r
< 2016/08/17 18:23:41.972568  length=57 from=70 to=126
Server: Metadata Server for VM\r
Metadata-Flavor: Google\r
< 2016/08/17 18:23:41.972752  length=39 from=127 to=165
Date: Thu, 18 Aug 2016 01:23:41 GMT\r
\r
< 2016/08/17 18:23:41.972920  length=144 from=166 to=309
{
  "access_token": "<<REDACTED>>",
  "expires_in": 3600,
  "token_type": "Bearer"
}
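The first request in the trace (curl's token fetch) can be reproduced from Python as well. A minimal sketch, assuming the emulator is reachable on 127.0.0.1:18080; as on the real metadata server, the Metadata-Flavor: Google header is required:

```python
import urllib.request

req = urllib.request.Request(
    'http://127.0.0.1:18080/computeMetadata/v1/'
    'instance/service-accounts/default/token',
    headers={'Metadata-Flavor': 'Google'},
)
# urllib stores header names capitalized internally.
assert req.get_header('Metadata-flavor') == 'Google'
# token = json.load(urllib.request.urlopen(req))  # needs the emulator running
```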

Scopes of access_tokens are not quite the same as on GCE

When calling Secret Manager with an access_token, I receive:

Error fetching SLACK_SIGNING_SECRET: {
  "error": {
    "code": 403,
    "message": "Request had insufficient authentication scopes.",
    "status": "PERMISSION_DENIED"
  }
}

Whereas the same request in prod returns:

Error fetching SLACK_SIGNING_SECRET: {
  "error": {
    "code": 403,
    "message": "Permission 'secretmanager.versions.access' denied for resource 'projects/XXX/secrets/SlackSigningSecret/versions/latest' (or it may not exist).",
    "status": "PERMISSION_DENIED"
  }
}

So I guess there is something wrong with how access tokens are created from the service account credentials I supplied. One way to check is Google's tokeninfo endpoint (https://oauth2.googleapis.com/tokeninfo?access_token=...), whose scope field shows what the token was actually minted with.
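The two 403s above are actually different failures: the emulator's error means the token itself was minted with too-narrow scopes, while the prod error means the scopes were fine but the principal lacks the IAM role. A hypothetical helper (names are illustrative) to tell them apart from the response body:

```python
import json

def classify_403(error_body: str) -> str:
    """Distinguish a token-scope 403 from an IAM-permission 403."""
    err = json.loads(error_body).get('error', {})
    if err.get('code') != 403:
        return 'not-a-403'
    if 'insufficient authentication scopes' in err.get('message', ''):
        return 'token-scope'    # token minted with too-narrow scopes
    return 'iam-permission'     # scopes fine; principal lacks the role

emulator_err = json.dumps({'error': {
    'code': 403,
    'message': 'Request had insufficient authentication scopes.',
    'status': 'PERMISSION_DENIED'}})
prod_err = json.dumps({'error': {
    'code': 403,
    'message': "Permission 'secretmanager.versions.access' denied for resource ...",
    'status': 'PERMISSION_DENIED'}})

print(classify_403(emulator_err))  # token-scope
print(classify_403(prod_err))      # iam-permission
```

If the emulator's tokens come back as 'token-scope', the fix is to request broader scopes (e.g. cloud-platform) when exchanging the service account credentials, not to change IAM bindings.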
