Grow within your enemy, then burst out.
**Discouraged.** Google has since enabled Python as a proper function engine, and this project now serves only as insight into the inner workings of Google Cloud Functions.
- Docker
- Service account JSON with the `Google Cloud Functions Developer` role
- Pipenv
- A project with a `flask` blueprint and an entry point called `cloud.py`
cloud.py

```python
import flask
import cloud_functions

flask_app = flask.Flask(__name__)


@flask_app.route('/')
def hello():
    return "Hello cloud functions."


cloud_functions.register_http_trigger(flask_app)
```
Dockerfile

```dockerfile
FROM stormydragon/gcf-python
CMD ["--http", "--project=<my project name>", "--name=<trigger name>"]
```
```shell
pipenv install flask
docker build --tag my_cloud_function .
docker run --rm -it -v /path/to/service-account.json:/service-account.json:ro my_cloud_function
```
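Before building the container, the `cloud.py` blueprint can be exercised locally with Flask's test client, no deploy needed. This sketch recreates the app inline; in your own project you would import it from `cloud.py` instead (`cloud_functions` is only needed at deploy time):

```python
import flask

# Same app as in cloud.py above, minus the register_http_trigger call.
flask_app = flask.Flask(__name__)


@flask_app.route('/')
def hello():
    return "Hello cloud functions."


with flask_app.test_client() as client:
    response = client.get('/')
    print(response.status_code, response.get_data(as_text=True))
    # → 200 Hello cloud functions.
```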
That's good too; simply pick a resource and event type from the list below and use code similar to this. Examples of the context and data payloads can be found in the `invocation data.md` file.
cloud.py

```python
import cloud_functions
import logging

logger = logging.getLogger(__name__)


def handler(context, data):
    logger.info("context=%s data=%s", context, data)


cloud_functions.register_event_trigger(handler)
```
Dockerfile

```dockerfile
FROM stormydragon/gcf-python
CMD ["--event", "--project=<my project name>", "--name=<trigger name>", "--resource=<the resource>", "--event=<the event>"]
```
```shell
pipenv install flask
docker build --tag my_cloud_function .
docker run --rm -it -v /path/to/service-account.json:/service-account.json:ro my_cloud_function
```
The container packages your project during the build; running it with the specified arguments deploys the package to Google Cloud Functions.
## Replace the Google Cloud Functions node interpreter with arbitrary code

Instead of using node as a shim, simply replace node.js and become the master control program of the container.
The node worker script `google_cloud_worker/worker.js` was built into the container by the cloud function deployment machinery and is untouchable by our code until the `/load` endpoint is called, which happens on a "cold start" before the first connection is made. Our code usurps the node process and executes our own, handing it the connected `/load` socket and the listening file descriptor; only then do we send the answer to `/load` and accept new connections on the listener.
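The handover relies on file descriptors surviving `exec` when close-on-exec is cleared. A minimal sketch of the technique (names are illustrative, not the project's actual code):

```python
import os
import socket

# Create a listener and mark its fd inheritable so it survives exec
# (Python sets close-on-exec on new fds by default since 3.4).
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(8)
os.set_inheritable(listener.fileno(), True)

# The replacement process rebuilds a socket object from the raw fd it
# inherited; here we do it in-process just to show the call.
inherited = socket.fromfd(listener.fileno(), socket.AF_INET, socket.SOCK_STREAM)
print(inherited.getsockname() == listener.getsockname())
# → True
```

In the real flow the parent would `os.execv` the replacement binary after `set_inheritable`, passing the fd number along (for example via an environment variable).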
Google Cloud Functions consists of a web server that accepts three paths, as well as HTTP communication with a supervisor service.
- `X_GOOGLE_CODE_LOCATION`
- `X_GOOGLE_ENTRY_POINT`
- `X_GOOGLE_FUNCTION_TRIGGER_TYPE` - `HTTP_TRIGGER`, …
- `X_GOOGLE_FUNCTION_NAME`
- `X_GOOGLE_FUNCTION_TIMEOUT_SEC`
- `X_GOOGLE_WORKER_PORT` - Web server port
- `X_GOOGLE_SUPERVISOR_HOSTNAME`
- `X_GOOGLE_SUPERVISOR_INTERNAL_PORT`
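A Python worker would read these from the environment at startup; a sketch, where the fallback values are illustrative defaults for local testing only:

```python
import os

# Configuration handed to the worker by the GCF machinery.
# Defaults are assumptions for running outside the real container.
config = {
    "code_location": os.environ.get("X_GOOGLE_CODE_LOCATION", "/user_code"),
    "entry_point": os.environ.get("X_GOOGLE_ENTRY_POINT", "handler"),
    "trigger_type": os.environ.get("X_GOOGLE_FUNCTION_TRIGGER_TYPE", "HTTP_TRIGGER"),
    "function_name": os.environ.get("X_GOOGLE_FUNCTION_NAME", "local"),
    "timeout_sec": int(os.environ.get("X_GOOGLE_FUNCTION_TIMEOUT_SEC", "60")),
    "worker_port": int(os.environ.get("X_GOOGLE_WORKER_PORT", "8091")),
    "supervisor_hostname": os.environ.get("X_GOOGLE_SUPERVISOR_HOSTNAME", "localhost"),
    "supervisor_port": os.environ.get("X_GOOGLE_SUPERVISOR_INTERNAL_PORT", "8092"),
}
print(config["trigger_type"])
```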
```
MAX_LOG_LENGTH = 5000
MAX_LOG_BATCH_ENTRIES = 1500
MAX_LOG_BATCH_LENGTH = 150000
SUPERVISOR_KILL_TIMEOUT_MS = 500
SUPERVISOR_LOG_TIMEOUT_MS
```
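These limits imply the worker must both truncate oversized entries and split log submissions into batches. A sketch of that chunking, assuming the constants above:

```python
MAX_LOG_LENGTH = 5000
MAX_LOG_BATCH_ENTRIES = 1500
MAX_LOG_BATCH_LENGTH = 150000


def batch_entries(entries):
    """Yield batches respecting both the entry-count and total-length caps."""
    batch, batch_len = [], 0
    for text in entries:
        text = text[:MAX_LOG_LENGTH]  # truncate oversized single entries
        if batch and (len(batch) >= MAX_LOG_BATCH_ENTRIES
                      or batch_len + len(text) > MAX_LOG_BATCH_LENGTH):
            yield batch
            batch, batch_len = [], 0
        batch.append(text)
        batch_len += len(text)
    if batch:
        yield batch


# 40 oversized entries truncate to 5000 chars each, so 30 fit per batch.
batches = list(batch_entries(["x" * 6000] * 40))
print(len(batches), len(batches[0]))
# → 2 30
```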
- `/load` - Not used after node.
- `/check` - Heartbeat, must return 200 OK.
- `/execute` - POST - For execution of all non-HTTP functions.
- `/execute/*` - For HTTP; will accept arbitrary paths.
`X-Google-Status`: one of `crash`, `load_error`, `error` to indicate why the function died.
The supervisor accepts logs and a kill command from the worker:
- `_ah/kill` - Notify of our need to die.
- `_ah/log` - Accepts batched log entries:
```json
{
    "Entries": [
        {
            "TextPayLoad": "...",
            "Severity": "INFO",
            "Time": "2018-01-01T00:00:00.000Z"
        }
    ]
}
```
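Building that payload from Python is straightforward; a sketch, with field names copied from the schema above (including the `TextPayLoad` capitalization):

```python
import json
from datetime import datetime, timezone


def make_log_payload(messages, severity="INFO"):
    """Build an _ah/log body in the shape shown above."""
    # Millisecond-precision UTC timestamp, e.g. 2018-01-01T00:00:00.000Z
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%f")[:-3] + "Z"
    return json.dumps({
        "Entries": [
            {"TextPayLoad": text, "Severity": severity, "Time": now}
            for text in messages
        ]
    })


payload = make_log_payload(["hello from python"])
print(payload)
```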
Events from Firebase don't wrap the context in a separate field.
`projects/<project name>`

- `providers/firebase.auth/eventTypes/user.create`
- `providers/firebase.auth/eventTypes/user.delete`

`projects/<project name>/databases/(default)/documents/<collection path>/{collectionId}`

- `providers/cloud.firestore/eventTypes/document.create`
- `providers/cloud.firestore/eventTypes/document.update`
- `providers/cloud.firestore/eventTypes/document.delete`
- `providers/cloud.firestore/eventTypes/document.write`

`projects/_/instances/<instance|project name>/refs/<path>/{key}`

- `providers/google.firebase.database/eventTypes/ref.create`
- `providers/google.firebase.database/eventTypes/ref.write`
- `providers/google.firebase.database/eventTypes/ref.update`
- `providers/google.firebase.database/eventTypes/ref.delete`

`projects/<project name>/topics/<topic name>`

- `google.pubsub.topic.publish`

`projects/<project name>/buckets/<bucket name>`

- `google.storage.object.finalize`
- `google.storage.object.metadata_update`
- `google.storage.object.delete`
- `google.storage.object.archive`
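An event handler can dispatch on these event types at runtime. A hedged sketch, where the `eventType` and `data` field names are assumptions; the exact context and data shapes are documented in the `invocation data.md` file (Pub/Sub message payloads arrive base64-encoded):

```python
import base64


def handler(context, data):
    event_type = context.get("eventType", "")  # assumed field name
    if event_type == "google.pubsub.topic.publish":
        # Pub/Sub delivers the message body base64-encoded in data["data"].
        message = base64.b64decode(data.get("data", "")).decode("utf-8")
        return "pubsub: " + message
    if event_type.startswith("google.storage.object."):
        return "storage: %s" % data.get("name")
    return "unhandled: " + event_type


print(handler({"eventType": "google.pubsub.topic.publish"},
              {"data": base64.b64encode(b"hi").decode()}))
# → pubsub: hi
```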