
hasura-k8s-stack

A feature-complete Hasura stack on Kubernetes.

Components

  • Postgres (For production use cases, it is recommended to have a managed/highly available Postgres instance)
  • Hasura GraphQL Engine
  • Nginx for Ingress
  • Cert Manager for auto SSL with Let's Encrypt
  • Remote Schema with Express.js and GraphQL.js
  • Event Triggers with Express.js

Architecture

[Architecture diagram]

Setting up

This guide assumes that you are well versed with Kubernetes and that you have a cluster with enough spare resources ready for use.

Postgres

Postgres is the primary datastore and is created as a Kubernetes Deployment backed by a Persistent Volume. This setup is only intended for development and should not be used in production. If your cluster does not have first-class storage support, run Postgres outside the Kubernetes cluster.

A Kubernetes Service object is created to direct traffic to the Postgres pod in this Deployment.

Kubernetes Secrets are used to store the Postgres username, password, etc. Actual secret files should never be committed to the repository.
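
For reference, a minimal sketch of what secret.yaml might look like; the Secret name and key names here are assumptions, so check the actual file in the repo:

apiVersion: v1
kind: Secret
metadata:
  name: postgres-secret   # assumed name; match whatever deployment-service.yaml references
type: Opaque
stringData:
  username: postgres
  password: changeme
  dbname: postgres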

Installation

cd postgres

# copy the secret.yaml file
cp secret.yaml secret.prod.yaml

# edit secret.prod.yaml and change username, password, dbname
vim secret.prod.yaml

# create the secret
kubectl apply -f secret.prod.yaml

# create the PVC
kubectl apply -f pvc.yaml

# create deployment and service
kubectl apply -f deployment-service.yaml

Once these components are created successfully, Postgres will be available at postgres://postgres:5432 inside the Kubernetes cluster, in the default namespace.
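
To verify connectivity from inside the cluster, you can run a throwaway psql client pod; the image tag and credential placeholders below are illustrative:

# run psql in a temporary pod (the postgres image entrypoint execs psql directly)
kubectl run psql-test --rm -it --restart=Never --image=postgres:11 -- \
  psql "postgres://<username>:<password>@postgres:5432/<dbname>"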

Hasura GraphQL Engine

Hasura GraphQL Engine is deployed as a Kubernetes Deployment along with a Service object to load balance traffic to multiple pods. The default deployment launches one instance of GraphQL Engine connected to the Postgres DB provisioned earlier.

Installation

cd hasura

# copy secret.yaml
cp secret.yaml secret.prod.yaml

# edit secret.prod.yaml and add an admin secret (access key) and db url
vim secret.prod.yaml

# create the secret
kubectl apply -f secret.prod.yaml

# create the deployment and service
kubectl apply -f deployment-service.yaml

Hasura should be available at http://hasura:80 inside the cluster. This service can be exposed publicly with an ingress rule, which we'll cover in the ingress section.
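
To confirm the deployment is healthy, you can hit GraphQL Engine's /healthz endpoint from inside the cluster; the curl image here is just an example:

# run curl in a temporary pod against the in-cluster service
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl --command -- \
  curl http://hasura/healthz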

Scaling

Hasura can be horizontally scaled without any side-effects. Just increase the number of replicas for the Kubernetes deployment. Make sure that there is enough CPU/RAM available for the new replicas.

kubectl scale deployment/hasura --replicas 3
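
If your cluster runs metrics-server and the Hasura deployment has CPU requests set, scaling can also be automated with a HorizontalPodAutoscaler, e.g.:

# assumes metrics-server is installed and CPU requests are set on the deployment
kubectl autoscale deployment/hasura --min=2 --max=5 --cpu-percent=80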

Migrations

Hasura can keep track of database and metadata changes and store them as declarative files so that they can be version controlled. It is a flexible system that lets you write migrations by hand, or it can auto-generate them when you use the console.

To use migrations, install the Hasura CLI; instructions are in the docs.

Once the CLI is installed, use it to open the console:

cd hasura

# open console
hasura console --endpoint <hasura-endpoint> --access-key <hasura-access-key>

As you make changes through the console, the CLI writes migration files (YAML) to the migrations directory.
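
Hand-written migrations use the same layout: a timestamped directory with up/down steps. A minimal sketch for CLI v1-style migrations, where the directory name and SQL are illustrative:

# layout
migrations/1550000000000_create_users/up.yaml
migrations/1550000000000_create_users/down.yaml

# up.yaml: a list of metadata-API actions, e.g. run_sql
- type: run_sql
  args:
    sql: CREATE TABLE users (id serial PRIMARY KEY, name text);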

Read more about migrations.

The same migrations can then be applied on another Hasura instance:

cd hasura

# apply migrations on another instance
hasura migrate apply --endpoint <another-hasura-endpoint> --access-key <access-key>

Until PR#1574 is merged, it is recommended to scale down to one replica, apply the migrations, and then scale back up.
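
For example:

# scale down to a single replica
kubectl scale deployment/hasura --replicas 1

# apply migrations
hasura migrate apply --endpoint <hasura-endpoint> --access-key <hasura-access-key>

# scale back up
kubectl scale deployment/hasura --replicas 3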

Nginx Ingress

The Nginx Ingress Controller lets us define ingress rules and expose services running in the cluster on an external domain. Behind the scenes, it is an Nginx container that can be configured using Ingress objects to add specific routing rules. It can also do SSL termination, which we will be using along with cert-manager.

Installation

cd nginx-ingress

# create namespace, configmaps and deployment
kubectl apply -f mandatory.yaml

# create the loadbalancer
kubectl apply -f cloud-generic.yaml

Ingress resource for Hasura

Now that the Ingress controller is created, we can create an Ingress object to route external traffic to our Hasura container.

Before that, we need to configure a domain and add the load balancer's IP address to the domain's DNS records.

# get load balancer ip
kubectl -n ingress-nginx get service

NAME            TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                      AGE
ingress-nginx   LoadBalancer   10.0.162.204   52.172.9.111   80:31186/TCP,443:30609/TCP   30h

# copy the EXTERNAL-IP

Once you have the EXTERNAL-IP from the output above, add an A record for your domain from the DNS dashboard.

We'll use the same domain in our ingress configuration.

You can check the status by verifying that an address has been assigned to the Ingress. Once it is, visiting the domain should load the Hasura console.

Cert Manager

Cert Manager is a Kubernetes add-on to automate the management and issuance of TLS certificates from various issuing sources. We'll use it to provision certificates automatically from Let's Encrypt.

This step is optional if you have already purchased certificates from another vendor; those can be configured directly on the Ingress controller.

Installation

cd cert-manager

# create the namespace
kubectl apply -f namespace.yaml

# create crds
kubectl apply -f 00-crds.yaml

# create the cert manager resources
kubectl apply -f cert-manager.yaml

# create letsencrypt staging and prod issuers
kubectl apply -f le-staging-issuer.yaml
kubectl apply -f le-prod-issuer.yaml

Once cert-manager is running, it will contact the Let's Encrypt staging server and issue a fake certificate. This ensures that any misconfiguration does not hit rate limits on the production server.
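
For reference, a sketch of what le-staging-issuer.yaml might look like, assuming the certmanager.k8s.io/v1alpha1 API this stack uses; the kind (Issuer vs. ClusterIssuer) and email are placeholders, so check the actual file:

apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # Let's Encrypt staging endpoint; le-prod-issuer.yaml would use the prod URL
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: you@example.com
    privateKeySecretRef:
      name: letsencrypt-staging
    http01: {}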

For this to happen, let's create the Ingress resource.

cd hasura

# edit ingress.yaml and replace k8s-stack.hasura.app with your domain
vim ingress.yaml

# create the ingress resource
kubectl apply -f ingress.yaml
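
For reference, a sketch of what ingress.yaml might contain; the TLS secret name and exact annotations are assumptions based on this guide, so check the actual file:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hasura
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/issuer: letsencrypt-staging
spec:
  rules:
  - host: k8s-stack.hasura.app   # replace with your domain
    http:
      paths:
      - path: /
        backend:
          serviceName: hasura
          servicePort: 80
  tls:
  - hosts:
    - k8s-stack.hasura.app       # replace with your domain
    secretName: hasura-tls       # assumed name for the issued certificate's secret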

Depending on the load balancer and the networking plugin, it may take a couple of minutes for the configuration to become active.

# check the status of ingress
kubectl get ingress

NAME     HOSTS                  ADDRESS        PORTS     AGE
hasura   k8s-stack.hasura.app   52.172.9.111   80, 443   30h

Once the Ingress resource is created, cert-manager should be triggered and it will start the certificate issuance process. You can check the status using the following command:

kubectl get certificate

Check whether you're getting an SSL certificate for the domain (it will show as invalid since it is issued by a fake CA). If everything is alright, edit the Ingress resource to use the prod issuer.

# open the ingress object in an editor
kubectl edit ingress hasura

# replace letsencrypt-staging with letsencrypt-prod
#    certmanager.k8s.io/issuer: letsencrypt-prod

# save and exit

# delete the certificate that was already issued to trigger a new issuance
# from the prod server
kubectl delete certificate <cert-name-from-get-command-above>

The domain should have a proper SSL certificate once the issuance is completed.

Event Triggers

Event triggers can be used to invoke webhooks on database events like insert, update, and delete. This is typically useful for executing asynchronous business logic, such as sending emails or updating a search index. In this stack we use a Node.js microservice written with Express.js to expose our webhooks. Many community-contributed boilerplates, including serverless functions, are also available.

Installation

cd event-triggers

# create kubernetes deployment and service
kubectl apply -f k8s.yaml

Once the container has started, the triggers will be available at http://event-triggers from within the cluster. An echo trigger is already set up at http://event-triggers/echo.
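
You can test the echo endpoint from inside the cluster with a payload shaped like Hasura's event trigger payload; this is a trimmed, illustrative example:

# POST a minimal event-shaped payload to the echo trigger
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl --command -- \
  curl -X POST http://event-triggers/echo \
    -H 'Content-Type: application/json' \
    -d '{"event": {"op": "INSERT", "data": {"old": null, "new": {"id": 1}}}}'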

Remote Schema

For custom business logic that is synchronous in nature, you can write a dedicated GraphQL server in any preferred language and expose its queries and mutations through Hasura's GraphQL API using the Remote Schema feature. The stack here includes a GraphQL server written in Node.js using Express.js and GraphQL.js, with a sample Hello World schema.

Installation

cd remote-schema

# create kubernetes deployment and service
kubectl apply -f k8s.yaml

The GraphQL server should be available at http://remote-schema/graphql from within the cluster.
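
Once it is running, you can register it with Hasura from the console's Remote Schemas tab, or via the metadata API; a sketch using the /v1/query endpoint, where the schema name is illustrative (newer Hasura versions use the X-Hasura-Admin-Secret header instead of X-Hasura-Access-Key):

# register the in-cluster GraphQL server as a remote schema
curl -X POST http://<hasura-endpoint>/v1/query \
  -H 'Content-Type: application/json' \
  -H 'X-Hasura-Access-Key: <hasura-access-key>' \
  -d '{"type": "add_remote_schema", "args": {"name": "hello-schema", "definition": {"url": "http://remote-schema/graphql"}}}'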

TODO

  • Using tools like Kustomize to make deploying easier.
  • Setting up CI/CD scripts for migrations and environment promotion.
  • Docs for auth integration.

A big shoutout to the folks at appsintegra for collaborating with Hasura on this repo.


Issues

having trouble with cert-manager

I was following this tutorial: https://www.youtube.com/watch?v=APnXlMiKBWg&list=WL&index=30&t=0s
Everything works until the cert step. The presenter mentioned that the order of execution matters (apply the certs first, THEN the ingress), which I did. I keep getting errors though (pasted below); I tried googling around, but I am not familiar enough with Kubernetes to navigate the issue.

Please advise :)

cert-manager % kubectl describe clusterissuer,certificate,order,challenge
Name:         letsencrypt-prod
Namespace:
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"certmanager.k8s.io/v1alpha1","kind":"ClusterIssuer","metadata":{"annotations":{},"name":"letsencrypt-prod"},"spec":{"acme":...
API Version:  certmanager.k8s.io/v1alpha1
Kind:         ClusterIssuer
Metadata:
  Creation Timestamp:  2019-11-30T09:27:11Z
  Generation:          2
  Resource Version:    86328
  Self Link:           /apis/certmanager.k8s.io/v1alpha1/clusterissuers/letsencrypt-prod
  UID:                 963baf6b-1353-11ea-94ad-42010a8a0173
Spec:
  Acme:
    Email:  [email protected]
    http01:
    Private Key Secret Ref:
      Key:
      Name:  letsencrypt-prod
    Server:  https://acme-v02.api.letsencrypt.org/directory
Status:
  Acme:
    Uri:
  Conditions:
    Last Transition Time:  2019-11-30T09:27:11Z
    Message:               Failed to verify ACME account: acme: urn:ietf:params:acme:error:rateLimited: Your ACME client is too old. Please upgrade to a newer version.
    Reason:                ErrRegisterACMEAccount
    Status:                False
    Type:                  Ready
Events:
  Type     Reason                Age                From          Message
  ----     ------                ----               ----          -------
  Warning  ErrVerifyACMEAccount  15m (x3 over 15m)  cert-manager  Failed to verify ACME account: acme: urn:ietf:params:acme:error:rateLimited: Your ACME client is too old. Please upgrade to a newer version.

Question about DNS, NodePort and remote schema

Hi guys, would it be theoretically possible to connect a Hasura instance to a remote schema service without exposing it or knowing its IP address? Via remoteschema.svc.cluster.local, for example. The main purpose would be to add the migration step to the manifests.

Rollback mechanism

How can we couple Hasura's rollback mechanism with the Kubernetes one?

Example: I use the cli-image to have automated migrations. I put the SQL migration files and metadata YAML into ConfigMaps.

Now what if a migration needs to be rolled back? The documentation talks about down.sql files, but to perform the rollback we have to use the hasura-cli tool.

So the Kubernetes rollback won't solve the problem here.

Any suggestions?
