Architectural variation of hwx-pricing-engine using a grid of compute workers running on K8s.
It comprises the following two components:
- `compute-engine`
- `compute-manager`
A simple client-server application written in Go. `eodservice` simulates the client that submits valuation requests to a grid of remote compute engine (server) instances - `valengine`. The `valengine` responds to requests by performing static pricing compute with the QuantLib library through a Java wrapper library - `computelib`.
The application uses the Kite micro-services framework for RPC. `eodservice` breaks job submissions into batches with a max size of 100 (pricing requests) and `computelib` prices them in parallel.
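The batch-then-price-in-parallel flow can be sketched in Go. This is a minimal illustration, assuming a helper named `splitIntoBatches` and a stand-in for the actual QuantLib pricing call — neither is the project's real code:

```go
package main

import (
	"fmt"
	"sync"
)

// maxBatchSize mirrors the 100-request cap described above (the constant name is illustrative).
const maxBatchSize = 100

// splitIntoBatches chunks pricing requests into batches of at most maxBatchSize each.
func splitIntoBatches(requests []string) [][]string {
	var batches [][]string
	for len(requests) > 0 {
		n := maxBatchSize
		if len(requests) < n {
			n = len(requests)
		}
		batches = append(batches, requests[:n])
		requests = requests[n:]
	}
	return batches
}

func main() {
	// 250 mock pricing requests -> batches of 100, 100 and 50.
	reqs := make([]string, 250)
	for i := range reqs {
		reqs[i] = fmt.Sprintf("req-%d", i)
	}
	batches := splitIntoBatches(reqs)

	// Price each batch concurrently, analogous to the parallel pricing in computelib.
	var wg sync.WaitGroup
	for _, b := range batches {
		wg.Add(1)
		go func(batch []string) {
			defer wg.Done()
			_ = len(batch) // stand-in for the actual pricing call
		}(b)
	}
	wg.Wait()
	fmt.Println(len(batches)) // 3
}
```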
The `compute-engine`'s Docker image builds on top of a pre-baked image of `computelib`.
```dockerfile
FROM amolthacker/mockcompute-base
LABEL maintainer="[email protected]"
# Paths
ENV COMPUTE_BASE /hwx-pe/compute/
# Copy lib and scripts
RUN mkdir -p $COMPUTE_BASE
ADD . $COMPUTE_BASE
RUN chmod +x $COMPUTE_BASE/compute.sh
RUN cp $COMPUTE_BASE/compute.sh /usr/local/bin/.
RUN cp $COMPUTE_BASE/mockvalengine-0.1.0.jar /usr/local/lib/.
# Start Engine
ENTRYPOINT go run $COMPUTE_BASE/valengine.go
# Ports : 6000 (RPC) | 8000 (HTTP-Health)
EXPOSE 6000 8000
```
A web based management user interface for the `compute-engine` that facilitates the following operational tasks:
- Submission of valuation jobs to the compute engine
- Scaling of compute
- Aggregated compute engine log stream
The backend is written in Go and the frontend in AngularJS.
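The compute-scaling task boils down to picking a replica count for the engine deployment. A minimal sketch of such a scale-out/scale-in decision is below; the `desiredReplicas` helper and its thresholds are illustrative assumptions, not the project's actual scaling policy:

```go
package main

import "fmt"

// desiredReplicas returns a replica count for the compute-engine deployment
// given the number of pending pricing batches. Thresholds are illustrative.
func desiredReplicas(pendingBatches, current, min, max int) int {
	switch {
	case pendingBatches > current*2 && current < max:
		return current + 1 // backlog is building: scale out
	case pendingBatches == 0 && current > min:
		return current - 1 // idle: scale back in
	default:
		return current // steady state
	}
}

func main() {
	fmt.Println(desiredReplicas(10, 2, 1, 5)) // heavy load -> 3
	fmt.Println(desiredReplicas(0, 3, 1, 5))  // idle -> 2
}
```

In the real application the chosen count would be applied to the K8s Deployment through the Go client for K8s listed below.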
The demo below will:
- First go through deployment specifics
- Submit a bunch of compute jobs
- Watch the pods auto-scale out
- See the subsequent job submissions get balanced across the scaled-out compute engine grid
- See the compute engine grid scale back in after a period of reduced activity
Demo - https://youtu.be/OFkEZKlHIbg
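The balancing behavior shown in the demo can be illustrated with a round-robin sketch over engine endpoints; the endpoint addresses and the `roundRobin` helper are hypothetical, not the project's actual dispatch code:

```go
package main

import "fmt"

// roundRobin returns a function that cycles through the given engine endpoints,
// mimicking how job submissions spread across a scaled-out grid.
func roundRobin(endpoints []string) func() string {
	i := 0
	return func() string {
		ep := endpoints[i%len(endpoints)]
		i++
		return ep
	}
}

func main() {
	next := roundRobin([]string{"engine-0:6000", "engine-1:6000", "engine-2:6000"})
	for j := 0; j < 4; j++ {
		fmt.Println(next()) // engine-0, engine-1, engine-2, then back to engine-0
	}
}
```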
The application uses the following Go libraries:
- Kite micro-service framework
- Go client for K8s
- Configuration definition support
- Start minikube
$ minikube start
- Provision infrastructure
$ cd acs
$ az group deployment create -n hwx-pe-k8s-grid-create -g k8s-pe-grid --template-file az-deploy.json --parameters @az-deploy.parameters.json
- Point kubectl to this K8s cluster
$ az acs kubernetes get-credentials --ssh-key-file ~/.ssh/az --resource-group=k8s-pe-grid --name=containerservice-k8s-pe-grid
- Start the proxy
$ nohup kubectl proxy 2>&1 < /dev/null &
- Deploy the K8s resources
$ kubectl apply -f k8s/
- Deploy compute-manager
Clone the repo and add the utils under src/compute-manager/scripts/linux to PATH
- Update the config/env.toml file to use the setup environment properties for `compute-engine` and Kubernetes
- Run the web server
$ cd ${path-to-compute-manager}
$ go run ComputeManager.go
- Start the SSH tunnel
Update the SSH config as under scripts/ssh-config
$ ./scripts/hwxadmin-tunnel.sh start
- Access the compute-manager Web UI at http://localhost:8090/login
- Stop the SSH tunnel
$ ./scripts/hwxadmin-tunnel.sh stop
- Delete the K8s resources
$ kubectl delete -f k8s/
- Stop minikube
$ minikube stop
- Tear down the Azure infrastructure
$ cd acs
$ az group deployment create -n hwx-pe-k8s-grid-create -g k8s-pe-grid --template-file az-teardown.json