Repo with assets to reproduce the talk
- crc setup
- crc config set enable-cluster-monitoring true
crc config view should then show:
- consent-telemetry : no
- cpus : 7
- enable-cluster-monitoring : true
- memory : 19000
- crc start --cpus 7
wget https://mirror.openshift.com/pub/openshift-v4/clients/serverless/latest/kn-linux-amd64.tar.gz -O my-kn.tar.gz
tar -xf my-kn.tar.gz
sudo mv kn-linux-amd64 /usr/local/bin/kn
Check:
kn version
-
Install the Serverless, Power Monitoring and OpenTelemetry operators by running:
kubectl apply -f yamls/operators.yaml
-
Install user workload monitoring (Prometheus), the Knative Serving instance and Kepler by running:
kubectl apply -f yamls/instances.yaml
Check:
oc get knativeserving.operator.knative.dev/knative-serving -n knative-serving --template='{{range .status.conditions}}{{printf "%s=%s\n" .type .status}}{{end}}'
-
Deploy Grafana
- Run deploy-grafana.sh
-
Install Istio with istioctl:
curl -L https://istio.io/downloadIstio | sh -
kubectl create ns istio-system
istioctl install --set profile=openshift --set values.global.proxy.holdApplicationUntilProxyStarts=true
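For reference, the same install can be expressed declaratively as an IstioOperator manifest (a sketch, not part of the repo; istioctl install -f would consume it):

```yaml
# Declarative equivalent of the istioctl flags above (sketch).
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: openshift
  values:
    global:
      proxy:
        # Delay application containers until the sidecar proxy is ready,
        # avoiding startup races.
        holdApplicationUntilProxyStarts: true
```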
-
Create the namespaces and enable Istio sidecar auto-injection on serverfull-ns
kubectl create ns serverless-ns
kubectl create ns serverfull-ns
kubectl label namespace serverfull-ns istio-injection=enabled --overwrite
kubectl get namespace -L istio-injection
-
Install Hermes
helm repo add hermes-charts https://jgomezselles.github.io/hermes-charts
helm repo update
- Make sure to check for and delete any previous runs/namespaces:
kubectl get ns serverfull-ns serverless-ns
- Install serverfull instance on its own namespace:
helm install serverfull -n serverfull-ns charts/kn-hermes/ -f charts/kn-hermes/serverfull_values.yaml
- Install serverless instance on its own namespace:
helm install serverless -n serverless-ns charts/kn-hermes/ -f charts/kn-hermes/serverless_values.yaml --set global.hermes.endpoint="serverless-mock.serverless-ns.svc.cluster.local"
- Cleanup:
helm delete -n serverfull-ns serverfull
helm delete -n serverless-ns serverless
istioctl uninstall --purge
--> WARNING! This is cluster scoped!
kubectl delete ns istio-system serverfull-ns serverless-ns
- Delete Grafana
oc delete -f yamls/instances.yaml
oc delete -f yamls/operators.yaml
Load test with h2load (-n total requests, -c clients, -m max concurrent streams per connection):
h2load http://serverless-mock.hermes.svc.cluster.local:80/url/example/path/hello -m1 -n9000 -c3
Example Prometheus queries:
- Hermes request rate:
avg(rate(hermes_requests_sent_total{id="post1"}[1m]))
- Kepler energy consumed per namespace (Joules converted to kWh):
kepler:kepler:container_joules_total:consumed:24h:by_ns{container_namespace=~"$namespace"} * $watt_per_second_to_kWh
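The $watt_per_second_to_kWh variable in the Kepler query above is presumably the Joules-to-kWh conversion factor (an assumption from the metric's unit). Since 1 kWh = 1000 W x 3600 s = 3.6e6 J, a quick sanity check:

```shell
# 1 kWh = 1000 W * 3600 s = 3.6e6 J, so the J -> kWh factor is 1 / 3.6e6
awk 'BEGIN { printf "%.10f\n", 1 / (3600 * 1000) }'
# prints 0.0000002778
```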
Build the server-mock image:
docker build -f server-mock/docker/Dockerfile . -t ghcr.io/jgomezselles/kubecon24/server-mock:0.0.1 --progress plain --no-cache
TODO:
- Find the best way to deploy the mock with a LoadBalancer over HTTP/2
- Create dashboard
- See if we can save prometheus metrics to just reuse them
- Change commands to be kubectl
- Change Kepler to community version
- Record the demo
- Serverless namespaces/components to account for:
- openshift-serverless
- knative-serving
- knative-serving-ingress
- operator
- Everything in the data path
- Gateway and pods
- Keep in mind that using local svc should be fine!
- https://knative.dev/docs/serving/autoscaling/autoscale-go/
- https://knative.dev/docs/serving/load-balancing/target-burst-capacity/
- With this you can enforce that the Activator is always in the path (by setting the value to "-1"). That way it selects different backends even with only one connection.
- Again, in the output of oc get sks you can see whether the Activator is in the path (mode: Proxy) or not (mode: Serve)
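A sketch of how that annotation could look on a Knative Service (the service name is a placeholder; the image is the repo's server-mock from the build step below):

```yaml
# Hypothetical Knative Service sketch: "serverless-mock" is a placeholder name.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: serverless-mock
  namespace: serverless-ns
spec:
  template:
    metadata:
      annotations:
        # "-1" means unlimited target burst capacity: the Activator stays in
        # the data path, so requests are load-balanced across backends.
        autoscaling.knative.dev/target-burst-capacity: "-1"
    spec:
      containers:
        - image: ghcr.io/jgomezselles/kubecon24/server-mock:0.0.1
```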
- Export data https://docs.openshift.com/container-platform/4.15/monitoring/configuring-the-monitoring-stack.html#configuring_remote_write_storage_configuring-the-monitoring-stack
- Serverless official doc
- Power monitoring docs instructions
- Alternative Helm-based Istio install, following: https://istio.io/latest/docs/setup/install/helm/
helm repo add istio https://istio-release.storage.googleapis.com/charts
kubectl create namespace istio-system
helm install istio-base istio/base -n istio-system --set defaultRevision=default
- Check:
helm ls -n istio-system
and verify STATUS is deployed
helm install istiod istio/istiod -n istio-system --wait
- Again, check:
helm ls -n istio-system
and verify STATUS is deployed