A ChatGPT bot for Kubernetes issues. Ask ChatGPT how to solve your Prometheus alerts, get pithy responses.
No more solving alerts alone in the darkness - the internet has your back.
1. Prometheus forwards alerts to the bot using a webhook receiver.
2. The bot asks ChatGPT how to fix your alerts.
3. You stockpile food in your pantry for the robot uprising.
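The core of the flow above can be sketched in plain Python: take the alert payload that Alertmanager POSTs to the webhook and turn it into a question for ChatGPT. This is an illustrative sketch, not Robusta's actual API; the function name and payload fields follow Alertmanager's webhook format.

```python
# Sketch of the bot's core flow: turn an Alertmanager webhook payload
# into a question for ChatGPT. Names here are illustrative, not Robusta's API.

def build_question(alert: dict) -> str:
    """Build a ChatGPT prompt from a single Alertmanager alert."""
    name = alert.get("labels", {}).get("alertname", "Unknown")
    summary = alert.get("annotations", {}).get("summary", "")
    return f"How do I fix the Prometheus alert {name} in Kubernetes? {summary}".strip()

# Alertmanager delivers alerts in batches under the "alerts" key.
payload = {
    "alerts": [
        {
            "labels": {"alertname": "KubePodCrashLooping", "namespace": "default"},
            "annotations": {"summary": "Pod is crash looping."},
        }
    ]
}

questions = [build_question(a) for a in payload["alerts"]]
```

Each question would then be sent to the ChatGPT API and the answer attached to the alert notification.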
The bot is implemented using Robusta.dev, an open source platform for responding to Prometheus alerts and Kubernetes events.
- A Slack workspace (for Teams/Discord support, please open an issue)
- Install Robusta with Helm
- Load the ChatGPT playbook. Add the following to `generated_values.yaml`:

  ```yaml
  playbookRepos:
    chatgpt_robusta_actions:
      url: "https://github.com/robusta-dev/kubernetes-chatgpt-bot.git"

  customPlaybooks:
  # Add the 'Ask ChatGPT' button to all Prometheus alerts
  - triggers:
    - on_prometheus_alert: {}
    actions:
    - chat_gpt_enricher: {}
  ```
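If you only want the button on certain alerts, the trigger can be narrowed instead of matching everything. The sketch below assumes Robusta's `on_prometheus_alert` trigger accepts an `alert_name` filter; check the Robusta docs for the exact field names.

```yaml
customPlaybooks:
# Only add the 'Ask ChatGPT' button to crash-loop alerts (illustrative filter)
- triggers:
  - on_prometheus_alert:
      alert_name: KubePodCrashLooping
  actions:
  - chat_gpt_enricher: {}
```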
- Add your ChatGPT API key to `generated_values.yaml`. Make sure you edit the existing `globalConfig` section; don't add a duplicate section.

  ```yaml
  globalConfig:
    chat_gpt_token: YOUR KEY GOES HERE
  ```
- Do a Helm upgrade to apply the new values:

  ```shell
  helm upgrade robusta robusta/robusta --values=generated_values.yaml --set clusterName=<YOUR_CLUSTER_NAME>
  ```
- Send your Prometheus alerts to Robusta. Alternatively, just use Robusta's bundled Prometheus stack.
Instead of waiting around for a real Prometheus alert, let's simulate a fake one.
- Choose any running pod in your cluster.
- Use the Robusta CLI to trigger a fake alert on that pod:

  ```shell
  robusta playbooks trigger prometheus_alert alert_name=KubePodCrashLooping namespace=<namespace> pod_name=<pod-name>
  ```
If you installed Robusta with default settings, you can trigger the alert on Prometheus itself like so:

```shell
robusta playbooks trigger prometheus_alert alert_name=KubePodCrashLooping namespace=default pod_name=prometheus-robusta-kube-prometheus-st-prometheus-0
```
Can ChatGPT give better answers if you feed it pod logs or the output of `kubectl get events`?
Robusta already collects this data and attaches it to Prometheus alerts, so it should be easy to add. (But it should be disabled by default to avoid sending sensitive data to ChatGPT.)
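One hedged sketch of what that pre-processing might look like, in plain Python rather than Robusta's API: trim the logs to a bounded size and mask obvious secrets before appending them to the ChatGPT prompt. The function name and redaction pattern are illustrative.

```python
import re

# Illustrative helper: trim pod logs and mask obvious secrets before
# sending them to ChatGPT. Not Robusta's actual API; the pattern is a sketch.
MAX_LOG_CHARS = 2000
SECRET_PATTERN = re.compile(r"(?i)(token|password|secret)\s*[:=]\s*\S+")

def sanitize_logs(logs: str) -> str:
    redacted = SECRET_PATTERN.sub(r"\1=<redacted>", logs)
    return redacted[-MAX_LOG_CHARS:]  # keep only the most recent output

logs = "starting up\npassword=hunter2\ncrash: OOMKilled"
safe = sanitize_logs(logs)
```

A real implementation would need a more thorough redaction pass, which is one reason to keep this disabled by default.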
PRs are welcome! We can probably get some easy improvements just via prompt engineering.
Feel free to use the following image or create your own.
Natan Yellin and Sid Palas livestreamed about this on YouTube; the relevant part starts at 38:54.