- A running Kubernetes cluster
- Helm (v3)
-
Create a dedicated namespace for Prometheus
kubectl create namespace monitoring
-
Add the Prometheus community Helm chart repository
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
-
Update the Helm chart repositories and confirm the repo is listed
helm repo update
helm repo list
-
Install the kube-prometheus-stack chart with the release name "prometheus"
helm install prometheus prometheus-community/kube-prometheus-stack --namespace monitoring
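To verify the installation, list the pods in the monitoring namespace; all of them should reach the Running state:
kubectl get pods -n monitoring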
-
The chart above creates all services with type ClusterIP. To access Prometheus from outside the cluster, change the service type to LoadBalancer
kubectl edit svc prometheus-kube-prometheus-prometheus -n monitoring
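Alternatively, a non-interactive equivalent (assuming the same service name) is to patch the service directly and then read the external address assigned by your cloud provider:
kubectl patch svc prometheus-kube-prometheus-prometheus -n monitoring -p '{"spec": {"type": "LoadBalancer"}}'
kubectl get svc prometheus-kube-prometheus-prometheus -n monitoring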
-
Log in to the Prometheus dashboard at http://<EXTERNAL-IP>:9090 (the external IP assigned to the LoadBalancer service) to monitor the cluster
-
Run the node_load15 query in the Prometheus expression browser to verify that cluster monitoring is working
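As a sanity check, the same metric can also be queried through the Prometheus HTTP API (replace <EXTERNAL-IP> with your LoadBalancer address):
curl 'http://<EXTERNAL-IP>:9090/api/v1/query?query=node_load15'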
-
We can view similar graphs in the Grafana dashboard itself. For that, change the service type of the Grafana service to LoadBalancer as well
kubectl edit svc prometheus-grafana -n monitoring
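The same patch-based shortcut works here; afterwards, look up the external address of the Grafana service:
kubectl patch svc prometheus-grafana -n monitoring -p '{"spec": {"type": "LoadBalancer"}}'
kubectl get svc prometheus-grafana -n monitoring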
-
To log in to Grafana, use the chart's default credentials
username: admin
password: prom-operator
-
In Grafana, check the "Node Exporter / USE Method / Node" and "Node Exporter / USE Method / Cluster" dashboards (USE stands for Utilization, Saturation, Errors)
-
You can also drill down into the behavior of each pod, node, and the cluster as a whole
-
Create a Slack account
-
Create a Slack channel called "demo-alerts"
-
Install the Slack app called "Incoming Webhooks". Once installed, it provides a webhook URL; keep this URL for the next steps
-
Test the webhook by posting a message to the channel. Replace <Webhook_URL> with your own URL
curl -X POST --data-urlencode "payload={\"channel\": \"#demo-alerts\", \"username\": \"webhookbot\", \"text\": \"This is posted to #demo-alerts and comes from a bot named webhookbot.\", \"icon_emoji\": \":ghost:\"}" <Webhook_URL>
Example:
curl -X POST --data-urlencode "payload={\"channel\": \"#demo-alerts\", \"username\": \"webhookbot\", \"text\": \"This is posted to #demo-alerts and comes from a bot named webhookbot.\", \"icon_emoji\": \":ghost:\"}" https://hooks.slack.com/services/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
-
Now find the secret that holds the Alertmanager configuration and delete it. With the release name "prometheus" used above, the secret is called "alertmanager-prometheus-kube-prometheus-alertmanager"
kubectl get secret -n monitoring
kubectl delete secret -n monitoring alertmanager-prometheus-kube-prometheus-alertmanager
-
Now create a configuration file named 'alertmanager.yaml' (a sketch is shown below)
Note: make sure you replace the Slack webhook URL (the api_url field) with your own Incoming Webhooks URL
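A minimal sketch of such a file, assuming a single route that forwards every alert to the Slack channel created earlier (the grouping intervals are illustrative defaults):
global:
  resolve_timeout: 5m
route:
  receiver: 'slack-notifications'
  group_by: ['alertname']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 12h
receivers:
  - name: 'slack-notifications'
    slack_configs:
      # Replace api_url with your Incoming Webhooks URL
      - api_url: 'https://hooks.slack.com/services/XXXXX/XXXXX/XXXXX'
        channel: '#demo-alerts'
        send_resolved: true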
-
Create a secret from the updated alertmanager.yaml file, reusing the name of the secret deleted above
kubectl create secret generic alertmanager-prometheus-kube-prometheus-alertmanager --from-file=alertmanager.yaml -n monitoring
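To confirm Alertmanager picked up the new configuration, check its logs; the pod name below assumes the operator's usual alertmanager-<name>-0 naming:
kubectl logs -n monitoring alertmanager-prometheus-kube-prometheus-alertmanager-0 -c alertmanager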
-
Look up the Grafana admin password (stored in the prometheus-grafana secret)
kubectl get secret --namespace monitoring prometheus-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo