Day 43 of 90 Days of DevOps Challenge: Setting up Prometheus Alertmanager


Yesterday, I built customized Grafana dashboards that visualize key Kubernetes metrics using PromQL, a huge leap in observability.
But observability is incomplete without a way to get notified when things go wrong.
That’s where Alertmanager steps in.
What is Alertmanager?
While Prometheus is great at collecting metrics and evaluating alerting rules, it doesn’t handle notification delivery on its own.
That responsibility belongs to Alertmanager.
Here’s what Alertmanager does:
Deduplicates alerts (to prevent spam)
Groups related alerts (e.g., multiple alerts from the same pod)
Routes alerts to your desired destinations: email, Slack, PagerDuty, Opsgenie, and more
Silences alerts temporarily during known outages or maintenance
Inhibits alerts (e.g., suppress low-priority alerts if a critical alert exists)
This makes it a critical component in any production-grade monitoring pipeline.
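For context, these alerts originate as Prometheus alerting rules. A minimal rule that would fire into Alertmanager might look like this (the alert name, metric, and threshold here are illustrative):
groups:
  - name: example-alerts
    rules:
      - alert: HighPodMemory                               # illustrative alert name
        expr: container_memory_working_set_bytes > 1e9     # illustrative threshold
        for: 5m                                            # must hold for 5 minutes before firing
        labels:
          severity: warning
        annotations:
          summary: "Pod memory usage is high"
Once this rule fires, Prometheus hands the alert to Alertmanager, which takes over everything described above.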
Installing and Configuring Alertmanager
If you’re using the Prometheus Operator or the kube-prometheus-stack Helm chart, Alertmanager is often installed by default.
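If it isn’t installed yet, a typical Helm setup looks roughly like this (release name and namespace are illustrative):
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/kube-prometheus-stack -n monitoring --create-namespace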
Sample Alertmanager Configuration
Below is a minimal setup to send alerts to Slack:
route:
  receiver: 'slack-notifications'

receivers:
  - name: 'slack-notifications'
    slack_configs:
      - api_url: 'https://hooks.slack.com/services/XXX/YYY/ZZZ'
        channel: '#devops-alerts'
This YAML defines:
A route: directs all alerts to the slack-notifications receiver
A receiver: contains a slack_configs block pointing to the Slack webhook URL
NOTE: Replace XXX/YYY/ZZZ with your actual Slack webhook ID.
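Before deploying, it’s worth validating the file locally. amtool, which ships with Alertmanager, can check it (assuming amtool is on your PATH):
amtool check-config alertmanager.yml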
Deploying Alertmanager Config
Depending on how you installed Prometheus:
Option 1: Using ConfigMap (Kubernetes Native)
kubectl create configmap alertmanager-config \
  --from-file=alertmanager.yml \
  -n monitoring
Then update your Alertmanager StatefulSet or Pod to mount this ConfigMap.
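As a sketch, the relevant part of the pod spec might look like this (the volume name is illustrative; the official image typically reads /etc/alertmanager/alertmanager.yml):
spec:
  containers:
    - name: alertmanager
      volumeMounts:
        - name: config-volume
          mountPath: /etc/alertmanager      # where Alertmanager expects its config
  volumes:
    - name: config-volume
      configMap:
        name: alertmanager-config           # the ConfigMap created above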
Option 2: Using Helm
If you’re using the kube-prometheus-stack, you can provide the configuration in your values.yaml:
alertmanager:
  config:
    route:
      receiver: 'slack-notifications'
    receivers:
      - name: 'slack-notifications'
        slack_configs:
          - api_url: 'https://hooks.slack.com/services/XXX'
            channel: '#devops-alerts'
Then run:
helm upgrade --install prometheus prometheus-community/kube-prometheus-stack -f values.yaml
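To confirm the new configuration rolled out, check that the Alertmanager pods are running (the label below is the kube-prometheus-stack default and may differ in your setup):
kubectl get pods -n monitoring -l app.kubernetes.io/name=alertmanager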
Advanced Alert Routing
Alertmanager supports routing trees that direct alerts based on:
Severity (warning, critical)
Cluster or environment (prod, dev)
App labels (e.g., app=kube-apiserver)
Example:
route:
  receiver: 'slack-notifications'   # default receiver, required at the root; catches anything no child route matches
  group_by: ['alertname']
  group_wait: 10s
  group_interval: 5m
  repeat_interval: 3h
  routes:
    - match:
        severity: 'critical'
      receiver: 'pagerduty'
    - match:
        severity: 'warning'
      receiver: 'slack-notifications'
This setup:
Sends critical alerts to PagerDuty
Sends warnings to Slack
Groups and delays alerts to avoid flapping
Silence and Inhibition
Sometimes you want to mute alerts temporarily, for example during maintenance windows.
Silencing Alerts
Visit the Alertmanager UI (check your Grafana or Prometheus dashboard for the link)
Click Silences → Create Silence
Define matchers and time windows
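If you prefer the command line, you can port-forward to the UI and create a silence with amtool (the alertmanager-operated service is created by the Prometheus Operator; the matcher and duration below are illustrative):
kubectl port-forward svc/alertmanager-operated 9093 -n monitoring
amtool silence add alertname="HighPodMemory" \
  --comment="planned maintenance" \
  --duration="2h" \
  --alertmanager.url=http://localhost:9093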
Inhibit Rules
You can suppress low-priority alerts if a high-priority one is active (e.g., suppress "Disk usage 70%" when "Disk full" is active).
inhibit_rules:
  - source_match:
      severity: 'critical'               # if an alert with this label is firing...
    target_match:
      severity: 'warning'                # ...suppress alerts with this label...
    equal: ['alertname', 'instance']     # ...but only when both alerts share these label values
Final Thoughts
With Alertmanager now in place, my observability pipeline is complete: Prometheus handles metrics collection and alert rules, Grafana provides rich visual dashboards, and Alertmanager ensures real-time notifications when issues arise. This setup transforms raw monitoring data into actionable insights, enabling proactive infrastructure management.
As I move forward in my #90DaysOfDevOps journey, I’ll shift focus from metrics to centralized logging by exploring powerful logging stacks like EFK (Elasticsearch, Fluentd, Kibana) and ELK (Elasticsearch, Logstash, Kibana), adding deeper visibility into system and application logs to complete the observability picture.
Stay tuned as we bring log data into the observability equation!