Redis Sentinel Made Simple: Hands-On High Availability


High availability is no longer a luxury — it’s a survival kit for modern applications. Databases crash, servers die, containers get killed (sometimes by accident, sometimes by design). In the world of Redis, Sentinel is the quiet guardian that keeps your cache cluster alive when chaos happens.
In this article, I’ll walk you through Redis Sentinel step by step, with a runnable Docker demo and a Spring Boot integration example. By the end, you’ll see failover happening live — and how your application can recover without manual intervention.
1. Introduction
Why does Redis Sentinel matter?
Picture this: you’ve got Redis set up with one master and a couple of replicas. Everything’s smooth… until the master suddenly crashes. Now what? Who decides which replica should take over? Who makes sure your clients know where to connect? 👉 That’s exactly the job Sentinel handles for you.
Monitors your Redis instances.
Notifies you when something goes wrong.
Automatically promotes a replica to master.
Redirects clients to the new master.
Sentinel is the difference between a cache outage and a smooth failover.
2. What is Redis Sentinel?
At its core, Redis Sentinel is a distributed system that provides:
Monitoring – constantly checking whether your master and replicas are alive.
Notification – alerting operators (or systems) when something goes wrong.
Automatic Failover – promoting a replica when the master is unavailable.
Client Redirection – letting apps connect to the new master automatically.
3. Sentinel Architecture
A Sentinel deployment usually includes multiple Sentinel nodes plus your Redis master and replicas. Sentinels work together, reaching quorum before deciding a master is truly dead.
Key concepts:
SDOWN (Subjectively Down): One Sentinel thinks the master is down.
ODOWN (Objectively Down): Enough Sentinels agree the master is down.
Replica Priority: Determines which replica should be promoted first.
Deployment Diagram
+-------------------+           +-------------------+
|    Sentinel #1    |           |    Sentinel #2    |
+-------------------+           +-------------------+
           \                         /
            \                       /
             \     Quorum Vote     /
              \                   /
              +-------------------+
              |    Sentinel #3    |
              +-------------------+
                        |
                        v
              +-------------------+
              |   Redis Master    |
              +-------------------+
                   /         \
                  v           v
     +----------------+   +----------------+
     | Redis Replica1 |   | Redis Replica2 |
     +----------------+   +----------------+
4. Setting Up Redis Sentinel
We use Docker Compose with one master, two replicas, and three Sentinels.
Redis Sentinel Config
sentinel announce-ip "127.0.0.1"
sentinel announce-port 26379
# Sentinel 6.2 and above can resolve host names, but this is not enabled by default.
sentinel resolve-hostnames yes
# Monitor the master named "mymaster" at 127.0.0.1 (or a hostname):6379 with a quorum of 2
sentinel monitor mymaster 127.0.0.1 6379 2
# Master is considered down after 5 seconds of no response
sentinel down-after-milliseconds mymaster 5000
# Failover timeout 18 seconds
sentinel failover-timeout mymaster 18000
# The lines below 'Generated by CONFIG REWRITE' are managed by Sentinel itself (the config file must be writable)
# Generated by CONFIG REWRITE
Ways to Run Sentinel:
redis-sentinel /etc/redis/sentinel.conf
# or
redis-server /etc/redis/sentinel.conf --sentinel
Useful Redis CLI commands:
# Start monitoring a new master.
SENTINEL MONITOR <master name> <ip> <port> <quorum>
# Stop monitoring a master.
SENTINEL REMOVE <master name>
# Change a monitoring configuration parameter.
SENTINEL SET <master name> <option> <value>
# (>= 5.0) Show a list of replicas for this master, and their state.
SENTINEL REPLICAS <master name>
# Show a list of the other Sentinel instances for this master, and their state.
SENTINEL SENTINELS <master name>
# Force a failover as if the master were unreachable, without asking the other Sentinels for agreement.
# (A new version of the configuration is still published so that the other Sentinels update theirs;
# this is called "configuration propagation".)
SENTINEL FAILOVER <master name>
# Display information about this instance, including its role (master, replica, or sentinel).
INFO
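The same commands can also be issued from application code. Below is a minimal, hypothetical sketch using the Lettuce client (the client Spring Boot uses under the hood); it assumes a Sentinel is reachable at 127.0.0.1:26379 and the master set is named mymaster, as in the config above. The class name SentinelCommandsDemo is just for illustration.
import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisURI;
import io.lettuce.core.sentinel.api.StatefulRedisSentinelConnection;

import java.net.SocketAddress;
import java.util.Map;

public class SentinelCommandsDemo {

    public static void main(String[] args) {
        RedisClient client = RedisClient.create();
        // Connect to the Sentinel itself (port 26379), not to a data node
        try (StatefulRedisSentinelConnection<String, String> conn =
                     client.connectSentinel(RedisURI.Builder.sentinel("127.0.0.1", 26379).build())) {

            // SENTINEL GET-MASTER-ADDR-BY-NAME: where is the current master?
            SocketAddress master = conn.sync().getMasterAddrByName("mymaster");
            System.out.println("Current master: " + master);

            // SENTINEL MASTER: the monitoring parameters as configured (quorum, down-after-milliseconds, ...)
            Map<String, String> info = conn.sync().master("mymaster");
            System.out.println("quorum=" + info.get("quorum")
                    + ", down-after-ms=" + info.get("down-after-milliseconds"));

            // SENTINEL FAILOVER: force a manual failover (uncomment to try it)
            // conn.sync().failover("mymaster");
        } finally {
            client.shutdown();
        }
    }
}
Reading back SENTINEL MASTER is a quick sanity check that the sentinel.conf shown above was actually loaded.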
Docker Compose (excerpt: the redis-sentinel-1 service; the other Sentinels follow the same pattern):
redis-sentinel-1:
  image: bitnami/redis-sentinel:8.0.3
  container_name: redis-sentinel-1
  ports:
    # Sentinel, Docker, NAT, and possible issues. Keep the port mapping 1:1.
    - "26379:26379"
  environment:
    - ALLOW_EMPTY_PASSWORD=yes
  volumes:
    # Use with caution regarding permissions.
    - redis-sentinel-1-data:/bitnami/redis-sentinel
    - ./redis-sentinel-1:/usr/local/etc/redis-sentinel
  # Sentinel, Docker, NAT, and possible issues. Use host networking for maximum compatibility.
  network_mode: host
  depends_on:
    - redis-master
    - redis-replica-1
    - redis-replica-2
  restart: unless-stopped
  command: ["redis-sentinel", "/usr/local/etc/redis-sentinel/sentinel.conf"]
5. Redis Docker Demo
Clone the demo project:
git clone https://github.com/arata-x/redis-ha.git
Docker Setup/Run
cd redis-ha/docker/redis/sentinel
docker-compose up
Simulate master crash:
docker kill redis-master
The Sentinels detect the failure, reach quorum, and promote a replica to complete the failover.
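To watch the failover from a client's point of view, you can keep a small loop writing through a Sentinel-aware connection while you kill the master. This is a hypothetical sketch using Lettuce's MasterReplica API; it assumes one Sentinel is reachable at 127.0.0.1:26379 and the master set is named mymaster (adjust hosts and ports to your compose setup). The class name FailoverRideThrough is illustrative only; in the Spring Boot app below, the starter handles all of this for you.
import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisURI;
import io.lettuce.core.codec.StringCodec;
import io.lettuce.core.masterreplica.MasterReplica;
import io.lettuce.core.masterreplica.StatefulRedisMasterReplicaConnection;

public class FailoverRideThrough {

    public static void main(String[] args) throws InterruptedException {
        RedisClient client = RedisClient.create();
        RedisURI sentinelUri = RedisURI.Builder
                .sentinel("127.0.0.1", 26379, "mymaster") // Sentinel address + master set name
                .build();

        // The topology is discovered from Sentinel, so the connection follows the master after failover
        StatefulRedisMasterReplicaConnection<String, String> conn =
                MasterReplica.connect(client, StringCodec.UTF8, sentinelUri);

        for (int i = 0; i < 120; i++) {          // run for ~2 minutes; kill redis-master meanwhile
            try {
                conn.sync().set("ha:counter", String.valueOf(i));
                System.out.println("write " + i + " ok -> " + conn.sync().get("ha:counter"));
            } catch (Exception e) {
                // Expect a short burst of errors between +sdown and +switch-master
                System.out.println("write " + i + " failed: " + e.getMessage());
            }
            Thread.sleep(1000);
        }
        conn.close();
        client.shutdown();
    }
}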
6. Spring Boot Integration
Spring Boot supports Sentinel natively via spring-boot-starter-data-redis (here we use the reactive variant). Here's how to configure it.
pom.xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis-reactive</artifactId>
</dependency>
application.yml
spring:
  data:
    redis:
      sentinel:
        # Name of the monitored master set (must match "sentinel monitor <name> ..."), not a hostname
        master: mymaster
        nodes:
          - redis-sentinel-1:26379
          - redis-sentinel-2:26379
          - redis-sentinel-3:26379
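With this in place, the auto-configured ReactiveStringRedisTemplate already talks to whichever node is currently the master; your own code needs no failover logic. A minimal sketch of a probe service (the CacheProbe name and key are hypothetical) could look like this:
import java.time.Instant;

import org.springframework.data.redis.core.ReactiveStringRedisTemplate;
import org.springframework.stereotype.Service;

import reactor.core.publisher.Mono;

@Service
public class CacheProbe {

    private final ReactiveStringRedisTemplate redis;

    public CacheProbe(ReactiveStringRedisTemplate redis) {
        this.redis = redis; // auto-configured by spring-boot-starter-data-redis-reactive
    }

    // Writes a timestamp and reads it back; keeps working across a Sentinel failover
    public Mono<String> roundTrip() {
        return redis.opsForValue()
                .set("ha:probe", Instant.now().toString())
                .then(redis.opsForValue().get("ha:probe"));
    }
}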
Spring Boot Config for Sentinel Pub/Sub Messages (Optional)
@Bean(destroyMethod = "shutdown")
public RedisClient sentinelClient() {
    // Connect to a Sentinel (port 26379), not to a data node
    return RedisClient.create("redis://127.0.0.1:26379");
}

@Bean(destroyMethod = "close")
public StatefulRedisPubSubConnection<String, String> sentinelPubSub(RedisClient client) {
    var conn = client.connectPubSub();
    conn.addListener(new RedisPubSubAdapter<>() {
        @Override
        public void message(String channel, String message) {
            log.info("Sentinel event [{}] {}", channel, message);
        }
    });
    // Subscribe to key Sentinel event channels. To catch everything, including the
    // +failover-state-* channels, use conn.sync().psubscribe("*") instead: patterns
    // only match with PSUBSCRIBE, and pattern messages arrive via the three-argument
    // message(pattern, channel, message) callback.
    conn.sync().subscribe(
            "+switch-master",   // master changed
            "+sdown", "-sdown", // subjective down / cleared
            "+odown", "-odown", // objective down / cleared (masters only)
            "+try-failover"
    );
    return conn;
}
With the Sentinel configuration above, clients automatically reconnect to the new master after a failover, and the pub/sub listener logs Sentinel events as they happen.
7. Testing Failover & Logs
Failover Timeline
t0: Master alive
t1: Master killed -> SDOWN
t2: Quorum reached -> ODOWN
t3: Leader elected -> VOTE
t4: Replica promoted -> PROMOTE
t5: New master active -> CLIENTS REDIRECT
t6: Remaining replica reconfigured -> SLAVE
t7: Old master back -> SLAVE
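You can reproduce this timeline from the client side by polling a Sentinel for the master address once a second; the reported address changes right after +switch-master fires. A small hypothetical watcher (Lettuce, same host/port/master-name assumptions as earlier):
import java.net.SocketAddress;

import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisURI;
import io.lettuce.core.sentinel.api.StatefulRedisSentinelConnection;

public class MasterWatcher {

    public static void main(String[] args) throws InterruptedException {
        RedisClient client = RedisClient.create();
        StatefulRedisSentinelConnection<String, String> conn =
                client.connectSentinel(RedisURI.Builder.sentinel("127.0.0.1", 26379).build());

        SocketAddress last = null;
        while (true) {
            SocketAddress current = conn.sync().getMasterAddrByName("mymaster");
            if (!current.equals(last)) {
                // Printed once at startup, then again right after the failover completes
                System.out.println("master is now " + current);
                last = current;
            }
            Thread.sleep(1000);
        }
    }
}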
Docker logs
redis-sentinel-1 | 1:X 24 Aug 2025 01:29:56.652 * Sentinel ID is 45f2090cc345fd2a0a9afad89d45d3c212816390
redis-sentinel-3 | 1:X 24 Aug 2025 01:29:56.670 * Sentinel ID is 72098a7942ff006106511dbb0db3044b00fa5473
redis-sentinel-2 | 1:X 24 Aug 2025 01:29:56.690 * Sentinel ID is b87c2be6edf6192e03783f1ed1647af7fa2b51f6
# Simulate the master going down with 'docker container kill redis-master'; the failover starts.
redis-sentinel-1 | 1:X 24 Aug 2025 01:30:32.047 # +sdown master mymaster redis-master 6379
redis-sentinel-2 | 1:X 24 Aug 2025 01:30:32.067 # +sdown master mymaster redis-master 6379
redis-sentinel-3 | 1:X 24 Aug 2025 01:30:32.107 # +sdown master mymaster redis-master 6379
redis-sentinel-2 | 1:X 24 Aug 2025 01:30:32.144 # +odown master mymaster redis-master 6379 #quorum 2/2
redis-sentinel-2 | 1:X 24 Aug 2025 01:30:32.144 # +try-failover master mymaster redis-master 6379
redis-sentinel-2 | 1:X 24 Aug 2025 01:30:32.151 # +vote-for-leader b87c2be6edf6192e03783f1ed1647af7fa2b51f6 1
redis-sentinel-3 | 1:X 24 Aug 2025 01:30:32.166 # +vote-for-leader b87c2be6edf6192e03783f1ed1647af7fa2b51f6 1
redis-sentinel-1 | 1:X 24 Aug 2025 01:30:32.167 # +vote-for-leader b87c2be6edf6192e03783f1ed1647af7fa2b51f6 1
redis-sentinel-2 | 1:X 24 Aug 2025 01:30:33.215 # +promoted-slave slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster redis-master 6379
redis-sentinel-2 | 1:X 24 Aug 2025 01:30:32.244 # +elected-leader master mymaster redis-master 6379
redis-sentinel-2 | 1:X 24 Aug 2025 01:30:32.244 # +failover-state-select-slave master mymaster redis-master 6379
redis-sentinel-2 | 1:X 24 Aug 2025 01:30:32.299 # +selected-slave slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster redis-master 6379
redis-sentinel-2 | 1:X 24 Aug 2025 01:30:32.299 * +failover-state-send-slaveof-noone slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster redis-master 6379
redis-sentinel-3 | 1:X 24 Aug 2025 01:30:33.263 # +switch-master mymaster redis-master 6379 127.0.0.1 6381
redis-sentinel-3 | 1:X 24 Aug 2025 01:30:33.264 * +slave slave redis-master:6379 redis-master 6379 @ mymaster 127.0.0.1 6381
# Restore the master with 'docker container start redis-master' and it rejoins as a replica.
redis-sentinel-2 | 1:X 24 Aug 2025 01:30:34.096 # -sdown slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6380
redis-master | 1:S 24 Aug 2025 01:30:34.236 * Before turning into a replica, using my own master parameters to synthesize a cached master: I may be able to synchronize with the new master with just a partial transfer.
redis-master | 1:S 24 Aug 2025 01:30:34.236 * Connecting to MASTER 127.0.0.1:6380
redis-sentinel-1 | 1:X 24 Aug 2025 01:30:34.236 * +convert-to-slave slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6380
redis-replica-1 | 1:M 24 Aug 2025 01:30:34.447 * Synchronization with replica 127.0.0.1:6379 succeeded
redis-master | 1:S 24 Aug 2025 01:30:34.447 * MASTER <-> REPLICA sync: Successfully streamed replication buffer into the db (0 bytes in total)
Sentinel Event List
+slave -- A new replica was detected and attached.
+sdown -- The specified instance is now in Subjectively Down state.
+odown -- The specified instance is now in Objectively Down state.
+try-failover -- New failover in progress, waiting to be elected by the majority.
+elected-leader -- Won the election for the specified epoch, can do the failover.
+failover-state-select-slave -- New failover state is select-slave: we are trying to find a suitable replica for promotion.
Spring Boot logs from the Sentinel pub/sub listener
2025-08-24T01:34:46.946+08:00 INFO 44256 --- [redis-reactive-demo] [ioEventLoop-7-1] d.a.redis.config.RedisConfigSentinel : Sentinel event [+sdown] master mymaster 127.0.0.1 6379
2025-08-24T01:34:48.055+08:00 INFO 44256 --- [redis-reactive-demo] [ioEventLoop-7-1] d.a.redis.config.RedisConfigSentinel : Sentinel event [+odown] master mymaster 127.0.0.1 6379 #quorum 3/2
2025-08-24T01:34:48.176+08:00 INFO 44256 --- [redis-reactive-demo] [ioEventLoop-7-1] d.a.redis.config.RedisConfigSentinel : Sentinel event [+switch-master] mymaster 127.0.0.1 6379 127.0.0.1 6381
8. Best Practices
Run at least 3 Sentinels.
Distribute Sentinels across nodes for resilience.
Tune failover-timeout and down-after-milliseconds.
9. Final Thoughts
🚦 Think of Redis Sentinel as your system’s insurance policy. Most of the time, you’ll never notice it quietly standing guard in the background. But the moment your master node takes a dive, Sentinel steps in to keep traffic flowing — and you’ll be very glad it was there all along.
👉 Use Sentinel when you want simple, lightweight high availability. It doesn’t complicate your setup and gets the job done for most HA needs.
⚡ But if your workload demands both horizontal scaling (sharding) and HA, that’s where Redis Cluster shines. Sentinel won’t replace Cluster — they solve different problems.
🔗 Demo project: redis-ha (https://github.com/arata-x/redis-ha)