Redis Eviction Policies: A Friendly Guide to Approximate LRU

If you’ve ever used Redis, you know it’s lightning-fast. But like any in-memory data store, it faces a big challenge: what happens when it runs out of memory? Redis handles this with eviction policies, which decide which keys to remove when memory is full. One of the most popular strategies is the Least Recently Used (LRU) algorithm. But here’s the twist: Redis doesn’t use the textbook version of LRU. Instead, it uses something called approximate LRU, which I recently came across while reading about Redis internals. In this article, I’ll break down what that means, how it works, and why Redis chose this approach.
What Are Redis Eviction Policies?
When Redis (or any other cache) hits its memory limit, it needs to free up space. That’s where eviction policies come in. These policies decide which keys get the boot. Here are the main ones Redis offers:
noeviction: Redis just says, “Nope, I’m full!” and throws an error when you try to add more data.
allkeys-lru: Redis removes the least recently used keys, no matter what.
volatile-lru: Redis only removes the least recently used keys that have an expiration time set.
allkeys-random: Redis randomly picks keys to evict.
volatile-random: Redis randomly picks keys to evict, but only among those with an expiration time.
volatile-ttl: Redis removes keys with the shortest time-to-live (TTL) first.
Out of these, LRU-based eviction (like allkeys-lru and volatile-lru) is a fan favorite because it makes sense: if a key hasn’t been used in a while, it’s probably safe to remove.
What’s LRU, Anyway?
The Least Recently Used (LRU) algorithm is a classic way to manage caches. The idea is simple: the keys you’ve used most recently are likely to be used again soon, while the ones you haven’t touched in a while are probably not needed. So, when memory is full, LRU evicts the key that was used the longest time ago.
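To make the textbook idea concrete, here’s a minimal sketch of an exact LRU cache in Python. This is just an illustration of the classic algorithm, not Redis code:

```python
from collections import OrderedDict

class LRUCache:
    """Textbook LRU: always evict the single least recently used key."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()  # order of keys tracks recency of use

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least recently used key

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # "a" is now the most recently used
cache.put("c", 3)      # capacity exceeded: evicts "b", the least recently used
print(cache.get("b"))  # None
print(cache.get("a"))  # 1
```

Notice that this exact version has to maintain an ordered record of every key on every access — precisely the bookkeeping cost that Redis wants to avoid.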
In a perfect world, Redis would keep track of exactly when every key was last used. But here’s the problem: doing that requires a lot of memory and processing power. For a system like Redis, which is all about speed and efficiency, that’s not ideal.
Enter Approximate LRU: The Practical Solution
Instead of tracking every single key’s access time, Redis uses an approximate LRU algorithm. It’s like saying, “I’ll take a good guess instead of being 100% accurate.” Here’s how it works:
Sampling: When Redis needs to free up memory, it randomly picks a few keys (by default, 5) from the dataset.
Comparison: It checks which of these keys was used the longest time ago.
Eviction: That key gets evicted.
You can tweak how many keys Redis samples by adjusting the maxmemory-samples setting. Sampling more keys makes the algorithm more accurate but also uses more CPU.
Example Time!
Let’s say Redis is set to sample 5 keys (maxmemory-samples 5). When memory is full, Redis will:
Pick 5 keys at random.
Check which one was used the longest time ago.
Evict that key.
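The three steps above can be sketched in plain Python. This is a toy simulation of the sampling idea only, not Redis’s actual C implementation, and it uses logical timestamps in place of Redis’s internal clock:

```python
import random

def evict_one(last_access, samples=5):
    """Approximate LRU: sample a few keys at random and evict the
    one that was accessed the longest time ago.

    last_access maps key -> logical timestamp of its last use.
    Returns the evicted key.
    """
    candidates = random.sample(list(last_access), min(samples, len(last_access)))
    victim = min(candidates, key=lambda k: last_access[k])  # oldest in the sample
    del last_access[victim]
    return victim

# Toy keyspace: "k0" was last touched at time 0, "k1" at time 1, and so on.
keys = {f"k{i}": i for i in range(20)}
evicted = evict_one(keys, samples=5)
print(evicted)  # a key with a low timestamp within the sample, not necessarily k0
```

Because only a handful of keys are examined, the evicted key is usually old but not guaranteed to be the oldest overall — that’s the “approximate” part, and raising `samples` shrinks the gap.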
Why Does Redis Use Approximate LRU?
Redis is all about speed and efficiency. The theoretical LRU algorithm is accurate, but it comes with a cost: it uses more memory and CPU. Approximate LRU, on the other hand, is a great compromise:
Less Memory Usage: Redis doesn’t need to keep track of every key’s access time.
Faster Performance: Sampling a few keys is much quicker than maintaining a detailed access history.
Customizable: You can tweak the maxmemory-samples setting to balance accuracy and performance.
In real-world scenarios, the slight loss in accuracy is usually worth the performance gains. After all, Redis is designed to handle massive workloads, and every bit of efficiency counts.
How to Configure Approximate LRU in Redis
If you want to use approximate LRU in Redis, here’s what you need to do:
Set the eviction policy in your Redis configuration file. For example:
maxmemory-policy allkeys-lru
This tells Redis to use LRU eviction for all keys.
Optionally, adjust the maxmemory-samples setting to control how many keys Redis samples:
maxmemory-samples 10
Increasing this number makes the algorithm more accurate but also uses more CPU.
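If you’d rather not edit the configuration file, both settings can also be changed at runtime with the standard CONFIG command (this sketch assumes a Redis server is running locally):

```
redis-cli CONFIG SET maxmemory-policy allkeys-lru
redis-cli CONFIG SET maxmemory-samples 10
redis-cli CONFIG GET maxmemory-policy
```

Runtime changes like these take effect immediately but don’t persist across restarts unless you also update the configuration file.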
Wrapping Up
Redis’s approximate LRU algorithm is a smart, practical solution for managing memory. It might not be perfect, but it’s fast, efficient, and works well for most use cases. By understanding how it works and how to configure it, you can make sure your Redis instance runs smoothly, even when memory is tight.
If you’re using Redis in production, play around with the maxmemory-samples setting and see how it affects performance and eviction accuracy. A little tuning can go a long way!
Thanks
Gunish