How to Cache Expensive Function Calls in Python — The Smart Way


Have you ever called a function repeatedly with the same inputs — knowing it does the same work every time? If yes, you’re wasting CPU cycles. Let’s fix that with caching.
In this post, we'll explore:

- What caching is
- Why it improves performance
- How to implement it with `functools.lru_cache`
- A real-world use case
🚀 The Problem
Imagine you have a slow function that calculates the Nth Fibonacci number:
```python
def slow_fib(n):
    if n <= 1:
        return n
    return slow_fib(n - 1) + slow_fib(n - 2)
```
This is painfully slow for larger n because the same subproblems are recomputed over and over: the number of recursive calls grows exponentially with n.
```python
import time

start = time.time()
print(slow_fib(35))
print("Took", time.time() - start, "seconds")
```
⚡️ The Fix: Built-in Caching
Python has a built-in caching decorator, `lru_cache`, in the `functools` module. Let's use it:
```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fast_fib(n):
    if n <= 1:
        return n
    return fast_fib(n - 1) + fast_fib(n - 2)
```
Try it now:
```python
start = time.time()
print(fast_fib(35))
print("Took", time.time() - start, "seconds")
```
⚡ Boom — instant result.
🧰 How lru_cache Works
- It remembers the output of a function call for each unique set of arguments.
- `maxsize` determines how many recent results are kept; `maxsize=None` means the cache grows without bound.
- It's perfect for pure functions (no side effects, same output for same input).
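To see the eviction policy in action, here's a minimal sketch with a deliberately tiny cache (the `square` function is just a made-up example):

```python
from functools import lru_cache

@lru_cache(maxsize=2)  # keep only the 2 most recently used results
def square(n):
    print(f"computing square({n})")
    return n * n

square(1)  # miss: computed and cached
square(2)  # miss: computed and cached
square(1)  # hit: returned straight from the cache
square(3)  # miss: evicts square(2), the least recently used entry
square(2)  # miss again, because it was evicted

print(square.cache_info())
# CacheInfo(hits=1, misses=4, maxsize=2, currsize=2)
```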
You can also inspect the cache stats:
```python
print(fast_fib.cache_info())
# CacheInfo(hits=33, misses=36, maxsize=None, currsize=36)
```
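And if the cached values ever need to be thrown away, `cache_clear()` resets everything:

```python
fast_fib.cache_clear()
print(fast_fib.cache_info())
# CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
```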
🛠 Real-World Use Case: API Calls with Expensive Parsing
Say you're calling an API that returns a large JSON payload, and you want to avoid reprocessing the same user twice:
```python
import json
import requests

@lru_cache(maxsize=128)
def get_user_details(user_id):
    response = requests.get(f"https://api.example.com/users/{user_id}")
    return json.loads(response.text)
```
This saves you network time, especially during testing or re-runs.
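A quick way to confirm it's working (user ID 42 is just a placeholder): the second call with the same ID returns the cached result without touching the network.

```python
details = get_user_details(42)        # first call: real HTTP request
details_again = get_user_details(42)  # same ID: served from the cache
print(get_user_details.cache_info())  # hits=1, misses=1
```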
⚠️ When Not to Use Caching
- When data changes frequently (e.g., prices, stocks, real-time info)
- When the function has side effects (e.g., writing to a file or database)
- When inputs are large or unhashable (e.g., NumPy arrays or custom objects without `__hash__`; see the sketch below)
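That last point trips people up: `lru_cache` stores results keyed by the arguments, so every argument must be hashable. Here's a minimal sketch of the failure and a common workaround (passing a tuple instead of a list):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def total(numbers):
    return sum(numbers)

# total([1, 2, 3])        # TypeError: unhashable type: 'list'
print(total((1, 2, 3)))   # works: tuples are hashable -> 6
print(total((1, 2, 3)))   # second call is a cache hit
```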
🧵 Wrapping Up
Caching can boost performance dramatically with just one line of code. Python makes it easy, but use it wisely.
💬 Have you optimized something with `lru_cache` or custom caching? Share your experience!