Why Writing 100 Lines of Code Might Be More Effective Than Just 2


"Writing 100 lines of code can be better than just 2." When I first heard this line, I thought someone had gone crazy again, but after understanding the concept behind the statement, now I am saying the same thing.
Before we dig in, let's talk about the parameters we'll use to judge whether code is good or bad. Two come to mind: how efficiently does the code run, and how much computation does it need to reach the final answer? Based on these, let's judge two different solutions to the same problem and decide which one is better.
Problem: Find the Nth Fibonacci Number
Fibonacci sequence:
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...
Where:
F(0) = 0
F(1) = 1
F(n) = F(n-1) + F(n-2)
Naive Recursive Approach
```javascript
function fib(n) {
  if (n <= 1) return n;
  return fib(n - 1) + fib(n - 2);
}

console.log(fib(10)); // Output: 55
```
The naive recursive approach has exponential time complexity, O(2ⁿ) in Big O notation. That is manageable for small n (like 10), but it becomes extremely slow for larger n (like 50), taking seconds or even minutes to compute.
Let's see exactly why any approach with O(2ⁿ) time complexity is very slow for large values of n.
| n | Steps (2ⁿ) |
| --- | --- |
| 1 | 2 |
| 2 | 4 |
| 3 | 8 |
| 4 | 16 |
| 5 | 32 |
| 10 | 1,024 |
| 20 | 1,048,576 |
| 50 | 1,125,899,906,842,624 (more than a quadrillion!) |
When n gets big, the steps explode, doubling every time. That's why O(2ⁿ) is very slow for large n.
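You can verify this blow-up empirically by counting calls. Below is a small sketch (the `countedFib` name and the counter are my own additions, not part of the article's code). The exact call count of the naive recursion is somewhat below 2ⁿ, but it grows in the same explosive way:

```javascript
// Naive Fibonacci instrumented with a call counter, so we can
// measure how much work the exponential recursion really does.
let calls = 0;

function countedFib(n) {
  calls++; // count every invocation
  if (n <= 1) return n;
  return countedFib(n - 1) + countedFib(n - 2);
}

calls = 0;
countedFib(10);
console.log(calls); // 177 calls for n = 10

calls = 0;
countedFib(20);
console.log(calls); // 21891 calls for n = 20
```

Going from n = 10 to n = 20 multiplies the work by more than a hundred, which is exactly the "doubling every step" behavior described above.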
Compare O(2ⁿ) with O(n) (which is much better):

| n | O(2ⁿ) steps | O(n) steps |
| --- | --- | --- |
| 5 | 32 | 5 |
| 10 | 1,024 | 10 |
| 50 | more than a quadrillion | 50 |
What's happening internally?
- It repeatedly recalculates the same values; for example, fib(5) computes fib(3) multiple times.

Let's visualize fib(5):
```
fib(5)
├── fib(4)
│   ├── fib(3)
│   │   ├── fib(2)
│   │   └── fib(1)
│   └── fib(2)
└── fib(3)
    ├── fib(2)
    └── fib(1)
```
See? fib(2) and fib(3) are calculated over and over. That's wasted time and wasted CPU effort.
Optimized Using Dynamic Programming (Memoization, O(n))

```javascript
function fibMemo(n, memo = {}) {
  if (n <= 1) return n;
  if (memo[n]) return memo[n]; // ✅ return the saved result
  memo[n] = fibMemo(n - 1, memo) + fibMemo(n - 2, memo);
  return memo[n];
}
```
Why is this better?
- Time complexity: O(n), linear time. Each Fibonacci number is calculated only once, so it runs instantly even for n = 50.
What's happening internally?
- It uses a memo object to save results: when it calculates fib(3), it saves the result, and the next time it needs fib(3) it simply fetches it from memory instead of recalculating.
Visualization for fibMemo(5):

```
fibMemo(5)
├── fibMemo(4)
│   ├── fibMemo(3)
│   │   ├── fibMemo(2)
│   │   └── fibMemo(1) ✔️ cached next time
│   └── fibMemo(2) ✔️ cached
└── fibMemo(3) ✔️ cached
```

All repeated calls are replaced with cached (✔️) results.
That's why it becomes super fast!
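To see this speed in practice, here is a small usage sketch (re-declaring fibMemo so the snippet runs on its own):

```javascript
// Memoized Fibonacci: each value is computed once, then reused.
function fibMemo(n, memo = {}) {
  if (n <= 1) return n;
  if (memo[n]) return memo[n]; // return the saved result
  memo[n] = fibMemo(n - 1, memo) + fibMemo(n - 2, memo);
  return memo[n];
}

console.log(fibMemo(10)); // 55, same answer as the naive version
console.log(fibMemo(50)); // 12586269025, returned almost instantly
```

The naive fib(50) would make over a quadrillion calls; fibMemo(50) makes fewer than a hundred.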
Side-by-Side Comparison:

| Aspect | Naive Recursion | Memoization (DP) |
| --- | --- | --- |
| Time complexity | O(2ⁿ): very slow for big n | O(n): very fast, even for big n |
| Space complexity | O(n) (recursion stack) | O(n) (memo object) |
| Recalculations | Yes (repeats work) | No (reuses cached results) |
| Code length | Short (looks simple) | Slightly longer (but smarter) |
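As a side note (my own addition, not part of the comparison above), the same O(n) time can also be achieved iteratively with bottom-up DP, which even shrinks the extra space to O(1) by keeping only the last two values instead of a memo object and a recursion stack. A minimal sketch:

```javascript
// Bottom-up Fibonacci: build up from F(0) and F(1),
// keeping only the last two values, so extra space is O(1).
function fibIter(n) {
  if (n <= 1) return n;
  let prev = 0, curr = 1; // F(0) and F(1)
  for (let i = 2; i <= n; i++) {
    [prev, curr] = [curr, prev + curr]; // slide the window forward
  }
  return curr;
}

console.log(fibIter(10)); // 55
console.log(fibIter(50)); // 12586269025
```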
So, what would you choose: a quick 2-line hack, or a solid 100-line solution that stands the test of time? The answer shapes not just your code, but your career too.
Written by Durgesh Kumar