Why Writing 100 Lines of Code Might Be More Effective Than Just 2

Durgesh Kumar

“Writing 100 Lines of Code can be Better than just 2.” When I first heard this line, I thought someone had gone crazy again, but after understanding the idea behind it, I now find myself saying the same thing 😁

Before we dig in, let's agree on the parameters we will use to judge whether code is good or bad. The ones that come to my mind: how efficiently does the code run, and how much computation does it need to reach the final answer? With these in mind, let's compare two different solutions to the same problem and decide which one is better 🫀

Problem: Find the Nth Fibonacci Number

Fibonacci sequence:

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...

Where:

  • F(0) = 0

  • F(1) = 1

  • F(n) = F(n-1) + F(n-2)

  1. Naive Recursive Approach

function fib(n) {
  if (n <= 1) return n;
  return fib(n - 1) + fib(n - 2);
}

console.log(fib(10));  // Output: 55

The naive recursive approach has an exponential time complexity of O(2^n) (Big O notation), which is manageable for small n, like 10, but becomes extremely slow for larger n, like 50, taking seconds or even minutes to compute.

Let’s see exactly why an approach with an exponential time complexity of O(2ⁿ) becomes so slow for large values of n.

| n | Steps (2ⁿ) |
| --- | --- |
| 1 | 2 steps |
| 2 | 4 steps |
| 3 | 8 steps |
| 4 | 16 steps |
| 5 | 32 steps |
| 10 | 1,024 steps |
| 20 | 1,048,576 steps |
| 50 | 1,125,899,906,842,624 steps 😡 (more than a quadrillion!) |

When n becomes big, the steps explode (double every time).
That’s why O(2ⁿ) is very slow for large n.
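
You don't have to take the table's word for it. Here is a small sketch of my own (not part of the original solution) that adds a counter to the naive function and prints how many calls it makes:

// Instrumented copy of the naive fib that counts every call
let calls = 0;

function fibCounted(n) {
  calls++;
  if (n <= 1) return n;
  return fibCounted(n - 1) + fibCounted(n - 2);
}

for (const n of [10, 20, 30]) {
  calls = 0;
  fibCounted(n);
  console.log(`n = ${n} -> ${calls} calls`);
}
// n = 10 -> 177 calls
// n = 20 -> 21891 calls
// n = 30 -> 2692537 calls

The exact count grows a little slower than 2ⁿ (the real base is about 1.6), but it still explodes exponentially as n increases.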

Compare O(2ⁿ) with O(n) (which is much better):

| n | O(2ⁿ) Steps | O(n) Steps |
| --- | --- | --- |
| 5 | 32 steps | 5 steps |
| 10 | 1,024 steps | 10 steps |
| 50 | 1 quadrillion+ steps | 50 steps |

What's happening internally?

  • It repeatedly recalculates the same values, such as fib(5) calculating fib(3) multiple times.

Let’s visualize for fib(5):

fib(5)
 ├─ fib(4)
 │   ├─ fib(3)
 │   │   ├─ fib(2)
 │   │   └─ fib(1)
 │   └─ fib(2)
 └─ fib(3)
     ├─ fib(2)
     └─ fib(1)

See? fib(2) and fib(3) are calculated over and over.
That's wasted time and wasted CPU effort.
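
Here is another small sketch (my own, not from the article's solution) that tallies how many times each argument gets computed when we call the naive version with n = 5:

// Tally how often the naive recursion computes each value
const timesComputed = {};

function fibTraced(n) {
  timesComputed[n] = (timesComputed[n] || 0) + 1;
  if (n <= 1) return n;
  return fibTraced(n - 1) + fibTraced(n - 2);
}

fibTraced(5);
console.log(timesComputed);
// { '0': 3, '1': 5, '2': 3, '3': 2, '4': 1, '5': 1 }

fib(3) is computed twice and fib(2) three times, exactly the duplicates shown in the tree (the leaf calls fib(1) and fib(0), which the tree doesn't expand, are repeated even more).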

  2. Optimized Using Dynamic Programming (Memoization, O(n))

Code:

function fibMemo(n, memo = {}) {
  if (n <= 1) return n;
  if (memo[n]) return memo[n];  // ✅ Return saved result

  memo[n] = fibMemo(n - 1, memo) + fibMemo(n - 2, memo);
  return memo[n];
}
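
Just like the first example, we can sanity-check it (F(50) = 12,586,269,025), and the call returns instantly:

console.log(fibMemo(50));  // Output: 12586269025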

Why is this better?

  • Time complexity: O(n) → Linear time, calculating each Fibonacci number only once, allowing it to run instantly for n = 50.

What's happening internally?

  • It uses a memo object to save results, so when it calculates fib(3), it saves it, and next time it needs fib(3), it simply fetches it from memory without recalculating.

Visualization for fibMemo(5):

fibMemo(5)
 ├─ fibMemo(4)
 │   ├─ fibMemo(3)
 │   │   ├─ fibMemo(2)
 │   │   └─ fibMemo(1) ✔️ cached next time
 │   └─ fibMemo(2) ✔️ cached
 └─ fibMemo(3) ✔️ cached

All repeating calls are replaced with cached (✔️) results.
That's why it becomes super fast!
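
If you want to verify the linear behaviour yourself, here is a quick sketch (my own instrumentation, not part of the article's solution) that counts how many fresh values the memoized version actually has to compute:

// Count only the brand-new computations; cache hits do no new work
let fresh = 0;

function fibMemoCounted(n, memo = {}) {
  if (n <= 1) return n;
  if (memo[n]) return memo[n];  // cached: no new work
  fresh++;                      // computing a value for the first time
  memo[n] = fibMemoCounted(n - 1, memo) + fibMemoCounted(n - 2, memo);
  return memo[n];
}

console.log(fibMemoCounted(30), fresh);  // 832040 29

Only 29 fresh computations for n = 30, compared with roughly 2.7 million calls in the naive version above.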

Side-by-Side Comparison:

| Aspect | Naive Recursion | Memoization (DP) |
| --- | --- | --- |
| Time Complexity | O(2ⁿ) → Very slow for big n | O(n) → Very fast, even for big n |
| Space Complexity | O(n) (due to recursion stack) | O(n) (due to memoization) |
| Recalculations | Yes (repeats work) | No (reuses cached results) |
| Code Length | Short (looks simple) | Slightly longer (but smarter) |

So, what would you choose: a quick 2-line hack or a solid 100-line solution that stands the test of time? The answer shapes not just your code, but your career too.
