Time Complexity Over Line Count: Why 100 Lines May Run Faster Than 2

Imagine you are searching for a solution on Stack Overflow and come across two answers: one is 2 lines long, the other is 10. There is a good chance you will pick the one with 2 lines. But wait, that's not always the right choice.

Understanding Time Complexity

Time complexity describes how the running time of your code grows as the input size increases.

Say your code executes fine for 10 inputs but is slow for 1000 inputs; the time complexity will help you understand why.

Time Complexity can be categorised based on 3 scenarios:

  1. Best Case Scenario

  2. Average Case Scenario

  3. Worst Case Scenario

The best case, average case, and worst case are conventionally denoted by Ω (Omega), Θ (Theta), and O (Big O), respectively.
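
To make the notation concrete, here is a minimal linear-search sketch (the function name linearSearch and the test values are purely illustrative): if the target happens to be the first element, the loop exits after a single comparison, which is the best case Ω(1); if the target is missing, every element is checked, which is the worst case O(n).

#include <iostream>
using namespace std;

// Returns the index of target in arr, or -1 if it is not present.
int linearSearch(int arr[], int n, int target) {
    for (int i = 0; i < n; i++)
        if (arr[i] == target)
            return i; // best case: found at i = 0 after one comparison -> Ω(1)
    return -1;        // worst case: all n elements were checked -> O(n)
}

int main() {
    int arr[] = {2, 7, 11, 15};
    cout << linearSearch(arr, 4, 2) << endl;  // best case: target is the first element
    cout << linearSearch(arr, 4, 42) << endl; // worst case: target is absent
    return 0;
}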

Time complexity can be Constant O(1), Logarithmic O(log n), Linear O(n), Linear Logarithmic O(n log n), Quadratic O(n²), Exponential O(2ⁿ) or Factorial O(n!), where constant means the least time and factorial the most.
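
To give a rough feel for these categories, the sketch below (not tied to any particular problem; the variable names are illustrative) performs constant, linear, and quadratic amounts of work for the same value of n:

#include <iostream>
using namespace std;

int main() {
    int n = 1000;

    // Constant, O(1): one operation regardless of n
    int first = n * 2;

    // Linear, O(n): n iterations
    long long sum = 0;
    for (int i = 0; i < n; i++)
        sum += i;

    // Quadratic, O(n^2): n * n iterations
    long long pairs = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            pairs++;

    cout << first << " " << sum << " " << pairs << endl;
    return 0;
}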

A Simple Analogy

Imagine you are using Google Maps to navigate. Sometimes Maps suggests a longer route instead of the shortcut, and yet you save time, because the shortcut has potholes, traffic lights, and traffic jams that increase travel time.

Similarly, in coding, your 2 lines of code may force the machine to iterate many more times, whereas the 10 lines of code can get the same result with far less work.

Code Example

Let’s consider a very common problem, “The Two Sum Problem”

Problem: Given an array of integers and a target value, find if there are two numbers that add up to the target.

The Brute Force Approach

#include <iostream>
using namespace std;

int main() {
    int arr[] = {2, 7, 11, 15};
    int target = 9, n = 4;

    // Check every possible pair: two nested loops -> O(n^2) comparisons
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (arr[i] + arr[j] == target)
                cout << i << " " << j << endl;

    return 0;
}

This code looks short and easy to implement, but it is slow to execute because it checks every possible pair of numbers in the array. The two nested loops give it O(n²) time complexity, making it inefficient for larger arrays.

Optimised Code

#include <iostream>
#include <unordered_map>
using namespace std;

int main() {
    int arr[] = {2, 7, 11, 15};
    int target = 9, n = 4;

    unordered_map<int, int> seen; // maps each number to its index

    for (int i = 0; i < n; i++) {
        int complement = target - arr[i];

        // If the complement was seen earlier, we have found the pair
        if (seen.find(complement) != seen.end()) {
            cout << seen[complement] << " " << i << endl;
            break;
        }

        seen[arr[i]] = i; // remember this number for later iterations
    }
    }

    return 0;
}

This solution uses a hash map to remember the numbers we've already seen. For each new number, it checks whether the complement (target minus the current number) is already in the map. We loop through the array only once, so the time complexity drops to O(n) on average, at the cost of O(n) extra space for the map, which makes it much faster than the brute-force approach on large inputs.
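
If you want to see the gap yourself, a rough timing sketch like the one below can help; the array size, the helper names twoSumBrute and twoSumHash, and the use of std::chrono are my own illustrative choices, not part of the original solutions. On an input with no valid pair, the brute-force version takes noticeably longer as n grows:

#include <iostream>
#include <vector>
#include <unordered_map>
#include <chrono>
using namespace std;
using namespace std::chrono;

// Brute force: check every pair, O(n^2)
bool twoSumBrute(const vector<int>& arr, int target) {
    for (size_t i = 0; i < arr.size(); i++)
        for (size_t j = i + 1; j < arr.size(); j++)
            if (arr[i] + arr[j] == target)
                return true;
    return false;
}

// Hash map: single pass, O(n) on average
bool twoSumHash(const vector<int>& arr, int target) {
    unordered_map<int, int> seen;
    for (size_t i = 0; i < arr.size(); i++) {
        int complement = target - arr[i];
        if (seen.find(complement) != seen.end())
            return true;
        seen[arr[i]] = (int)i;
    }
    return false;
}

int main() {
    // Worst case for both: no two elements ever add up to the target
    vector<int> arr(20000, 1);
    int target = 3;

    auto t1 = high_resolution_clock::now();
    twoSumBrute(arr, target);
    auto t2 = high_resolution_clock::now();
    twoSumHash(arr, target);
    auto t3 = high_resolution_clock::now();

    cout << "Brute force: " << duration_cast<milliseconds>(t2 - t1).count() << " ms\n";
    cout << "Hash map:    " << duration_cast<milliseconds>(t3 - t2).count() << " ms\n";
    return 0;
}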

Conclusion

When evaluating code, it's crucial to look beyond the number of lines and consider the time complexity. A shorter code snippet might seem appealing, but it can be less efficient, especially with larger inputs. Understanding and applying the principles of time complexity leads to more efficient and scalable solutions.
