Understanding Time Complexity

dheeraj↗️
7 min read

Suppose there is a running race with 6 runners. To find the time each individual takes to reach the finish line, we use a stopwatch to record their times, and the person who crosses the finish line in the least amount of time is declared the winner.

Similarly, if two individuals write programs using different methods to accomplish the same task, how do we decide which of them wrote the better program?

For this we have time complexity, which helps us measure how fast or slow a program performs a task. It gives an estimate of the execution time based on the input size.

While analyzing a program, we consider three cases:

Best case: denoted by the symbol —→ Ω (Omega)

Average case: denoted by the symbol —→ θ (Theta)

Worst case: denoted by the symbol —→ O (Big O)

Real-world example of best, average and worst case:

Best case: finding a person's name, e.g. "John", on the first page of a telephone directory.

Average case: finding the name somewhere in the middle.

Worst case: finding the name in the last few pages, or the name not being present at all.

Another simple example of best, average and worst case (for a sorting algorithm):

Best case: an already sorted array: [1,2,3,4,5].

Average case: some values need to be interchanged: [1,3,2,5,4].

Worst case: values in reverse order: [5,4,3,2,1].

The Big O notation:

By default, we use Big O notation to describe the worst-case scenario of an algorithm. It gives an upper bound on the time the program could need: no matter how large or small the input is, the running time won't exceed this bound.

Time Complexity Types: O(1) to O(2ⁿ).

1 - Constant time complexity —> O(1).

The algorithm’s execution time remains the same, regardless of the input size.

In the example below, we assume that each operation takes 1 second:


#include <iostream>
using namespace std;

int main() {
    int x = 5, y = 10, z;   // 3 seconds (variable declarations)
    z = x + y;              // 2 seconds (1 sec for addition, 1 sec for assignment)
    cout << z;              // 1 second (printing output)
    return 0;               // 1 second (return statement)
}
  • Total Time Taken: 7 seconds.

  • Reason:
    Each operation (declaration, addition, assignment, output, return) takes a fixed amount of time, regardless of the values or the "size" of the input.

Time Complexity of the above program:

Time Complexity: O(1) (Constant Time)

  • Why?
    The number of steps and the total execution time do not increase with the size of the input. Whether the numbers are small or large, the program always performs the same set of operations.

2 - Linear time complexity —> O(n).

The running time increases proportionally with the input size.

Example:


#include <iostream>
using namespace std;

int main() {
    int x = 1, y = 2, z, n;  // O(1): Variable declarations
    cin >> n;                // O(1): Input for n
    z = x + y;               // O(1): Addition and assignment

    int a = 0;               // O(1): Initialize a

    for(int i = 0; i < n; i++) { // O(n): Loop runs n times
        a++;                 // O(1) per iteration, O(n) in total
    }

    return 0;                // O(1): Return statement
}

// O(1) + O(1) + O(1) + O(1) + O(n) + O(n) + O(1)
// When adding up time complexities, we focus on the dominant term as input size grows.
// Here, the loop's O(n) dominates all the constant-time operations.

Time Complexity of the above program:

Time Complexity: O(n) (Linear Time)

Why?

Even though there are several constant-time O(1) operations, the presence of the O(n) loop means the program's running time increases linearly with the input size n. Thus, the time complexity is O(n).


3 - Logarithmic time complexity —> O(log n).

The algorithm’s running time grows slowly as the input size increases, often by dividing the problem in each step (e.g., binary search).

Example:



#include <iostream>
using namespace std;

int main() {
    int n;                        // O(1): Variable declaration
    cin >> n;                     // O(1): Input for n
    int a, b = 1, c = 2;          // O(1): Variable declarations
    a = b + c;                    // O(1): Addition and assignment

    for(int i = 1; i < n; i = i * 2) { // O(log n): Loop variable doubles each time
        b++;                      // O(log n) in total
    }

    return 0;                     // O(1): Return statement
}

Time Complexity of the above program:

Time Complexity: O(log n) (Logarithmic time)

Why?

The loop variable i doubles each iteration, so the number of iterations is proportional to the logarithm of n (base 2).

The dominating factor among all is O(log n).


4 - Linear Logarithmic time complexity —> O(n log n).

It is a combination of linear and logarithmic time.

#include <iostream>
using namespace std;

int main() {
    int n;
    cin >> n;                              // O(1): Input for n
    int a = 0;                             // O(1)

    for(int i = 0; i < n; i++) {           // O(n)
        for(int j = 1; j < n; j = j * 3) { // O(log n): j triples each time
            a++;                           // O(n) * O(log n) in total
        }
    }                                      // note: j must start at 1, not 0, or it would never grow

    return 0;                              // O(1)
}

Time Complexity of the above program:

Time Complexity: O(n log n) (Linear Logarithmic time)

Why?

When there is a loop inside a loop (nested loops), we multiply their complexities.

The nested loops multiply: O(n) × O(log n) = O(n log n).

The dominating factor is O(n log n).


5 - Quadratic time complexity —> O(n²).

The running time is proportional to the square of the input size, often seen in algorithms with nested loops over the data.

Example:


#include <iostream>
using namespace std;

int main() {
    int n;
    cin >> n;                        // O(1): Input for n
    int a = 0;                       // O(1)

    for(int i = 0; i < n; i++) {     // O(n)
        for(int j = 0; j < n; j++) { // O(n) * O(n)
            a++;                     // runs n² times --> O(n²)
        }
    }

    return 0;                        // O(1): Return statement
}

Time Complexity of the above program:

Time Complexity: O(n²) (Quadratic time)

Why?

  • The outer loop runs n times.

  • The inner loop runs n times for each outer loop iteration.

  • So, a++ runs n × n = n² times in total.

  • The return statement is constant time (O(1)).

  • O(n) × O(n) = O(n²)

  • The dominating factor among all is n², so the time complexity is O(n²).


6 - Cubic time complexity —> O(n³).

The running time is proportional to the cube of the input size, usually found in algorithms with three nested loops.

Example:


#include <iostream>
using namespace std;

int main() {
    int n;
    cin >> n;                             // O(1): Input for n
    int a = 0;                            // O(1)

    for(int i = 0; i < n; i++) {          // O(n)
        for(int j = 0; j < n; j++) {      // O(n) * O(n) ---> O(n²)
            for(int k = 0; k < n; k++) {  // O(n) * O(n) * O(n) ---> O(n³)
                a++;                      // runs n³ times
            }
        }
    }

    return 0;                             // O(1)
}

Time Complexity of the above program:

Time Complexity: O(n³) (Cubic time)

Why?

  • When we find a loop inside a loop (nested), we multiply.

  • O(n) × O(n) × O(n) = O(n³)

  • The dominating factor among all is n³, so the time complexity is O(n³).


7 - Exponential time complexity —> O(2ⁿ).

The running time roughly doubles with each additional input element, which makes such algorithms unsuitable for large inputs.

Example:

The Fibonacci series is a great example of exponential time complexity. The code snippet below calculates and returns the nth Fibonacci number.

long long int fib(int n) {
    if (n <= 1)                     // O(1)
        return n;                   // O(1)
    return fib(n - 1) + fib(n - 2); // O(2ⁿ): two recursive calls per call
}

Time Complexity of the above program:

Time Complexity: O(2ⁿ) (Exponential time)

Each call spawns two more calls, so the dominating factor among all is 2ⁿ, and the time complexity is O(2ⁿ).


Order of time complexities in increasing order, from fastest to slowest:

O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(n³) < O(2ⁿ).

O(1) is the most efficient and fastest of all, while O(2ⁿ) is the least efficient and takes the most time to execute.

Conclusion:

  • Understanding time complexity helps in writing efficient code and predicting how an algorithm performs as the input size increases.

  • Time complexity helps in selecting an efficient algorithm for a given problem or input size.
