Time Complexity


Time complexity is a fundamental concept in computer science. It describes how an algorithm's running time grows as the amount of data it processes grows. That growth rate is what decides whether your code stays fast or grinds to a crawl on large inputs.

Why Time Complexity Matters

Efficient algorithms matter because slow ones delay tasks, frustrate users, and can bring whole systems to a halt once the amount of data gets large.

Notations

To talk about time complexity, we use three main notations:

  • Big O (O): An upper bound on running time. The algorithm never grows faster than this rate, which is why it is usually quoted for the worst case.

  • Big Omega (Ω): A lower bound on running time. The algorithm always takes at least this long, which is why it is associated with the best case.

  • Big Theta (Θ): A tight bound. When the upper and lower bounds grow at the same rate, we say the algorithm's running time is Θ of that rate.
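
For readers who want the precise versions, these are the standard textbook definitions (not spelled out in the original list above):

$$
\begin{aligned}
f(n) = O(g(n)) &\iff \exists\, c > 0,\ n_0 \ \text{such that}\ f(n) \le c \cdot g(n) \ \text{for all}\ n \ge n_0,\\
f(n) = \Omega(g(n)) &\iff \exists\, c > 0,\ n_0 \ \text{such that}\ f(n) \ge c \cdot g(n) \ \text{for all}\ n \ge n_0,\\
f(n) = \Theta(g(n)) &\iff f(n) = O(g(n)) \ \text{and}\ f(n) = \Omega(g(n)).
\end{aligned}
$$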

Time Complexity Cases

Algorithms can behave differently depending on the input. We often look at three cases (the short sketch after this list shows all three at once):

  • Best Case (Ω): When everything goes in our favor. Example: finding an item at the very start of a list (only one check, Ω(1)).

  • Average Case (Θ): The typical scenario. Example: finding an item somewhere in the middle of a list (about n/2 checks on average, which is still Θ(n)).

  • Worst Case (O): When things go badly. Example: the item is at the end of the list or not there at all (checking every element, O(n)).
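
Here's a minimal Python sketch that makes the three cases concrete; the function name linear_search and the sample data are just illustrative. It returns both the index and how many checks it made.

```python
# A minimal sketch: linear search that also reports how many
# checks it made, to show best, average, and worst cases.
def linear_search(items, target):
    checks = 0
    for i, item in enumerate(items):
        checks += 1
        if item == target:
            return i, checks
    return -1, checks

data = [4, 8, 15, 16, 23, 42]
print(linear_search(data, 4))    # best case: (0, 1) -- one check
print(linear_search(data, 16))   # typical case: (3, 4) -- about half
print(linear_search(data, 99))   # worst case: (-1, 6) -- every element
```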

Common Time Complexity Examples

O(1) – Constant Time

  • In simple words: The time it takes does not change, no matter how big the input is.

  • Example: Getting one book from a shelf when you know its exact spot.

  • Why: You go straight to that spot every time.
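
A tiny Python sketch of the same idea (the shelf contents and the get_book name are made up for illustration): indexing into a list takes the same single step no matter how long the list is.

```python
# A minimal sketch: list indexing is O(1) because the position
# is computed directly, not searched for.
books = ["Dune", "Hobbit", "Ulysses", "Emma"]

def get_book(shelf, position):
    return shelf[position]  # one step, regardless of shelf size

print(get_book(books, 2))  # "Ulysses"
```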

O(log n) – Logarithmic Time

  • In simple words: You cut the work in half each time, so it grows very slowly.

  • Example: Guessing a number between 1 and 100 by always picking the middle (start at 50, then 25 or 75, etc.).

  • Why: Each guess rules out half the numbers.
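
Here's a minimal binary-search sketch in Python; the sorted range 1 to 100 mirrors the guessing game, and binary_search is just an illustrative name.

```python
# A minimal sketch of binary search: each comparison rules out
# half of the remaining range, so at most ~log2(n) steps are needed.
def binary_search(sorted_items, target):
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1   # target is in the upper half
        else:
            high = mid - 1  # target is in the lower half
    return -1  # not found

print(binary_search(list(range(1, 101)), 73))  # 72 (index of 73)
```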

O(n) – Linear Time

  • In simple words: The time grows directly with the number of items.

  • Example: Looking for your name on a list by reading each name one by one.

  • Why: You check every name until you find it.
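
A minimal sketch of that scan in Python (the guest list and find_name are made up): in the worst case every name is checked once, so the work grows in step with n.

```python
# A minimal sketch: scanning a list one entry at a time --
# the worst case touches all n names.
def find_name(names, target):
    for position, name in enumerate(names):
        if name == target:
            return position
    return -1  # scanned the whole list without a match

guests = ["Ava", "Ben", "Chloe", "Dev"]
print(find_name(guests, "Chloe"))  # 2
print(find_name(guests, "Zoe"))    # -1 after checking all four names
```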

O(n log n) – Log-Linear Time

  • In simple words: You split the work in half many times and do a little work for each split.

  • Example: Sorting cards by splitting your deck into smaller piles and then merging them.

  • Why: You split (log n times) and then sort/merge (n steps each time).
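
A compact merge-sort sketch in Python makes the split-then-merge pattern visible (a standard textbook formulation, not tied to any particular library):

```python
# A minimal sketch of merge sort: the list is halved log2(n) times,
# and each level of merging touches all n items once -> O(n log n).
def merge_sort(items):
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge the two sorted halves (n steps per level).
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([7, 2, 9, 4, 1]))  # [1, 2, 4, 7, 9]
```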

O(n²) – Quadratic Time

  • In simple words: The time grows by the square of the number of items.

  • Example: Checking every pair of students in class to see if they share the same birthday.

  • Why: For each student, you compare with every other student.
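
A minimal sketch of that pairwise comparison in Python (the birthday data and the shared_birthday name are made up for illustration):

```python
# A minimal sketch: every student is compared with every later
# student, about n*(n-1)/2 comparisons -> O(n^2).
def shared_birthday(birthdays):
    for i in range(len(birthdays)):
        for j in range(i + 1, len(birthdays)):
            if birthdays[i] == birthdays[j]:
                return i, j  # first matching pair
    return None

print(shared_birthday(["Jan 5", "Mar 9", "Jan 5"]))  # (0, 2)
```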

O(2ⁿ) – Exponential Time

  • In simple words: The time doubles with each extra item.

  • Example: Trying all possible ways to turn lights on/off in a row of switches. Each switch added doubles the combinations.

  • Why: Every new switch doubles the total possibilities.
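
A small Python sketch that enumerates every on/off pattern, using itertools.product from the standard library (the function name is illustrative):

```python
# A minimal sketch: listing every on/off pattern for n switches.
# Adding one switch doubles the count, so there are 2**n patterns.
from itertools import product

def all_switch_patterns(n):
    return list(product(["off", "on"], repeat=n))

patterns = all_switch_patterns(3)
print(len(patterns))              # 8, i.e. 2**3
print(patterns[0], patterns[-1])  # ('off','off','off') ('on','on','on')
```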

Conclusion

Time complexity is a powerful tool for understanding and improving the performance of programs. By recognizing how algorithms behave as data grows, you can write faster, more efficient code that scales well. Remember these basic cases, and you'll be better equipped to tackle any problem, no matter the size of the data.
