Big O Notation

Jamisha Bade

I remember reading a programming book that said, “Write Code That Is Clean And Reusable.” I always wondered what “Clean And Reusable Code” meant. As a beginner in programming, it was beyond my level to tell the difference between brute-force code and efficient code. After learning DSA, I finally understood what clean and reusable code means.

An algorithm is a step-by-step procedure for solving a problem. There are almost always multiple ways to solve a problem, and some of them are more efficient than others. The same is true of the algorithms we write in our code.

So how do I know if my code is efficient or not?

  • Time is crucial. We want our code to run and solve problems as fast as possible.

  • Space is key. We do not have an endless amount of memory in our systems, so we need to be mindful of our code’s memory usage.

This is why we analyze space and time complexity: together, they tell us how efficient our code is.

Types of Time Complexity And Their Notations

  • Worst Case Time Complexity: Big O

  • Best Case Time Complexity: Big Omega

  • Average Case Time Complexity: Big Theta

We use worst-case time complexity because it ensures consistency, safety, and performance predictability, regardless of the input. It’s the safest way to measure how an algorithm behaves under pressure.

So What is Big O?

Big O notation is the language we use to talk about how long an algorithm takes to run and how much memory it uses. We consider the upper bound on the algorithm’s running time (which just means the worst case). Time here isn’t measured in the units we’re used to, such as seconds or milliseconds. Instead, we think of time as the number of operations or steps it takes to solve a problem of size n.

To better explain these time complexities, I’ll use a real-life example involving a phone book that I learned from the CS50 lectures.

Constant Time - O(1)

O(1), or constant time, means that an algorithm runs in the same amount of time regardless of the size of the input.

Imagine a phone book where each section is labeled with the first letter of last names: A, B, C, and so on. If you’re looking for someone whose last name starts with “S,” you can flip straight to the “S” tab. This takes the same amount of time whether there are 100 names or 10,000, because you jump directly to the correct section without scanning through everything. That’s what makes it constant time.
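
Here’s a minimal sketch in Python of what a constant-time lookup looks like. The names and numbers in `phone_book` are made up for illustration:

```python
# Hypothetical phone book: the entries are invented for this example.
phone_book = {
    "Smith": "555-0137",
    "Brown": "555-0102",
    "Zhang": "555-0199",
}

# Jumping straight to an entry by key takes roughly the same time
# whether the dict holds 100 names or 10,000 - that's O(1).
number = phone_book.get("Smith")
print(number)  # 555-0137
```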

Linear Time - O(n)

Time increases directly with the size of the input.

Think of searching for a name in an unsorted phone book. You start at the beginning and check each entry one by one until you find it. In the worst case, you have to check all n names.
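
A small Python sketch of that linear search (again, the entries are invented for the example):

```python
# Unsorted phone book as a list of (name, number) pairs - made-up data.
entries = [("Brown", "555-0102"), ("Zhang", "555-0199"), ("Smith", "555-0137")]

def find_number(entries, target_name):
    # Check each entry one by one; in the worst case we look at all n entries.
    for name, number in entries:
        if name == target_name:
            return number
    return None

print(find_number(entries, "Smith"))  # 555-0137
```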

Logarithmic Time - O(log n)

The time increases slowly as the input size increases, because you cut the problem in half at each step. A classic example is looking up a name in a sorted phone book using binary search. You open the book in the middle:

  • If the name comes before, search the first half.

  • If it comes after, search the second half.

  • Repeat these steps until you find the name.
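
Here’s a rough Python sketch of binary search over a sorted list of last names (the names are placeholders):

```python
# Phone book sorted by last name - placeholder data for illustration.
names = ["Adams", "Brown", "Clark", "Lopez", "Nguyen", "Smith", "Young"]

def binary_search(sorted_names, target):
    low, high = 0, len(sorted_names) - 1
    while low <= high:
        mid = (low + high) // 2          # open the book in the middle
        if sorted_names[mid] == target:
            return mid                   # found it
        elif target < sorted_names[mid]:
            high = mid - 1               # name comes before: keep the first half
        else:
            low = mid + 1                # name comes after: keep the second half
    return -1                            # not in the book

print(binary_search(names, "Smith"))  # 5
```

Each pass throws away half of the remaining names, so even a book with a million entries takes only about 20 steps.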

Linearithmic Time - O(n log n)

  • Time grows faster than linear, but slower than quadratic. It usually appears in efficient sorting algorithms.

  • Example: Merge Sort (and, on average, Quick Sort). Imagine sorting a shuffled deck of cards:

    • You split the deck in half again and again (about log n levels of splitting),

    • Then sort and merge the halves back together (about n work per level).

    • So the total time comes to n × log n.
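
A compact Python sketch of merge sort, just to show where the n and the log n come from:

```python
def merge_sort(cards):
    # Splitting in half repeatedly gives about log n levels of recursion.
    if len(cards) <= 1:
        return cards
    mid = len(cards) // 2
    left = merge_sort(cards[:mid])
    right = merge_sort(cards[mid:])
    return merge(left, right)         # merging does about n work per level

def merge(left, right):
    # Combine two already-sorted halves into one sorted list.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([7, 2, 9, 4, 1]))  # [1, 2, 4, 7, 9]
```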

Quadratic Time - O(n²)

Time increases with the square of the input size. Imagine you have a list of students and you want to see if any two have the same name. One way to do this is to compare each name with every other name: the first name is checked against all the others, the second name against the rest, and so on. With 10 names, that is on the order of 10 × 10 comparisons, just like two nested loops in code. As the list gets longer, the number of comparisons grows much faster than the list itself, which is why this process takes quadratic time, or O(n²).
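
Here’s what that pairwise comparison looks like as two nested loops in Python (the names are made up):

```python
def has_duplicate(names):
    # Two nested loops: each name is compared with every name after it,
    # so the number of comparisons grows roughly with n * n.
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if names[i] == names[j]:
                return True
    return False

print(has_duplicate(["Ana", "Bob", "Cara", "Ana"]))  # True
```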

Dropping Constant and non-dominant terms

When analyzing the time complexity of algorithms, Big-O notation helps us focus on how the runtime scales as the input size grows very large. To simplify this, we drop constant factors and less significant terms because they become irrelevant for large inputs.

Why drop constants?

Constants don't change how fast the runtime grows as input size increases. For example:

  • An algorithm that takes 3n steps is considered O(n), not O(3n).

  • The constant factor 3 only affects the speed by a fixed multiple, but it doesn’t affect the overall growth rate.
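
As a quick illustration, here’s a hypothetical loop that does three small operations per element. It takes about 3n steps, but it is still O(n):

```python
def three_ops_per_item(numbers):
    # Roughly 3 operations per element, or about 3n steps total - still O(n).
    total, count, largest = 0, 0, float("-inf")
    for x in numbers:              # runs n times
        total += x                 # op 1
        count += 1                 # op 2
        largest = max(largest, x)  # op 3
    return total, count, largest

print(three_ops_per_item([4, 1, 7]))  # (12, 3, 7)
```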

Why drop non-dominant terms?

When an algorithm has multiple terms, the one that grows fastest dominates for large inputs. For example:

  • Consider: f(n) = n² + 100n + 50

  • As n gets very large, n² grows much faster than 100n or 50.

  • So, in Big-O terms, f(n) is simplified to O(n²) because the lower-order terms don’t matter for large n.
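
A quick numeric check of how the terms of f(n) = n² + 100n + 50 compare as n grows:

```python
# Compare the individual terms of f(n) = n**2 + 100*n + 50 at increasing n.
for n in [10, 1_000, 1_000_000]:
    quadratic = n ** 2
    linear = 100 * n
    constant = 50
    print(n, quadratic, linear, constant)

# At n = 1,000,000 the n**2 term (10**12) dwarfs 100n (10**8) and 50,
# which is why f(n) simplifies to O(n**2).
```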

Conclusions

While writing small programs, it might seem unnecessary to prioritize efficiency. However, in large-scale enterprises handling millions of data points, efficient algorithms and data structures become the backbone of success and sustainability. Optimizing performance at scale ensures faster processing, lower costs, and better user experiences.

A 2021 study from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), reported by Rachel Gordon, indicates the extraordinary impact of algorithm improvements on computing performance. By analyzing over 1,100 research papers across many algorithm families, MIT scientists found that algorithmic advancements have often outpaced hardware improvements like those predicted by Moore’s Law, especially for large-scale data problems. These gains are not gradual but occur in significant leaps, making algorithms a crucial driver of performance in today’s data-intensive world. As computing hardware reaches physical limits, MIT experts emphasize that smarter algorithms will become the main way to boost efficiency, reduce energy consumption, and support sustainable technological growth. This is particularly important in rapidly growing fields like artificial intelligence and big data, where handling humongous amounts of information quickly and efficiently is essential. This research reinforces why mastering algorithms is not only fundamental for computer science but also essential for innovating in an era where both speed and environmental impact matter.
