Understanding How Big O Notation is Calculated: A Detailed Guide

SAI GOUTHAM
5 min read
💡
DSA-1.2

Welcome to the second post of our series on data structures and algorithms. In the previous post, we introduced the concept of Big O Notation and why it's crucial for writing efficient, scalable code. In this post, we’ll dive deeper into how Big O is calculated and help you develop an intuition for analyzing the performance of your algorithms.

Recap: What is Big O Notation?

Big O Notation is used to describe the time complexity (how long it takes for an algorithm to run) and space complexity (how much memory it requires) as the size of the input increases. Big O notation focuses on the worst-case scenario, giving us an upper bound on the performance of an algorithm.

The goal is to provide a high-level understanding of how algorithms behave as input sizes grow.

Steps to Calculate Big O

1. Identify the Input

The first step is to determine what the input to the algorithm is. The input size is usually denoted by n. For example, if your algorithm processes an array, the size of that array is the input size, n.

Example:

def print_items(arr):
    for item in arr:
        print(item)

In this case, the input size n is the length of the array arr.

2. Break the Code into Basic Operations

Every algorithm consists of basic operations like:

  • Arithmetic (addition, subtraction)

  • Assignments

  • Comparisons

  • Loops

  • Function calls

Simple operations such as arithmetic, assignments, and comparisons each take constant time, O(1): they execute in the same amount of time regardless of the input size. Loops and function calls, by contrast, repeat or delegate work, so they are usually where the input size enters the analysis.

Let’s break down the previous example:

def print_items(arr):
    for item in arr:  # Loop runs n times
        print(item)   # Printing is a constant time operation (O(1))

  • The for loop runs n times, where n is the length of arr.

  • Each iteration of the loop performs a constant-time operation, print(item).

So, the overall time complexity of this code is O(n), because the loop dominates the complexity.

3. Focus on Loops and Recursion

Most of the time, loops and recursive functions dictate the complexity of an algorithm. Let’s examine the following scenarios:

  • Single Loop: A single loop that iterates over the input size n will have O(n) time complexity.

Example:

def print_items(arr):
    for item in arr:
        print(item)  # O(1) operation, repeated n times -> O(n)

  • Nested Loops: When loops are nested, the time complexities multiply.

Example:

def print_pairs(arr):
    for i in arr:
        for j in arr:
            print(i, j)  # O(1) operation, repeated n^2 times

In this case, the outer loop runs n times, and for each iteration of the outer loop, the inner loop runs n times. Therefore, the overall complexity is O(n^2).

  • Recursion: For recursive functions, complexity depends on how many times the function is called and the size of the problem at each call. For example, a binary search algorithm, which divides the problem in half at each step, has a time complexity of O(log n).
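
As a rough sketch (the helper below is illustrative, not taken from the post's original examples), a recursive binary search halves the remaining range on every call, so it makes at most about log2(n) recursive calls:

def binary_search_recursive(arr, target, low, high):
    # Assumes arr is sorted; returns the index of target, or -1 if absent
    if low > high:
        return -1
    mid = (low + high) // 2
    if arr[mid] == target:
        return mid
    elif arr[mid] < target:
        # Discard the left half -> the problem size halves on each call
        return binary_search_recursive(arr, target, mid + 1, high)
    else:
        # Discard the right half
        return binary_search_recursive(arr, target, low, mid - 1)

Each call throws away half of the remaining elements, which is exactly the "divide the problem in half" behaviour that gives O(log n).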

4. Drop Constants and Non-Dominant Terms

Big O notation simplifies the complexity by dropping constant factors and focusing on the term that grows the fastest as n increases.

For example:

  • O(2n) becomes O(n), since the constant factor (2) is irrelevant for large inputs.

  • O(n + 10) becomes O(n), as the constant (10) is also negligible for large n.

  • O(n^2 + n) becomes O(n^2), because n^2 dominates as n grows.
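
To see why constants are dropped, consider a small hypothetical function (not from the examples above) that makes two separate passes over the array. It performs roughly 2n prints, yet we still describe it as O(n):

def print_twice(arr):
    for item in arr:  # first pass: n prints
        print(item)
    for item in arr:  # second pass: another n prints, 2n in total
        print(item)

The 2 is a constant factor, so O(2n) simplifies to O(n).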

Common Big O Complexities

Let’s go over some common Big O complexities and their characteristics:

  1. O(1) – Constant Time: The algorithm’s runtime does not change with the input size.

Example:

def get_first_item(arr):
    return arr[0]  # Always takes the same amount of time

  2. O(n) – Linear Time: The algorithm’s runtime grows linearly with the input size.

Example:

def print_items(arr):
    for item in arr:
        print(item)  # Loop runs n times

  3. O(log n) – Logarithmic Time: The algorithm reduces the problem size by a constant factor at each step.

Example: Binary Search
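
A minimal iterative sketch (assuming a sorted list; the code is illustrative, not from the original post) shows the halving behaviour that produces O(log n):

def binary_search(arr, target):
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1   # discard the left half
        else:
            high = mid - 1  # discard the right half
    return -1  # target not found

Each iteration halves the remaining search range, so roughly log2(n) iterations are enough.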

  4. O(n^2) – Quadratic Time: The algorithm’s runtime is proportional to the square of the input size. This often happens with nested loops.

Example:

def print_pairs(arr):
    for i in arr:
        for j in arr:
            print(i, j)

  5. O(2^n) – Exponential Time: The algorithm’s runtime roughly doubles with each additional input element. This is typically seen in recursive algorithms without optimization.

Example: Solving the Fibonacci sequence with recursion
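
A minimal sketch of the naive recursive approach (no memoization, shown for illustration only) makes two recursive calls per step, so the number of calls roughly doubles each time n grows by one:

def fib(n):
    # Naive recursion: each call spawns two more -> roughly O(2^n) calls
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)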

  6. O(n!) – Factorial Time: The algorithm’s runtime grows factorially with the input size, typically in brute-force algorithms that try every possible ordering.
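
For instance, a brute-force routine that materializes every possible ordering of its input, sketched here with Python's itertools (the function name is just illustrative), produces n! results, so its runtime grows factorially:

from itertools import permutations

def all_orderings(items):
    # There are n! orderings of n items, so this grows factorially
    return list(permutations(items))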

Time vs. Space Complexity

It’s important to note that Big O can be used to analyze both time complexity and space complexity.

  • Time complexity refers to how long an algorithm takes to run, based on the input size.

  • Space complexity refers to how much memory an algorithm uses.

For example, recursive algorithms may have higher space complexity due to the function call stack.
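
As an illustration (these helpers are hypothetical, not from the original post), summing a list recursively keeps one stack frame per element, giving O(n) extra space, while the iterative version only needs a single accumulator, O(1):

def sum_recursive(arr, i=0):
    # One stack frame per remaining element -> O(n) space on the call stack
    if i == len(arr):
        return 0
    return arr[i] + sum_recursive(arr, i + 1)

def sum_iterative(arr):
    # A single accumulator -> O(1) extra space
    total = 0
    for item in arr:
        total += item
    return total

Both run in O(n) time; the difference is only in the memory they use.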

Conclusion

Understanding how to calculate Big O is crucial for analyzing the efficiency of your algorithms. By breaking down your code into basic operations, analyzing loops and recursive functions, and focusing on the most dominant terms, you can accurately assess the time and space complexity of your solutions.


In the next post, we’ll explore some specific algorithm examples and analyze their time and space complexity in depth.

Stay tuned! Happy Coding!
