Big O Time Notation

Creating algorithms is similar to navigating through a vast library in search of the perfect book, aiming to minimize both time and effort. Just as you seek step-by-step instructions to solve a specific problem, developers craft algorithms for efficient solutions.

Algorithms act as navigation guides from point A to point B, and Big O is similar to evaluating the efficiency of each route in terms of time and distance. Developers strive to design algorithms with optimal Big O complexities, much like choosing the fastest route with minimal traffic and detours, to minimize computation time and maximize performance.

But what exactly is Big O?

Well, join me as we delve into this topic and find out more.

What is Big O?

Let's just say that this nifty concept, Big O, also known as the 'Upper Bound', gives us a peek into how an algorithm's speed and memory usage change as we throw more data its way. It assesses the time or memory requirements based on the input size and serves as a generalized estimate representing the maximum growth rate, or worst-case scenario, for the algorithm's time complexity.

Essentially, it's like a roadmap showing us the potential ups and downs of an algorithm's performance as the input size grows. However, it's important to note that Big O isn't an exact science; think of it more as a well-informed guess rather than a precise measurement.

The Most Common Big O's

Before going any further, let's familiarize ourselves with some of the most common Big O complexities.

  • Linear Time - O(n): The algorithm's runtime grows in proportion to the input size.

  • Constant Time - O(1): The algorithm's runtime stays the same regardless of the input size.

  • Quadratic Time - O(n^2): The algorithm's runtime grows in proportion to the square of the input size, so each additional element adds noticeably more work than the one before it.

  • Logarithmic Time - O(log n): As the size of the input grows, the time taken for the algorithm to execute increases slowly. This occurs because the algorithm divides the problem into smaller parts, often halving the remaining work at each step. Binary search is an example of an algorithm with logarithmic time complexity.

  • Linearithmic Time - O(n log n): The algorithm's runtime grows in proportion to n multiplied by the logarithm of n. Linearithmic time complexity is common in divide-and-conquer algorithms, which split their input into smaller subsets, process each subset individually, and then combine the results.

With a grasp of what Big O is all about, how about we illustrate their practical application by exploring examples that demonstrate how each one operates in various algorithmic scenarios?

O(n) - Linear Time

Why opt for printing elements in a list to showcase linear time?

Because the time taken to print each item depends on how long the list is. As the list grows, the number of iterations increases in proportion to the input size, resulting in a linear relationship between the input size and the time taken to print all the elements.

def print_all_elements(arr: list):
    # One print call per element, so the work grows linearly with len(arr)
    for element in arr:
        print(element)

Let's walk through the function print_all_elements to gain a better understanding.

The function works by taking a list as its input and going through each element of arr one by one using a for loop. On each iteration of the loop, the current element is printed.

The number of times the function prints is directly tied to how many elements are in the list. So, if there are five elements in the list, the function will perform 5 print statements. The same goes for having 50 elements. The function will perform 50 print statements in that case. In short, the more items in the list, the more print operations are executed.
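
Here's a minimal usage sketch (the example lists are just illustrative) showing how the number of print calls tracks the input size:

print_all_elements([1, 2, 3])          # 3 elements -> 3 print calls
print_all_elements(list(range(50)))    # 50 elements -> 50 print calls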

O(1) - Constant Time

Now let's take a look at constant time. Accessing an array element is a perfect way of demonstrating this. This is because when accessing an array element, it is directly retrieved from a specific memory location using its index. This process involves a simple arithmetic operation to calculate the memory address which is unaffected by the array's size. This means the time taken to access any element remains constant regardless of the size of the array.

def get_element(arr: list):
    # A single index lookup, no matter how long the list is
    return arr[0]

The function get_element accepts a list arr as its input. Within the function, it returns the element at index 0, which is the first element of the input list. Retrieving the first element is a constant time operation because it directly accesses the memory location of that element. The time taken does not depend on the length of the list.
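
As a small, hedged usage sketch, the call behaves the same way whether the list has three elements or a million:

print(get_element([7, 8, 9]))                 # prints 7, a single index lookup
print(get_element(list(range(1_000_000))))    # prints 0, still a single index lookup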

O(n^2) - Quadratic Time

To shed light on quadratic time, we are taking a look at printing all pairs in a list. This operation falls under quadratic time because it iterates over the list once and, for each element, iterates over the entire list again to form pairs. This results in a time complexity of O(n^2), where n represents the number of elements in the list.

def print_all_pairs(arr: list):
    # For each element, pair it with every element in the list (including itself)
    for i in arr:
        for j in arr:
            print(i, j)

The print_all_pairs function works like this.

It accepts a list as an argument and uses a nested for loop. For each element in the outer loop, it goes through all elements in the inner loop, printing the pair i,j each time.

The more elements in the list, the more times the function prints. If there are 6 elements in the list, it prints 36 times because it does 6 * 6 print statements.

It is somewhat similar to calculating the area of a square by multiplying the side length by itself: as the list gets bigger, the number of print statements grows with the square of its length.
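
For a concrete, hedged sketch of that growth, a small call looks like this:

print_all_pairs([1, 2, 3])   # 3 * 3 = 9 lines printed: 1 1, 1 2, 1 3, ..., 3 3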

O(log n) - Logarithmic Time

Binary search is a classic example of an algorithm with logarithmic time complexity. This is because it divides the search space in half with each comparison, reducing the number of elements to consider in each step. By continually halving the search range, binary search efficiently narrows down the possibilities until it identifies the target element.

What does this look like? Let's take a look.

def binary_search(arr: list, target):
    low = 0
    high = len(arr) - 1

    while low <= high:
        mid = (low + high) // 2

        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1

    return -1

This function implements the binary search algorithm.

It takes two parameters: a sorted list and a target value. Two variables, low and high, are initialized to the beginning and end of the list, respectively. It then enters a while loop that continues as long as low is less than or equal to high. Within the loop, it calculates the midpoint mid and compares the element at this midpoint to the target.

If they match, the index of the target is returned. If the target is less than the element at the midpoint, the search is restricted to the left half of the list by updating high to the index just before the midpoint. If the target is greater, the search is restricted to the right half by updating low to the index just after the midpoint. This process continues until the target is found, or until low and high cross each other, indicating that the target is not in the list.

The time complexity of the algorithm is O(log n), where n is the size of the input list, as each iteration reduces the search space by half.
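
To make the halving concrete, here's a hedged walkthrough of a single call; the sample list is purely illustrative:

# Searching for 11 in a sorted list of 7 elements
result = binary_search([1, 3, 5, 7, 9, 11, 13], 11)
# Step 1: low=0, high=6, mid=3 -> arr[3] is 7, and 7 < 11, so low becomes 4
# Step 2: low=4, high=6, mid=5 -> arr[5] is 11, a match, so 5 is returned
print(result)   # 5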

O(n log n) - Linearithmic Time

Finally, we'll explore quicksort to have a deeper understanding of linearithmic time.

Quicksort is an efficient sorting algorithm with an average time complexity of O(n log n) due to its divide-and-conquer approach. By choosing a pivot, partitioning the array around it, and recursively sorting each partition, quicksort avoids comparing every element against every other element.

def quicksort(arr: list):
    if len(arr) < 2:
        # Base case: a list with 0 or 1 elements is already sorted
        return arr
    else:
        pivot = arr[0]
        less = [i for i in arr[1:] if i <= pivot]
        greater = [i for i in arr[1:] if i > pivot]
        return quicksort(less) + [pivot] + quicksort(greater)

How does it work?

The function quicksort implements the quicksort algorithm, a sorting algorithm that operates by selecting a pivot element from the array and partitioning the other elements into two subarrays according to whether they are less than or equal to the pivot, or greater than it. It then recursively sorts the subarrays.

The function takes a single argument arr, which is a list of elements to be sorted. Because the algorithm uses recursion, you have to tell it when to stop recursing. This is where the base case comes in. The base case checks if the length of the input list arr is less than 2. If this condition is true, the list is either empty or contains a single element, which means it is already sorted, so it is returned as is.

If the base case is not met, the first element of the input list arr is selected as the pivot element. The remaining elements excluding the pivot are partitioned into two subarrays:

  1. less which contains elements less than or equal to the pivot

  2. greater which contains elements greater than the pivot

The function quicksort is recursively called on both subarrays until each subarray contains only one element or is empty. In other words, when the size of the subarray becomes 0 or 1, it is considered sorted by definition. The sorted less subarray, the pivot, and the greater subarray are concatenated together to form the final sorted list, which is returned.

In the worst case, if the pivot is consistently chosen as the smallest or largest element, that may lead to the array being partitioned in an unbalanced manner. This results in one subarray with nearly all elements and the other with few elements. In such cases, the time complexity is O(n^2). However, in the average case, quicksort divides the array into roughly equal halves during each partitioning step, leading to a time complexity of O(n log n). This time complexity makes it an efficient sorting algorithm for large datasets.
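
As a small usage sketch (the input values are arbitrary), calling the function on an unsorted list returns a new, sorted list:

print(quicksort([9, 2, 7, 1, 8, 4]))   # [1, 2, 4, 7, 8, 9]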

Important Concepts

When it comes to Big O, it's crucial to keep a few concepts in mind.

  • Big O focuses on how the algorithm's efficiency grows concerning input size.

  • Constants are dropped because Big O describes an algorithm's growth rate rather than its exact running time; as the input grows, the constant becomes irrelevant (see the sketch after this list).

  • In Big O, the worst-case scenario usually serves as the primary metric for measurement.
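
To illustrate why constants are dropped, here is a hedged sketch using a hypothetical function (not one of the examples above) that walks the same list twice. It performs roughly 2n operations, yet we still describe it as O(n):

def print_all_elements_twice(arr: list):
    # First pass: n print calls
    for element in arr:
        print(element)
    # Second pass: another n print calls
    for element in arr:
        print(element)
    # Roughly 2n operations in total; 2n grows at the same rate as n, so O(2n) simplifies to O(n)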

In addition, there are other ways of measuring the time complexity of an algorithm besides Big O. However, the upper bound is the easiest and most widely used.

Conclusion

In a nutshell, creating efficient and scalable algorithms relies heavily on mastering the world of Big O. By analyzing the growth rate or worst-case scenarios, Big O provides invaluable insights into an algorithm's performance. This understanding allows developers to design algorithms that not only perform well in typical situations but can also handle more complex scenarios. Plus, it's like having a wise old owl guiding you through the maze of picking the perfect data structures and algorithms.
