Mastering the Basics: Understanding the Rules of Big-O Notation

Munir Abdullahi
5 min read

Big-O Notation is a fundamental concept in computer science. With a working knowledge of Big-O Notation, we can reason about the performance and scalability of algorithms by describing how an algorithm's running time grows as the input size increases. In this article, we will discuss how to analyze Big-O Notation by learning the rules and principles that govern its usage.

Rules of Big-O Notation

Calculating an algorithm's complexity can be challenging. Let f(n) represent the algorithm's complexity and n the input size. With the following rules, we should be able to compute and simplify f(n).

  • Coefficient Rule: This rule states that coefficients/constant factors multiplying the input size n can be ignored. This is because Big-O Notation focuses on the growth of n, and as n grows toward infinity, the coefficient becomes negligible.

    If f(n) is O(g(n)), then kf(n) is also O(g(n)) for any constant k greater than zero, where O(g(n)) is the time complexity.

    If an algorithm has a complexity of 2n^3 + 3n + 1, this rule tells us to ignore the constants "2", "3" and "1". Keeping only the dominant term, f(n) simplifies to O(n^3). (A sketch of such a function appears at the end of this rule.)

    Let's look at the code example below.

      function someFunction(n) {
        // The loop runs 1000 * n times, so f(n) = 1000n
        for (let i = 0; i < n * 1000; i++) {
            console.log(i);
        }
      }
    

    By analyzing the code example, we can see the function has a single for-loop, and its f(n) = 1000n. The coefficient "1000" can be ignored, so the example has a Big-O notation of O(n).

    For small input sizes, the coefficient can still affect the actual execution time. The coefficient rule describes the asymptotic behavior, where n grows toward infinity.
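
    Returning to the 2n^3 + 3n + 1 example above, here is a minimal, hypothetical sketch (the function countOperations is invented for illustration) whose step count is roughly 2n^3 + 3n + 1, showing where each term could come from:

      function countOperations(n) {
        let count = 1; // the constant "+ 1" step
        // Two passes over a triple-nested loop: about 2 * n^3 steps
        for (let pass = 0; pass < 2; pass++) {
          for (let i = 0; i < n; i++) {
            for (let j = 0; j < n; j++) {
              for (let k = 0; k < n; k++) {
                count++;
              }
            }
          }
        }
        // A loop that runs 3 * n times: the "3n" term
        for (let i = 0; i < 3 * n; i++) {
          count++;
        }
        return count; // roughly 2n^3 + 3n + 1
      }

    As n grows, the 2n^3 term dwarfs the others, which is why the whole expression collapses to O(n^3).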

  • Sum rule: This rule states that if an algorithm performs two separate operations one after the other, its overall time complexity is the sum of the individual time complexities, and its Big-O notation is the Big-O of that sum.

    Suppose an algorithm has two different operations. One operation has a time complexity of O(f(n)) and the other a time complexity of O(g(n)). The overall time complexity according to the sum rule is O(f(n) + g(n)).

    Let's consider the code example below.

      function sumRule(inputArray1, inputArray2) {
          // Operation X: runs inputArray1.length times
          for (let i = 0; i < inputArray1.length; i++) {
              console.log(inputArray1[i])
          }
          // Operation Y: runs 3 * inputArray2.length times
          for (let i = 0; i < 3 * inputArray2.length; i++) {
              console.log(inputArray2[i % inputArray2.length])
          }
      }
    

    By analyzing the code above, we see that there are two separate operations, Operation X and Operation Y. Each loops through an array and logs its elements to the console. Operation X has a time complexity of O(n), while Operation Y has a time complexity of O(3n).

    According to the sum rule, the f(n) of the algorithm is the sum of these operations. It can be expressed as n + 3n, which equals 4n. By applying the coefficient rule, the final result is O(n).

    It is important to note that these operations are independent and run one after the other, which is why their costs add up. This rule does not imply that the operations are nested or executed together.
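
    As a further illustration (a hypothetical sketch, not from the original example), summing an O(n) operation with an O(n^2) operation gives O(n + n^2), which simplifies to O(n^2) because the quadratic term dominates:

      function sumRuleMixed(inputArray) {
          const n = inputArray.length;
          // Operation A: a single loop, O(n)
          for (let i = 0; i < n; i++) {
              console.log(inputArray[i])
          }
          // Operation B: a nested loop, O(n^2)
          for (let i = 0; i < n; i++) {
              for (let j = 0; j < n; j++) {
                  console.log(inputArray[i] + inputArray[j])
              }
          }
      }
      // Overall: O(n + n^2) = O(n^2)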

  • Product rule: This rule states that when one operation is performed inside another, as in nested loops, the overall time complexity is the product of the individual time complexities, and the Big-O notations are multiplied in the same way.

    If f(n) and g(n) are algorithm complexities with Big-O notations O(h(n)) and O(p(n)) respectively, then according to the product rule, f(n)g(n) is O(h(n)p(n)).

    A use case is estimating the overall complexity of nested loops in Big-O notation. Let's have a look at the example below.

      function productRule(inputArray1, inputArray2) {
          // Operation X: outer loop, runs inputArray1.length times
          for (let i = 0; i < inputArray1.length; i++) {
              console.log(inputArray1[i])
              // Operation Y: inner loop, runs 3 * inputArray2.length times per outer iteration
              for (let j = 0; j < 3 * inputArray2.length; j++) {
                  console.log(inputArray2[j % inputArray2.length])
              }
          }
      }
    

    In the code example, Operation X is a loop that contains a nested loop, Operation Y. The f(n) of Operation X is n and that of Operation Y is 3n. According to the product rule, the overall f(n) is 3n^2. By applying the coefficient rule, the final result is n^2, so the overall time complexity resolves to O(n^2).
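
    If the two loops run over inputs of different sizes, say n and m, the same rule gives the product of the two sizes. Here is a brief hypothetical sketch (allPairs is an invented example):

      function allPairs(inputArray1, inputArray2) {
          // Outer loop: n iterations; inner loop: m iterations => O(n * m)
          for (let i = 0; i < inputArray1.length; i++) {
              for (let j = 0; j < inputArray2.length; j++) {
                  console.log(inputArray1[i], inputArray2[j])
              }
          }
      }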

  • Transitive rule: This rule states that Big-O bounds carry over: if f(n) is O(g(n)) and g(n) is O(h(n)), then f(n) is O(h(n)).

    Let's look at an instance. Assuming the time complexity of function A is bounded by the time complexity of function B, and the time complexity of function B is bounded by that of function C, we can conclude that the time complexity of function A is also bounded by that of function C. In other words, any upper bound on C is also an upper bound on A.
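
    As a concrete (illustrative) example: f(n) = 3n + 2 is O(n), and n is O(n^2) since n <= n^2 for n >= 1. By the transitive rule, 3n + 2 is therefore also O(n^2).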

  • Polynomial rule: The polynomial rule states that a time complexity described by a polynomial function of the input size n has a Big-O notation of the same polynomial degree.

    Mathematically, if f(n) is a polynomial of degree k, then f(n) is O(n^k).

    Let's analyze the code below.

      function a(n) {
          let count = 0;
          // The loop runs n * n times, so f(n) = n^2
          for (let i = 0; i < n * n; i++) {
              count += 1;
          }
          return count;
      }
    

    This function has one for-loop. The algorithm complexity f(n) is n * n, which is n^2. Since f(n) is a polynomial of degree 2, we can conclude based on the polynomial rule that the algorithm has a quadratic time complexity of O(n^2).
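
    For instance, f(n) = 5n^3 + 2n^2 + 7 is a polynomial of degree 3, so by this rule it is O(n^3).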

  • Log of a power rule: This rule states that constants inside a log function can be ignored. Mathematically, log(n^k) = k * log(n), which is O(log(n)) for any constant k greater than zero; in the same way, a constant factor inside the logarithm, as in log(k * n) = log(k) + log(n), can be dropped. Let's take a look at this example.

      function exampleLogarithmic(n) {
        // i doubles on every iteration, so the loop runs about log2(3n) times
        for (let i = 2; i <= 3 * n; i = i * 2) {
          console.log(i);
        }
      }
    

    The above code has a loop condition of i <= 3n, and because i doubles on every iteration, the loop runs about log(3n) times, so f(n) = log(3n). According to this rule, the constant factor 3 inside the logarithm does not affect the overall growth rate. Hence, the time complexity of log(3n) in Big-O notation is still O(log n).
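
    To see this numerically (an illustrative calculation): for n = 1,000, log2(3n) = log2(3) + log2(1000) ≈ 1.58 + 9.97 ≈ 11.55, while log2(n) ≈ 9.97. The constant 3 only ever adds a fixed ~1.58 iterations, no matter how large n gets.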

Conclusion

It is essential to understand the rules of Big-O notation in order to analyze and compare the efficiency of algorithms. With these rules, we can express the time complexity of an algorithm in terms of its growth rate relative to the size of the input.
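
As a final illustration, here is a hypothetical sketch (combinedExample is an invented function) that combines several of the rules above. Its first loop contributes 2n steps and its second, doubling loop contributes about log(n) steps, so f(n) = 2n + log(n); the sum rule keeps both terms, the coefficient rule drops the 2, and since n dominates log(n), the overall complexity is O(n):

  function combinedExample(n) {
      // First loop: runs 2 * n times => contributes 2n
      for (let i = 0; i < 2 * n; i++) {
          console.log(i);
      }
      // Second loop: i doubles each iteration => contributes about log2(n)
      for (let i = 1; i <= n; i = i * 2) {
          console.log(i);
      }
  }
  // f(n) = 2n + log(n), which simplifies to O(n)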

Found this article helpful?

Feel free to find me around the web for questions and clarifications.

Thanks for reading to the end!


Written by

Munir Abdullahi

I am a backend developer who is also interested in technical writing and open-source development. I am very open to collaboration, mentorship, and internship roles.