The Art of Time Complexity

On hearing “Time Complexity”, many assume it’s about how long a piece of code takes to run. But that’s only part of the story. Execution time can vary based on factors like hardware, servers, and network speed.
Time complexity, on the other hand, is about understanding how your code scales as the input grows. Meaning!? 🤔 Time Complexity is a mathematical function that describes how the number of inputs affects the number of operations. 🤷 In other words, Time Complexity shows how the number of operations grows as the input size increases.
The concept of Time Complexity emerged from the broader field of algorithm analysis and computational complexity theory in Computer Science.
It is a crucial part of Data Structures and Algorithms.
Data structures are ways of organising and storing data in a computer so that it can be accessed and used efficiently.
Algorithms are step-by-step procedures or formulas designed to solve problems by processing data efficiently.
Time complexity != Time Taken
Let’s take an example. Suppose we have a function, say printMeow, which takes an array of integers. This function will always take the same amount of time to execute, as it does not depend on the array size. So it doesn’t matter if the array is empty or has 1 million elements.
// printMeow does not depend on the array size.
void printMeow(List<int> numbers) {
  print('MEOW');
}
Now consider a printArray function which prints all the elements of an array. In this case, execution takes longer as the length of the array increases.
// The size of the array matters here: one print per element.
void printArray(List<int> numbers) {
  for (int i = 0; i < numbers.length; i++) {
    print(numbers[i]);
  }
}
Here, if the array is empty, no loop operations are executed: zero operations for an input of size 0.
If the array contains 100 elements, then 100 operations are executed: 100 operations for an input of size 100.
If we plot a graph of this relation, with input size n on the x-axis and the number of operations on the y-axis, the points lie along the line y = x.
This is called Linear Time Complexity, represented by O(n) (pronounced: Big O of n), which we will discuss in detail soon.
Notation for Time Complexity: O(n)
🗒️ Note :
When discussing time complexity, we focus on larger inputs because scalability matters — a website handling 1 million users needs more efficiency than one with just 1,000 users. That’s why it’s called Big O. We often consider the worst-case scenario to ensure robust performance in real-world software.
🤔 Now you might ask: why do we need Time Complexity, and why should I know about it!? Let’s understand.
Time complexity isn’t just about speed — it’s about understanding how algorithms scale.
When input size grows, execution time changes. Time complexity gives you a formula to predict that growth, independent of hardware or environment. It’s a tool to:
Analyse Performance: Predict how your code behaves as data increases.
Compare Algorithms: Choose the most efficient solution for large-scale problems.
Optimise Early: Spot inefficiencies before they become bottlenecks.
So, it is not just theory, but a technique to write better code that handles any scenario, executes faster, and scales smoothly as data grows.
Note : As we discussed, Time Complexity is a mathematical function, and just as a mathematical function can take infinitely many values, there are infinitely many possible Time Complexities too.
Let’s explore some popular Time Complexity examples to understand how to calculate time complexity. (Also note that we will neglect constants and small terms, and focus on the larger terms that dominate execution.)
Throughout the examples we will assume that the model machine used takes:
1 unit time for arithmetic and logical operations
1 unit time for assignment and return statements
Example 1: Constant Time Complexity [ O(1) ]
int sum(int a, int b) {
  return a + b;
}

void main() {
  print(sum(5, 6));
}
The above code will take 2 units of time (constant):
one unit for the arithmetic operation, and
one unit for the return statement (as per the conventions above).
Therefore, the total cost to perform the sum operation is Tsum = 1 + 1 = 2.
Time Complexity = O(2) = O(1), since 2 is a constant. No matter what the values of a and b are, Tsum will always be the same, and hence constant.
This is called Constant Time Complexity.
Constant Time Complexity
Example 2: Linear Time Complexity [ O(n) ]
The printArray function from earlier is an example of Linear Time Complexity, and we can calculate TprintArray like this:
int i = 0 happens once, taking 1 unit.
Condition check i < numbers.length: runs n + 1 times (once extra when the loop finally fails), so n + 1 units.
print(numbers[i]) runs n times, so n units.
Increment i++ happens n times, so n units.
total = 1 + (n + 1) + n + n
total = 3n + 2
Ignore constants and lower-order terms → O(n)
Another real-world example of this type is the Linear Search algorithm. When we search linearly for an element, checking index by index, in the worst case we traverse the whole array, as sketched below.
Linear Time Complexity
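As a rough sketch (the name linearSearch and its signature are illustrative, not from the article), a linear search in Dart could look like this:

// Returns the index of target, or -1 if it is not present.
// Worst case: target is at the end or absent, so the loop runs all n times.
int linearSearch(List<int> numbers, int target) {
  for (int i = 0; i < numbers.length; i++) {
    if (numbers[i] == target) return i; // found: stop early
  }
  return -1; // checked every element without success
}

void main() {
  print(linearSearch([4, 8, 15, 16, 23, 42], 23)); // 4
  print(linearSearch([4, 8, 15, 16, 23, 42], 7)); // -1
}

In the worst case the loop runs all n iterations, which is exactly the O(n) behaviour described above.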
Example 3: Quadratic Time Complexity [ O(n*n) / O(n*m) / O(n*n*n) ….]
Quadratic Time Complexity occurs when you have two or more nested loops, for example iterating over two different arrays. Let’s see an example:
void printAllPairs(List<int> arr1, List<int> arr2) {
  for (int i = 0; i < arr1.length; i++) {
    for (int j = 0; j < arr2.length; j++) {
      print('${arr1[i]}, ${arr2[j]}');
    }
  }
}
In this code:
Outer loop runs n times (where n = arr1.length).
Inner loop runs m times for each iteration of the outer loop (where m = arr2.length).
Total operations = n * m
TprintAllPairs = 1 (outer init) + (n + 1) (outer condition) + n (outer increment)
+ n × [ 1 (inner init) + (m + 1) (inner condition) + m (inner print) + m (inner increment) ]
TprintAllPairs = 1 + (n + 1) + n + n(3m + 2) = 3nm + 4n + 2
Ignore constants and lower-order terms → O(n * m)
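To make the n × m count concrete, here is a small usage example (the values are my own, calling the printAllPairs function above):

void main() {
  // n = 2, m = 3, so the nested loops print 2 × 3 = 6 pairs.
  printAllPairs([1, 2], [10, 20, 30]);
}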
Quadratic Time Complexity
Example 4: Log(n) Time Complexity [ O(log n) ]
Log(n) complexity shows up in Binary Search.
If we want to find an element in an array of size n, rather than searching linearly we can apply binary search to reduce the number of operations by following the steps below.
Sort the array in ascending order.
Check whether the middle element is smaller or larger than the value we want to find.
Narrow the search to the half of the array where the value can lie, and repeat step 2 on that portion until you find the value.
Now, instead of dividing the array in half each time, let’s go in reverse and think of it as doubling the size (as shown in the whiteboard example).
This way, we double an array of size 1, say x times, until we reach the original array size n.
Then we have the equation:
1 × 2 × 2 × … × 2 (x times) = n
i.e. 2^x = n
i.e. x = log₂(n)
So the number of operations is log₂(n), i.e. O(log n).
Log(n) Time Complexity
Note that O(log n) is better than O(n), as the number of operations is smaller here. A code sketch of binary search follows below.
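Here is a minimal binary search sketch in Dart (the name binarySearch and the example values are illustrative, not from the article). Each iteration halves the remaining range, which is where the log₂(n) operation count comes from:

// Works on a sorted list; each iteration halves the remaining range.
int binarySearch(List<int> sorted, int target) {
  int low = 0;
  int high = sorted.length - 1;
  while (low <= high) {
    int mid = low + (high - low) ~/ 2; // middle index (integer division)
    if (sorted[mid] == target) return mid; // found
    if (sorted[mid] < target) {
      low = mid + 1; // target can only lie in the right half
    } else {
      high = mid - 1; // target can only lie in the left half
    }
  }
  return -1; // not found
}

void main() {
  final arr = [1, 3, 5, 7, 9, 11, 13];
  print(binarySearch(arr, 7)); // 3
  print(binarySearch(arr, 4)); // -1
}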
Other popular Time Complexities include:
O(n * log(n)): Found in efficient sorting algorithms like Merge Sort (and Quick Sort on average).
O(2ⁿ): Exponential Time Complexity, often seen in naive recursive algorithms — and yes, it’s as bad as it sounds (see the sketch after this list).
O(n!): N factorial, rarely seen, but terrifying when it shows up. For context, 4! = 4 × 3 × 2 × 1 = 24 — already faster growth than 2⁴ = 16 (Exponential complexity).
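To make the exponential case concrete, here is a classic illustration (my own example, not from the article): naive recursive Fibonacci makes two recursive calls per step, so the number of calls roughly doubles as n grows, giving O(2ⁿ).

// Naive recursive Fibonacci: the call tree roughly doubles with n.
int fib(int n) {
  if (n <= 1) return n; // base cases: fib(0) = 0, fib(1) = 1
  return fib(n - 1) + fib(n - 2); // two branches per call
}

void main() {
  print(fib(10)); // 55, computed quickly
  // fib(50) with this approach would take an extremely long time.
}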
Here’s the graph comparing all these Time Complexities.
This graph visually compares different time complexities. As input size grows, O(1) stays constant, O(log n) and O(n) scale efficiently, while O(n log n) grows moderately. O(n²), O(n³), O(2ⁿ), and O(n!) explode rapidly — highlighting why picking the right algorithm is crucial for performance.
Understanding time complexity isn’t just theory — it’s the key to writing code that scales. As the input size grows, the right algorithm can mean the difference between milliseconds and hours.
That’s it for this article! I hope you learned something new and got a clearer understanding of time complexity.
If you found this article valuable or have any questions, feel free to reach out on LinkedIn and Twitter and follow for more such articles. I’d love to hear your thoughts and connect! Let’s keep learning and growing together. 🌱
👋 See you all in next article. Till then keep learning, keep sharing :)