Lecture 01 - Introduction

Akash Jaiswal

Basic Definition

  1. Data - refers to raw facts and figures that may not carry any meaning on their own → it is used in computation after processing. It has the following types :

    1. Primitive:

      • int – Integer values (e.g., …, -1, 0, 1,…)

      • float – Decimal numbers (e.g., -5.4, 0.0, 9.8, …)

      • char – Single characters (e.g., 'a', 'A', '%',… )

      • boolean – Logical values (true or false)

    2. Non-Primitive: arrays, lists, trees, graphs (to be covered later)

  2. Structure - refers to the way data is organised, stored, and related in memory so that it can be used efficiently. Examples :

    • Linear: Arrays, Linked Lists, Stacks, Queues

    • Non-Linear: Trees, Graphs

    • Hash-based: Hash Tables, Hash Maps

  3. Algorithm - refers to a well-defined, finite sequence of steps needed to solve a problem

    • Examples - Searching, Sorting, etc.
  4. Data Structures and Algorithms (DSA) - refers to combining all of the above concepts → the study of organising data (data structures) and designing methods (algorithms) to process and manipulate that data efficiently. It helps with problem solving, efficiency, and scalability.

Time and Space Complexity -

What do you think the terms time and space complexity mean in programming?

  • The time taken to execute the code and the memory required to write it?

  • That is what I used to think before I learned what these terms actually mean. Their real meanings are as follows -

    • Time complexity - measure of the amount of time an algorithm takes to run as a function of the size of the input (n).

    • Space complexity - measure of the amount of memory an algorithm uses in relation to the input size. It tells you how much extra memory your algorithm uses besides the input. It includes :

      • Input storage - when the algorithm makes a copy of (or modifies) the input

      • Auxiliary space – additional memory used by the algorithm, such as:

        • Variables

        • Data structures (arrays, stacks, hash maps, etc.)

        • Function call stack (especially in recursion)


How would you know if a program is good or bad ?

  • Is it enough if a program simply solves the given problem?
    Well, not really. Writing good code is about more than just getting the correct output.

  • Let me ask you something:
    Do you enjoy waiting at red lights? Probably not. Even a few seconds of delay can be frustrating. The same goes for users — when your code takes time to respond or starts buffering, it creates a poor user experience.

  • This is where time complexity comes in. It refers to the amount of time an algorithm takes to run as a function of the input size. Sometimes, you may need to write more lines of code just to optimize performance and reduce execution time.

  • There’s also space complexity, which is about how much memory your algorithm uses. While it's important, it's not a major concern at this stage — hardware storage can often be increased. So, in this series, we'll focus more on understanding and improving time complexity to write efficient code.


Example

Problem Summary:

You are given 9 balls — array[0] to array[8] — where:

  • 8 balls have equal weight

  • 1 ball is heavier than the rest

Your task is to find the index of the heavier ball.

Naive Approach (Linear Search)

We can write a simple loop that checks each ball one by one:

If we assume each comparison takes 1 second, then in the worst case, the loop will check 8 times, taking 8 seconds to find the heavier ball.

#include <iostream>
using namespace std;
// The heavier ball is represented by weight 1; the normal balls by 0.
int index(int* array){
    for(int i=0; i<=7; i++) {          // compare the first 8 balls one by one
        if(array[i] == 1) {return i;}  // found the heavier ball
    }
    return 8;                          // none of the first 8 matched, so it is the last one
}
int main() {
    int array[9]={0,0,0,0,0,1,0,0,0};
    cout << "Heavier ball found at index: " << index(array) << endl;
    return 0;
}

Optimized Approach (Divide & Compare)

We can reduce the number of comparisons by grouping the balls into 3 sets of 3 and comparing the total weights of each group:

#include <iostream>
using namespace std;
int index(int* array){
    // Total weight of each group of three balls.
    int A = array[0] + array[1] + array[2];
    int B = array[3] + array[4] + array[5];
    int C = array[6] + array[7] + array[8];
    if( A == B ) {                     // heavier ball is in the third group
        if(array[6] > array[7]){return 6;}
        else if(array[6] < array[7]){return 7;}
        else{return 8;}
    }
    else if( B == C ) {                // heavier ball is in the first group
        if(array[0] > array[1]){return 0;}
        else if(array[0] < array[1]){return 1;}
        else{return 2;}
    }
    else{                              // otherwise it is in the second group
        if(array[3] > array[4]){return 3;}
        else if(array[3] < array[4]){return 4;}
        else{return 5;}
    }
}
int main() {
    int array[9] = {0, 0, 0, 0, 0, 1, 0, 0, 0};
    cout << "Heavier ball found at index: " << index(array) << endl;
    return 0;
}

How many comparisons does this take in the worst case?

  • 2 group sum comparisons: A == B, B == C

  • 2 individual comparisons: to pinpoint the heavier ball

Total: 4 comparisons, implying 4 seconds (versus 8 for the naive approach)


Asymptotic Notations -

  1. Omega (Ω) — Best Case

    Describes the minimum time the algorithm could take.

    Example: Linear Search (if item is first) → Ω(1)

  2. Theta (Θ) — Average / Exact Bound

    Describes the tight bound — when best and worst cases grow the same.

    Example: Traversing an array → Θ(n)

  3. Big O (O) — Worst Case

    Describes the maximum time an algorithm can take (most commonly used).

    Example: Binary Search → O(log n), Bubble Sort → O(n²)
