DSA - Analysis

There are several steps to constructing an algorithm, and analysis is the last of them. But before we start solving a problem, we need to study the analysis part well; only then can we tell whether our algorithm works well or not.

To write optimized code, there are two factors to consider.

  1. Time complexity
  2. Space Complexity

Our target should be lower time complexity and lower space complexity. Nowadays time is more important than space, since we can easily add extra space (memory) today, but a time machine is not available.
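For example, here is a minimal sketch of my own (the function names are just illustrative) showing the same duplicate-check problem solved two ways in Python: the first version saves space but costs time, the second spends extra space to save time.

```python
# Illustrative sketch: trading space for time.
# Both functions check whether a list contains a duplicate value.

def has_duplicate_slow(items):
    """O(n^2) time, O(1) extra space: compare every pair."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicate_fast(items):
    """O(n) time, O(n) extra space: remember seen values in a set."""
    seen = set()
    for value in items:
        if value in seen:
            return True
        seen.add(value)
    return False

print(has_duplicate_slow([3, 1, 4, 1, 5]))  # True
print(has_duplicate_fast([2, 7, 1, 8]))     # False
```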

Asymptotic Notations:

  1. Big O -> Worst case scenario (important; see the linear-search sketch after this list)
  2. Omega -> Best case scenario
  3. Theta -> Average case scenario
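As a small illustration (my own sketch, not tied to any particular library), linear search shows all three cases on the same piece of code:

```python
# Illustrative example: linear search, whose running time depends on
# where (and whether) the target appears in the list.

def linear_search(items, target):
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1

data = [10, 20, 30, 40, 50]

linear_search(data, 10)  # best case: found at the first position -> Omega(1)
linear_search(data, 50)  # worst case: found at the last position -> O(n)
linear_search(data, 99)  # also worst case: not found at all      -> O(n)
# averaged over all positions, roughly n/2 comparisons             -> Theta(n)
```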

There are two types of analysis:

  1. A posteriori Analysis - Depends on the language, compiler and type of hardware used - Exact analysis (see the sketch after this list)
  2. A priori Analysis - Independent of the language, compiler and type of hardware used - Approximate analysis - Big O
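A quick sketch of my own to show the difference: the a posteriori part measures real elapsed time on your machine (so it changes with hardware and interpreter), while the a priori part just counts how many times the loop body runs.

```python
# A posteriori analysis: measure actual elapsed time on this machine.
# A priori analysis: count how many times the statement executes.

import time

def sum_upto(n):
    total = 0
    for i in range(n):   # a priori: this statement runs n times -> O(n)
        total += i
    return total

start = time.perf_counter()
sum_upto(1_000_000)
elapsed = time.perf_counter() - start  # a posteriori: depends on CPU, Python version, load
print(f"measured: {elapsed:.4f} s, counted: ~1,000,000 additions")
```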

A priori Analysis:

Order of magnitude of a statement, i.e., the number of times a statement runs.

Notes:

  • Time complexity usually comes from loops
  • Not only loops - a single statement also counts, but we always keep the bigger (dominant) term, as the counting sketch after these notes shows
  • If there is no loop, the time complexity is O(1)
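A small counting sketch of my own showing how only the bigger term survives:

```python
# Count how many times each statement runs, then keep the dominant term.

def demo(n):
    total = 0                   # single statement: runs once       -> O(1)
    for i in range(n):          # outer loop: n passes
        for j in range(n):      # inner loop: n passes per outer pass
            total += 1          # runs n * n times                  -> O(n^2)
    for k in range(n):
        total += 1              # runs n more times                 -> O(n)
    return total                # overall: 1 + n^2 + n -> keep the bigger term -> O(n^2)
```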

Complexity classes (small examples follow the list):

  1. Constant - O(1)
  2. Logarithmic - O(log n)
  3. Linear - O(n)
  4. Quadratic - O(n^2)
  5. Cubic - O(n^3)
  6. Polynomial - O(n^c)
  7. Exponential - O(c^n)
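Here are tiny illustrative examples of my own for a few of these classes (function names are just for illustration):

```python
def constant(items):       # O(1): same work regardless of input size
    return items[0]

def logarithmic(n):        # O(log n): n is halved on every iteration
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

def linear(items):         # O(n): touch every element once
    return sum(items)

def quadratic(items):      # O(n^2): pair every element with every element
    return [(a, b) for a in items for b in items]
```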