Intro to Algorithms: Part 1
This article is split into two sections that each cover important topics regarding the basics of Algorithms and some of the knowledge needed to apply them to specific types of software problems.
The first section will address:
- The technical definition of an Algorithm
- The Time & Space complexity and the Asymptotic Notation of Algorithms
The second section will cover:
- The Fundamental Algorithms
- Application of these Algorithms through Problem-solving techniques
- Different classes of Problems
Algorithms are generally used in software development to make software more efficient by handling the main logic of the desired problem-solving solution. These solutions solve problems sequentially by applying the given logical execution steps one after the other. Examples of algorithms outside of software can be seen in simple mathematical operations; when adding and subtracting numbers, the method of carrying the 1 from the units column to the tens column is a small logic algorithm.
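As a rough sketch (the class and method names here are my own, purely for illustration), that pen-and-paper carrying logic can be written out in code:

```java
public class CarryAddition {
    // Adds two non-negative numbers column by column, carrying a 1 to the
    // next column whenever a column's sum reaches 10 -- the same manual
    // algorithm used in pen-and-paper addition.
    public static int addWithCarry(int a, int b) {
        int result = 0;
        int carry = 0;
        int place = 1; // current column: units, tens, hundreds...
        while (a > 0 || b > 0 || carry > 0) {
            int columnSum = (a % 10) + (b % 10) + carry;
            carry = columnSum / 10;            // carry the 1 (or 0) forward
            result += (columnSum % 10) * place; // keep this column's digit
            a /= 10;
            b /= 10;
            place *= 10;
        }
        return result;
    }
}
```

Each pass of the loop is one column of the written sum, which is exactly the sequential, step-after-step execution described above.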
Similarly, in software development algorithms can be transferred and reused across programming languages, since they rely on the logic underlying those languages rather than on language-specific features, although specific algorithms may be tailored to certain problem classes and/or to certain computing architectures (CPUs, GPUs).
One complexity that arises from applying the same logical problem-solving techniques across all logic-based languages is measurement: how do we determine the scalability and usability of an algorithm when variations such as a language's compilation time and a device's computing power make it difficult to establish its baseline performance? Time & space complexity and asymptotic notation were established to create these baselines, so that algorithms could be compared across languages, architectures, and compilation times.
Time & Space Complexity
The time complexity of an algorithm refers to the number of operations it performs as a function of the size of its input. The number of operations relates to time because each operation performed is assumed to take a fixed amount of time to complete.
While time complexity refers to operations performed, space complexity refers to the memory space the algorithm requires over its life cycle. This total space can be described in two parts: the constant part, which is independent of the input size (the size of the program itself, imported modules), and the variable part, whose required space depends exclusively on the input size. The total space complexity can therefore be expressed with this formula:
Space Complexity S(n) = Constant part + Variable part

S(n) = C + V
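To make the constant/variable distinction concrete, here is a small sketch of my own (not taken from any library) showing one method of each kind:

```java
import java.util.ArrayList;
import java.util.List;

public class SpaceExample {
    // Constant extra space: only a fixed number of local variables are
    // used no matter how large the input list is, so S(n) = C.
    public static int sum(List<Integer> numbers) {
        int total = 0; // one int, regardless of n
        for (int number : numbers) {
            total += number;
        }
        return total;
    }

    // Variable extra space: the result list grows with the input, so the
    // space used is C + V, where V is proportional to n.
    public static List<Integer> doubled(List<Integer> numbers) {
        List<Integer> result = new ArrayList<>(numbers.size());
        for (int number : numbers) {
            result.add(number * 2);
        }
        return result;
    }
}
```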
Asymptotic Notation
This type of mathematical notation is used to describe the execution run time of an algorithm as a function of the given problem inputs. Big-Θ (Theta) notation is computed by calculating the number of iterative steps the algorithm will always take for an input of size N. For example, this loop iterates N times for a list of size N, which is described as Θ(N):

```
for each item in list:
    print item
```
Big-O is used to describe the worst possible execution runtime for an algorithm. We compute the Big-O by observing the number of iterative steps the algorithm takes in the worst-case scenario with an input of size N. We typically consult the Big-O because we must always plan for the worst case. For example, O(log n) describes the Big-O of a binary search algorithm.
Big-Ω (Omega) describes the best possible execution runtime. We compute the Big-Ω by counting how many iterative steps the algorithm takes in the best-case scenario with an input of size N. For example, a bubble sort algorithm has a running time of Ω(N) because in the best case the list is already sorted, and the bubble sort terminates after the first pass.
Examples:
![Big_O_Notation.jpg](https://cdn.hashnode.com/res/hashnode/image/upload/v1660648564777/X7dBdFe-W.jpg align="left")
## O(1)

```java
public boolean isFirstNumberEqualToOne(List<Integer> numbers) {
    return numbers.get(0) == 1;
}
```
O(1) represents a function that always takes the same time regardless of input size.
## O(n)

```java
public boolean containsNumber(List<Integer> numbers, int comparisonNumber) {
    for (Integer number : numbers) {
        if (number == comparisonNumber) {
            return true;
        }
    }
    return false;
}
```
O(n) represents the complexity of a function that increases linearly and in direct proportion to the number of inputs. This is a good example of how Big-O notation describes the worst-case scenario, as the function could return true after reading the first element or false only after reading all n elements.
## O(n^2)

```java
public static boolean containsDuplicates(List<Integer> input) {
    for (int outer = 0; outer < input.size(); outer++) {
        for (int inner = 0; inner < input.size(); inner++) {
            if (outer != inner && input.get(outer).equals(input.get(inner))) {
                return true;
            }
        }
    }
    return false;
}
```
O(n^2) represents a function whose complexity is directly proportional to the square of the input size. Adding further nested iterations over the input increases the exponent: three nested loops give O(n^3), four give O(n^4), and so on.
## O(2^n)

```java
public int fibonacci(int number) {
    if (number <= 1) {
        return number;
    } else {
        return fibonacci(number - 1) + fibonacci(number - 2);
    }
}
```
O(2^n) represents a function whose performance doubles for every element in the input. This example is the recursive calculation of Fibonacci numbers. The function falls under O(2^n) as the function recursively calls itself twice for each input number until the number is less than or equal to one.
## O(log n)

```java
public boolean containsNumber(List<Integer> numbers, int comparisonNumber) {
    int low = 0;
    int high = numbers.size() - 1;
    while (low <= high) {
        int middle = low + (high - low) / 2;
        if (comparisonNumber < numbers.get(middle)) {
            high = middle - 1;
        } else if (comparisonNumber > numbers.get(middle)) {
            low = middle + 1;
        } else {
            return true;
        }
    }
    return false;
}
```

O(log n) represents a function whose complexity increases logarithmically as the input size increases. This makes O(log n) functions scale very well, so handling larger inputs is much less likely to cause performance problems. The example above uses a binary search to check whether the sorted input list contains a certain number. In simple terms, it halves the search range on each iteration until the number is found or no elements remain.
I hope this brief exposé on algorithms has provided some explanations of topics I struggled with when first introduced to them.
Part 2 will follow shortly. See you then. ✌️