Asymptotic Notation : -
Asymptotic notation is used to describe the running time of an algorithm - how the running time grows as a function of the input size, n.
There are three different notations:
- Big O Notation
- Big Theta (Θ) Notation
- Big Omega (Ω) Notation
Big-Θ gives a tight bound and is used when the running time is the same in every case, Big-O gives an upper bound on the worst-case running time, and Big-Ω gives a lower bound on the best-case running time.
Big Θ notation : -
We compute the big-Θ of an algorithm by counting the number of iterations the algorithm always takes with an input of size n. For instance, a loop that iterates exactly N times for a list of size N, in every case, has a running time of Θ(N).
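As a minimal sketch of such a loop (the function name is illustrative, not from the original):

```python
def sum_list(items):
    # The loop body runs exactly once per element, regardless of the
    # values stored in the list, so the running time is Θ(N).
    total = 0
    for x in items:
        total += x
    return total
```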
Big-O Notation : -
The Big-O notation describes the worst-case running time of a program. We compute the Big-O of an algorithm by counting how many iterations an algorithm will take in the worst-case scenario with an input of N. We typically consult the Big-O because we must always plan for the worst case. For example, O(log n) describes the Big-O of a binary search algorithm.
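A standard iterative binary search illustrates the O(log n) bound mentioned above (this is one common formulation, not code from the original):

```python
def binary_search(sorted_items, target):
    # Each comparison discards half of the remaining range, so the
    # worst case performs about log2(n) iterations: O(log n).
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # target not present
```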
Big-Ω Notation : -
Big-Ω (Omega) describes the best-case running time of a program. We compute the big-Ω by counting how many iterations an algorithm will take in the best-case scenario with an input of size n. For example, an optimized Bubble Sort (one that stops when a pass makes no swaps) has a running time of Ω(N), because in the best case the list is already sorted and the algorithm terminates after a single pass.
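A sketch of that early-exit Bubble Sort (an assumed, common variant; the flag name is arbitrary):

```python
def bubble_sort(items):
    # The 'swapped' flag gives the Ω(N) best case: if one full pass
    # makes no swaps, the list is already sorted and we stop early.
    # Worst case remains O(N^2).
    items = list(items)
    n = len(items)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:
            break  # best case: one pass over an already-sorted list
    return items
```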
Time Complexity and Space Complexity : -
Time Complexity -
The time complexity of an algorithm is the amount of time taken by the algorithm to complete its process as a function of its input length, n. The time complexity of an algorithm is commonly expressed using asymptotic notations:
- Big O - O(n),
- Big Theta - Θ(n)
- Big Omega - Ω(n)
Space Complexity -
The space complexity of an algorithm is the amount of space (or memory) taken by the algorithm to run as a function of its input length, n. Space complexity includes both auxiliary space and space used by the input.
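To make the auxiliary-space distinction concrete, here is a sketch contrasting O(1) and O(n) auxiliary space (function names are illustrative):

```python
def reverse_in_place(items):
    # O(1) auxiliary space: only two index variables beyond the input.
    lo, hi = 0, len(items) - 1
    while lo < hi:
        items[lo], items[hi] = items[hi], items[lo]
        lo += 1
        hi -= 1
    return items

def reversed_copy(items):
    # O(n) auxiliary space: allocates a new list as large as the input.
    return items[::-1]
```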
Analysis of the Time Complexity : -
Big O notation expresses the run time of an algorithm in terms of how quickly it grows relative to the input (this input is called “n”).
Constant Time Complexity: O(1)
When time complexity is constant (notated as “O(1)”), the size of the input (n) doesn’t matter. Algorithms with Constant Time Complexity take a constant amount of time to run, independently of the size of n. They don’t change their run-time in response to the input data, which makes them the fastest algorithms out there.
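A trivial sketch of an O(1) operation (the function name is made up for illustration):

```python
def get_first(items):
    # One indexing operation no matter how long the list is: O(1).
    return items[0]
```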
Linear Time Complexity: O(n)
When time complexity grows in direct proportion to the size of the input, you are facing Linear Time Complexity, or O(n). Algorithms with this time complexity will process the input (n) in “n” number of operations. This means that as the input grows, the algorithm takes proportionally longer to complete.
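A linear search is a typical O(n) example (a sketch, not code from the original):

```python
def linear_search(items, target):
    # In the worst case every element is inspected once: O(n).
    for i, x in enumerate(items):
        if x == target:
            return i
    return -1  # target not present
```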
Logarithmic Time Complexity: O(log n)
Algorithms with this complexity make computation remarkably fast. An algorithm is said to run in logarithmic time if its execution time is proportional to the logarithm of the input size. Each step cuts the remaining input down by a constant factor (typically half), so doubling the input size n adds only one more step. Binary search is the classic example.
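A small sketch of why the step count is logarithmic, by counting how many times n can be halved before reaching 1 (an illustrative helper, not from the original):

```python
def count_halvings(n):
    # Repeatedly halve n until it reaches 1; the number of halvings
    # is floor(log2(n)), which is why halving-based algorithms
    # run in O(log n) time.
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps
```

For example, an input of 1024 elements is exhausted in only 10 halvings.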
Quadratic Time Complexity: O(n²)
In this type of algorithm, the running time grows in direct proportion to the square of the input size (like linear, but squared). The usual cause is a loop nested inside another loop, each running over the input.
In most scenarios and particularly for large data sets, algorithms with quadratic time complexities take a lot of time to execute and should be avoided.
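A typical O(n²) pattern is a pairwise comparison with nested loops, as in this duplicate check (a hypothetical example):

```python
def has_duplicate(items):
    # Compares every element against every later element:
    # about n*(n-1)/2 comparisons in the worst case, i.e. O(n^2).
    n = len(items)
    for i in range(n):
        for j in range(i + 1, n):
            if items[i] == items[j]:
                return True
    return False
```

For large inputs this is exactly the kind of algorithm to avoid; a hash-set-based check would do the same job in O(n) on average.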