| # | Feature | Greedy Algorithm | Dynamic Programming |
|---|---------|------------------|----------------------|
| 1 | Main Idea | The choice that seems best at the current step is made in the hope of building an optimal solution. | The decision at each step is made by considering the solution of the current sub-problem together with the solutions of previously solved sub-problems, building a globally optimal solution. |
| 2 | Approach | It computes the solution in a forward, step-by-step manner and never looks back at previous states. | It uses a bottom-up or top-down approach, breaking a complex problem into simpler sub-problems. |
| 3 | Decision Sequence | It generates a single decision sequence. | It may generate many decision sequences. |
| 4 | Recursion | The optimal solution is obtained without revising previously made choices. | It considers all possible sequences of decisions to obtain the optimal solution. |
| 5 | Result | The result depends only on the choices made so far, not on future choices or on the solutions to all sub-problems. | It evaluates the sub-problems first and considers all possible cases to obtain the optimal result. |
| 6 | Set of Solutions | It works from a particular set of feasible solutions. | There is no special set of feasible solutions. |
| 7 | Optimality | There is no guarantee of an optimal solution unless local optimality leads to global optimality. | Dynamic programming always produces an optimal solution. |
| 8 | Overlapping Sub-problems | Sub-problems in a greedy algorithm do not overlap. | Sub-problems in dynamic programming overlap. |
| 9 | Efficiency | It never alters earlier choices, making it more efficient in terms of memory. | It relies on memoization, which increases memory usage and makes it less efficient. |
| 10 | Speed | It is faster than dynamic programming. | It is comparatively slower. |
| 11 | Reliability | Less reliable, because an optimal solution is not guaranteed. | Highly reliable, because an optimal solution is guaranteed. |
| 12 | Time Complexity | Polynomial. | Polynomial, but usually worse than the greedy approach. |
| 13 | Space or Memory Complexity | Memory-efficient, since there is no need to look back or revisit previous choices; no extra space is required unless the problem explicitly demands it. | It requires a table (the DP table) for memoization, which increases space complexity in order to store the results of previous states. |
| 14 | Examples | Fractional knapsack, shortest path (Dijkstra's algorithm), Prim's algorithm, job scheduling, activity selection, Huffman coding, optimal storage on tapes, optimal merge pattern. | 0/1 knapsack, assembly line scheduling, multistage graph, matrix chain multiplication, coin change, longest increasing subsequence. |
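The contrast in rows 1, 7, and 14 can be made concrete with the fractional knapsack problem, where the greedy choice (take the highest value-to-weight ratio first) is provably optimal because items may be taken partially. The following is a minimal Python sketch; the `(value, weight)` pair representation and the function name are illustrative choices, not taken from any particular library.

```python
# Greedy sketch: fractional knapsack.
# Items may be taken partially, so repeatedly taking the item with the
# best value-per-weight ratio is optimal.

def fractional_knapsack(items, capacity):
    """items: list of (value, weight) pairs; capacity: maximum total weight."""
    # Sort by value-to-weight ratio, best first (the greedy choice).
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)        # take as much of this item as fits
        total += value * (take / weight)    # proportional share of its value
        capacity -= take
    return total

print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # 240.0
```

Note that the algorithm never revisits an earlier choice (rows 2 and 4): once an item has been taken, the decision is final.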
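For the dynamic-programming side, here is a minimal bottom-up sketch of the 0/1 knapsack problem, assuming the same `(value, weight)` representation. The one-dimensional `dp` array is the DP table referred to in row 13: `dp[w]` stores the best value achievable with capacity `w` using the items processed so far.

```python
# Dynamic-programming sketch: 0/1 knapsack with a bottom-up table.
# Each item is either taken whole or skipped, so overlapping sub-problems
# (row 8) are solved once and their results stored in dp.

def knapsack_01(items, capacity):
    """items: list of (value, weight) pairs; capacity: maximum total weight."""
    dp = [0] * (capacity + 1)
    for value, weight in items:
        # Iterate capacities downwards so each item is used at most once.
        for w in range(capacity, weight - 1, -1):
            dp[w] = max(dp[w], dp[w - weight] + value)
    return dp[capacity]

print(knapsack_01([(60, 10), (100, 20), (120, 30)], 50))  # 220
```

On this same input, a greedy pick by value-to-weight ratio would take the first two items and stop at a total value of 160, while the table yields the optimal 220: the reliability gap described in rows 7 and 11.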