In this article, we will discuss greedy methods versus dynamic programming. Both are used for optimization problems: finding the best solution from a set of feasible solutions.
A greedy algorithm is an algorithmic paradigm that assembles a solution piece by piece, always opting for the component that offers the most obvious and immediate benefit. Greedy therefore works best in situations where selecting a locally optimal choice also yields a globally optimal one. Take the fractional knapsack problem, for instance: the locally optimal strategy is to select the item with the highest value-to-weight ratio. Because we are permitted to take fractions of an item, this strategy also produces the globally optimal solution.

Dynamic programming, in turn, is mainly an optimization over plain recursion. Wherever a recursive solution makes repeated calls for the same inputs, it can be optimized using dynamic programming: the idea is simply to store the results of subproblems so they need not be recomputed later. This straightforward optimization typically reduces the time complexity from exponential to polynomial. For instance, a straightforward recursive solution for the Fibonacci numbers takes exponential time, but if we optimize it by storing the answers to the subproblems, the time complexity drops to linear.
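The Fibonacci case can be sketched in Python (a minimal illustration; the function names `fib_naive` and `fib_memo` are chosen here for clarity):

```python
from functools import lru_cache

# Naive recursion: fib(n-1) and fib(n-2) both recompute the same
# smaller subproblems, so the call tree grows exponentially with n.
def fib_naive(n):
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

# Memoized version: each fib(k) is computed once and cached,
# so the total work is linear in n.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(50))  # 12586269025, returned instantly
```

Calling `fib_naive(50)` would make billions of redundant calls, while the memoized version makes only about 50 distinct ones.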
Dynamic programming is reliable, while greedy algorithms are not always reliable.
One significant distinction between greedy algorithms and dynamic programming is the order of work: a greedy algorithm first makes its greedy choice (the choice that seems best at the time) and then solves the single subproblem that remains, without addressing all potentially related smaller subproblems, whereas dynamic programming solves the subproblems first and uses their answers to make each choice.
| Greedy Algorithm | Dynamic Programming |
| --- | --- |
| It does not guarantee an optimal solution | It always guarantees an optimal solution |
| Used when there is a greedy way of selecting a solution | Used when there are overlapping subproblems |
| It does not reconsider its choices | It keeps track of previously solved subproblems |
| The selected choice is locally optimal | The selected choice is globally optimal |
| Examples: fractional knapsack, job scheduling, Huffman coding | Examples: 0/1 knapsack problem, matrix chain multiplication |
Greedy algorithms are memory-efficient, since there is no need to revisit or modify earlier choices or values: a greedy algorithm never goes back to change a prior decision while computing the solution.
Generally speaking, they are quicker than dynamic programming methods.
An example would be Dijkstra's shortest path algorithm, which runs in O((E + V) log V) time with a binary heap.
Let's take a look at Dijkstra's algorithm to get an idea of the greedy method.
```
function dijkstra(G, S)
    for each vertex V in G
        distance[V] <- infinity
        previous[V] <- NULL
        if V != S, add V to priority queue Q
    distance[S] <- 0
    while Q is not empty
        U <- extract min from Q
        for each unvisited neighbour V of U
            tempDistance <- distance[U] + edge_weight(U, V)
            if tempDistance < distance[V]
                distance[V] <- tempDistance
                previous[V] <- U
    return distance, previous
```
Above is the pseudocode of Dijkstra's algorithm, which finds the shortest distance from a start node to every other node. At each step the algorithm greedily picks the unvisited node closest to the start vertex, in the hope that this local choice leads to the best overall solution. For graphs with non-negative edge weights this greedy strategy does yield the true shortest paths, but if negative edge weights are present, the locally best choice can be wrong and the algorithm may return incorrect distances. This illustrates that the greedy technique is only guaranteed to be optimal when the greedy-choice property actually holds for the problem.
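The pseudocode above translates fairly directly into Python. Here is a minimal sketch using the standard-library `heapq` module, assuming the graph is given as an adjacency list mapping each vertex to `(neighbour, weight)` pairs (the graph `g` below is an invented example):

```python
import heapq

def dijkstra(graph, source):
    """graph: dict mapping vertex -> list of (neighbour, weight) pairs.
    Assumes all edge weights are non-negative."""
    distance = {v: float('inf') for v in graph}
    distance[source] = 0
    pq = [(0, source)]  # min-heap of (distance, vertex)
    while pq:
        d, u = heapq.heappop(pq)
        if d > distance[u]:
            continue  # stale queue entry; a shorter path was already found
        for v, w in graph[u]:
            if d + w < distance[v]:
                distance[v] = d + w
                heapq.heappush(pq, (distance[v], v))
    return distance

g = {
    'A': [('B', 1), ('C', 4)],
    'B': [('C', 2), ('D', 6)],
    'C': [('D', 3)],
    'D': [],
}
print(dijkstra(g, 'A'))  # {'A': 0, 'B': 1, 'C': 3, 'D': 6}
```

Note the greedy step: `heappop` always commits to the currently closest vertex, and that decision is never revisited.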
In the case of dynamic programming, the time complexity is reduced from exponential to polynomial: a plain recursive solution can be transformed into a dynamic programming one by caching results. Decisions are taken at each stage using both the present subproblem and the stored answers to previously solved subproblems, and these are combined to determine the best value.
A dynamic programming solution is guaranteed to be optimal, provided the problem has optimal substructure.
Here, a globally optimal solution is picked as the best option. Dynamic programming relies on previously computed state values stored in a table, so memoization of the table is necessary, which increases the memory complexity. It also tends to run more slowly than a greedy method. An example is the Floyd-Warshall algorithm, which runs in O(V^3) time.
Let us take a look at Floyd-Warshall's algorithm.
```
n = number of vertices
A = matrix of dimension n x n, initialised with edge weights
for k = 1 to n
    for i = 1 to n
        for j = 1 to n
            Ak[i, j] = min(Ak-1[i, j], Ak-1[i, k] + Ak-1[k, j])
return A
```
Here, we initialize a matrix that keeps track of the distance between each pair of nodes, and the shortest path between any two nodes is updated with each iteration. This is typical of dynamic programming: an array or matrix stores the solutions to subproblems so they can be fetched whenever needed (which is exactly what overlapping subproblems require).
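A compact Python version of the algorithm above might look like this (the input matrix `m` is an invented example; `INF` marks missing edges):

```python
INF = float('inf')

def floyd_warshall(dist):
    """dist: n x n matrix where dist[i][j] is the direct edge weight
    from i to j (INF if there is no edge, 0 on the diagonal).
    Returns the matrix of shortest-path distances between all pairs."""
    n = len(dist)
    d = [row[:] for row in dist]  # copy; d[i][j] is the DP table
    for k in range(n):            # allow vertex k as an intermediate stop
        for i in range(n):
            for j in range(n):
                # reuse the already-computed subproblem answers d[i][k], d[k][j]
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

m = [
    [0,   3,   INF, 7],
    [8,   0,   2,   INF],
    [5,   INF, 0,   1],
    [2,   INF, INF, 0],
]
for row in floyd_warshall(m):
    print(row)
```

Each cell `d[i][j]` is an overlapping subproblem: it is read many times across iterations of `k`, which is why storing the table pays off.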
**So why not just choose DP always?**
We don't use dynamic programming for every problem because greedy, when it gives the right answer, is faster: it solves only one subproblem at each step. Dynamic programming, for its part, pays off only when the problem has overlapping subproblems to reuse.
**How do we choose between greedy and DP?**
If the problem satisfies the greedy-choice property, it is advisable to solve it with the greedy approach.
If it contains overlapping subproblems, use dynamic programming to solve it.
**What to do if a problem can be solved by both the greedy technique and DP?**
There are problems, such as coin change, that can be solved by both the greedy approach and dynamic programming. In such cases it may be better to use the greedy algorithm because it is faster: it solves only the subproblems along its greedy path, whereas dynamic programming solves all the subproblems. The caveat is that greedy is only correct for some inputs, such as canonical coin systems.
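The trade-off can be seen concretely. Below is a minimal sketch of both approaches to coin change (fewest coins summing to a target); the coin systems used are invented examples:

```python
def coin_change_greedy(coins, amount):
    """Greedy: repeatedly take the largest coin that fits.
    Optimal for canonical systems like [1, 5, 10, 25], but not in general."""
    count = 0
    for c in sorted(coins, reverse=True):
        count += amount // c
        amount %= c
    return count if amount == 0 else None

def coin_change_dp(coins, amount):
    """DP: best[a] = fewest coins summing to a, built bottom-up."""
    INF = float('inf')
    best = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1
    return best[amount] if best[amount] < INF else None

# Canonical coin system: greedy and DP agree.
print(coin_change_greedy([1, 5, 10, 25], 63))  # 6 (25+25+10+1+1+1)
print(coin_change_dp([1, 5, 10, 25], 63))      # 6
# Non-canonical system: greedy takes 9+1+1+1 (4 coins), DP finds 6+6 (2 coins).
print(coin_change_greedy([1, 6, 9], 12))       # 4
print(coin_change_dp([1, 6, 9], 12))           # 2
```

The greedy version does O(number of coin denominations) work, while the DP version fills a table of size `amount`; the price of greedy's speed is that it is only correct when the coin system cooperates.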
In conclusion: use the greedy approach when the greedy-choice property holds, and dynamic programming when the problem has overlapping subproblems and optimal substructure.