Introduction to Dynamic Programming


Reading time: 30 minutes | Coding time: 10 minutes

Whether you dream of becoming a seasoned competitive programmer or want to dive into the field of software development, you need to be thorough with "Data Structures and Algorithms", and the technique we will be discussing today is one of the most important ones to master them. I am talking about Dynamic Programming!

In today's post, we will cover the following:

  • The crux of Dynamic Programming (DP hereafter)
  • Different methodologies for solving DP problems
  • How to identify a DP problem?
  • Hands-on with a DP problem

What is Dynamic Programming?

Dynamic Programming is essentially based on the idea of doing smart work over hard work. It is a technique to solve a complex problem by breaking it down into smaller sub-problems recursively. The smartness lies in remembering the solutions to these sub-problems so that they can be reused for similar sub-problems, thereby eliminating the need to recompute them in the future.

DP is simply an optimization over the naive recursive approach, often reducing the exponential time complexity of a recursive solution to polynomial time complexity.
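
For contrast, here is a minimal sketch of the naive recursive approach for the Fibonacci numbers, the same problem we will memoize in the next section (naive_fib is just an illustrative name); every call recomputes its two subproblems from scratch, which is exactly what DP avoids.

def naive_fib(n):
    # Base case
    if n == 0 or n == 1:
        return n

    # Each call spawns two more calls, so the total number of calls
    # grows roughly like 2^n
    return naive_fib(n - 1) + naive_fib(n - 2)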

Different methods for solving DP problems

There are mainly 2 ways to approach a DP problem:

  1. Top Down or Memoization technique
  2. Bottom Up or Tabulation technique

Top down approach

This is the simplest approach to come up with: we start from the top-most state and break the problem down. As we traverse down, we either come across an already solved subproblem, for which we simply return the saved solution, or we come across an unsolved one, which we compute and then save in memory (hence the name memoization).

Let's write a program for the Fibonacci series using the memoization technique.

def fib(n, tab):

    # Base case
    if n == 0 or n == 1:
        tab[n] = n

    # If the value has not been calculated previously, calculate it
    if tab[n] is None:
        tab[n] = fib(n-1, tab) + fib(n-2, tab)

    return tab[n]
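
The function assumes that tab is a list of size n+1 whose entries are initialised to None, meaning "not computed yet". A minimal driver, shown only as an illustration, could look like this:

n = 10
tab = [None] * (n + 1)   # None marks subproblems that are not solved yet
print(fib(n, tab))       # prints 55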

Bottom Up Approach

In this approach, we start from the most trivial subproblems and work our way up to the main problem. This ensures that the solutions to all the subproblems are computed and tabulated (hence the name tabulation method) before the given problem itself.

Fibonacci program using the tabulation method

def fib(n):
    f = [0]*(n+1)

    # base cases: f[0] = 0 (from the initialization above), f[1] = 1
    if n > 0:
        f[1] = 1

    for i in range(2, n+1):
        f[i] = f[i-1] + f[i-2]
    return f[n]
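
As a side note, each entry of the table depends only on the previous two entries, so the table can be replaced by two variables. The following constant-space variant is only a sketch (fib_constant_space is an illustrative name):

def fib_constant_space(n):
    # keep only f(i-2) and f(i-1) instead of the whole table
    prev, curr = 0, 1
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr
    return curr if n > 0 else 0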

How to identify a DP problem?

In order to apply dynamic programming to a problem, we first need to determine whether it can be solved using this concept or not. Now that we have a basic idea of what DP is, let's think about which properties we should look for.
First, we want the problem to have subproblems that repeat in a recursive manner; otherwise there is no point in solving a subproblem and saving its result if it is never needed again.
Second, we want to be able to build an optimal solution to the main problem from optimal solutions of its subproblems, which means that we are targeting global optimization and not just local optimization.
More formally, these two properties are called Overlapping subproblems and Optimal substructure respectively.

Let's discuss them with an example

  1. Overlapping subproblems:
    Let's see how the Fibonacci series obeys this property
                          fib(5)
                     /             \
               fib(4)                fib(3)
             /      \                /     \
         fib(3)      fib(2)         fib(2)    fib(1)
        /     \        /    \       /    \
  fib(2)   fib(1)  fib(1) fib(0) fib(1) fib(0)
  /    \
fib(1) fib(0)

Here, we can see that values like fib(3), fib(2), fib(1) and fib(0) are repeated, so computing and saving them once saves time when they are needed again later.

  2. Optimal substructure:
    Let's consider the problem of All Pair Shortest Paths. If an intermediate node x lies on the shortest path from a source node v1 to a destination node v2, then that shortest path is simply the shortest path from v1 to x followed by the shortest path from x to v2. In other words, optimal solutions to the subproblems combine into an optimal solution of the main problem.
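
The classic algorithm built on this observation is Floyd-Warshall, which we won't cover further in this article; the sketch below only illustrates how the optimal substructure is used (it assumes dist is an adjacency matrix with float('inf') for missing edges and 0 on the diagonal).

def all_pairs_shortest_paths(dist):
    # dist[i][j] is the direct edge weight from i to j
    # (float('inf') if there is no edge, 0 when i == j)
    n = len(dist)
    d = [row[:] for row in dist]   # work on a copy
    for x in range(n):             # x is the intermediate node
        for v1 in range(n):
            for v2 in range(n):
                # optimal substructure: a shortest v1 -> v2 path through x
                # is a shortest v1 -> x path followed by a shortest x -> v2 path
                if d[v1][x] + d[x][v2] < d[v1][v2]:
                    d[v1][v2] = d[v1][x] + d[x][v2]
    return d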

Coding Time

Let's take up the problem of finding the number of times a string occurs as a subsequence in a given string.

Consider the string OpenGenus. Now, we want to find the number of occurrences in it of another string, let's say en, where the matched characters may be contiguous or not.

On analyzing it, we find that the answer in this case should be 3 (the matched characters are shown in parentheses, and a brute-force sanity check follows the list):

  1. Op(e)(n)Genus
  2. Op(e)nGe(n)us
  3. OpenG(e)(n)us
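
Here is the brute-force sanity check mentioned above: it tries every choice of positions in the first string and counts those that spell out the second string. This is only a sketch for tiny inputs (brute_force_count is an illustrative name; the approach is exponential in general).

from itertools import combinations

def brute_force_count(text, pattern):
    # try every set of len(pattern) positions in text, in increasing order,
    # and check whether the characters at those positions spell the pattern
    count = 0
    for idx in combinations(range(len(text)), len(pattern)):
        if all(text[i] == c for i, c in zip(idx, pattern)):
            count += 1
    return count

print(brute_force_count("OpenGenus", "en"))   # prints 3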

Let's see how to approach this:
We simply need to start processing the strings either from the left or the right end and then check whether the last characters of the strings under consideration match or not.

Algorithm:
Let the length of the first string be m and that of the second be n, and suppose we are traversing from the right end.

If the last characters of the two strings match, we can do two things:
a) Consider the last characters as matched and count the occurrences in the remaining strings, i.e. recurse for lengths m-1 and n-1.
b) Ignore the last character of the first string and recurse for lengths m-1 and n.
The total count in this case is the sum of (a) and (b).

Else, if the characters don't match:
We simply ignore the last character of the first string and recursively solve for lengths m-1 and n.
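
This rule translates almost line by line into a plain (un-memoized) recursive function. The sketch below is only meant to mirror the description above (count_recursive is an illustrative name):

def count_recursive(a, b, m, n):
    # number of times b[0..n-1] occurs as a subsequence of a[0..m-1]
    if n == 0:
        return 1    # the empty second string occurs exactly once
    if m == 0:
        return 0    # a non-empty pattern cannot occur in an empty string
    if a[m - 1] == b[n - 1]:
        # either use the matching last characters, or skip the last character of a
        return count_recursive(a, b, m - 1, n - 1) + count_recursive(a, b, m - 1, n)
    # last characters differ: skip the last character of a
    return count_recursive(a, b, m - 1, n)

# count_recursive("OpenGenus", "en", 9, 2) returns 3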

We will understand the algorithm through the following example:
First string = "OpenGenus"
Second string = "en"
We need to count the number of times the second string occurs as a subsequence of the first.

Let's construct a matrix to understand this ("" denotes the empty prefix of a string):

       ""   e   n
  ""    1   0   0
  O     1   0   0
  p     1   0   0
  e     1   1   0
  n     1   1   1
  G     1   1   1
  e     1   2   1
  n     1   2   3
  u     1   2   3
  s     1   2   3

Explanation:

  1. Here, the characters of the first string label the rows and those of the second string label the columns.
  2. The next step is to fill the 0th row with zeros, except for the first element, and to fill the 0th column with 1's. This is in accordance with our base conditions.
  3. Now we fill the matrix elements one by one, based on whether the characters at the corresponding indices match or not.
  4. If they don't match, we simply copy the value from the previous row, same column.
  5. If they match, we add the value at the previous row and previous column to the value at the previous row and same column.
  6. Lastly, we return the value at the last row and last column.
  7. As we can see, it is 3 in this case, which agrees with the count we found by hand and illustrates how dynamic programming handles such problems.

Pseudocode:

  1. Base conditions
    If the first string is empty, the first row should be filled with 0's, that is matrix[0][i] = 0.
    If the second string is empty, the first column should be filled with 1's, that is matrix[i][0] = 1.
  2. Filling the dynamic programming matrix
    Traversing the strings from either the left or the right end,
    if string_1[i-1] == string_2[j-1], then matrix[i][j] = matrix[i-1][j-1] + matrix[i-1][j] (i and j are index iterators over the two strings),
    else matrix[i][j] = matrix[i-1][j].
  3. Return the value at the last row and last column of the matrix, that is matrix[m][n].

Code:

# Let's use the tabulation method of DP to program this

def occur(a, b):
    m = len(a)
    n = len(b)

    tab = [[0]*(n+1) for i in range(m+1)]

    # Corner cases
    # if the first string is empty, a non-empty second string cannot occur in it
    for i in range(n+1):
        tab[0][i] = 0

    # if the second string is empty, it occurs exactly once
    for i in range(m+1):
        tab[i][0] = 1
        
    # filling the tab[][] in bottom up manner
    for i in range(1, m + 1): 
        for j in range(1, n + 1): 
            if a[i - 1] == b[j - 1]: 
                tab[i][j] = tab[i - 1][j - 1] + tab[i - 1][j]  
                  
            else: 
                tab[i][j] = tab[i - 1][j] 
  
    return tab[m][n] 
    
    
if __name__ == '__main__': 
    a = "OpenGenus"
    b = "en"
  
    print(occur(a, b)) 

Output: 3

Complexity analysis:

  • The time complexity of the above solution is O(mn), due to the two nested for loops.
  • The space complexity is O(mn), due to the auxiliary space occupied by the table (a space-optimised variant is sketched below).
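
Since each row of the table depends only on the row above it, the space can in fact be reduced to O(n) by keeping a single row. The following is only a sketch of that idea (occur_linear_space is an illustrative name):

def occur_linear_space(a, b):
    m, n = len(a), len(b)
    row = [1] + [0] * n    # row for the empty prefix of a
    for i in range(1, m + 1):
        # go right to left so row[j-1] still holds the previous row's value
        for j in range(n, 0, -1):
            if a[i - 1] == b[j - 1]:
                row[j] = row[j - 1] + row[j]
    return row[n]

# occur_linear_space("OpenGenus", "en") returns 3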

With this, we come to the end of this article :)
I hope you are now clear on the idea of dynamic programming and are ready to apply it to other problems!


If you found the article useful, do share it with others!
