# Jump Game II: Minimum number of jumps to last index



In this post, we will explore various ways to solve the minimum number of jumps to reach the last index (Jump Game II) problem, using ideas from Dynamic Programming and Greedy Algorithms.

1. Problem Statement
2. Naive Approach
3. Approach 2: Dynamic programming (Bottom Up)
4. Approach 3: Dynamic programming (Top Down)
5. Optimal Greedy Approach

Prerequisites: Dynamic Programming, Greedy Algorithms, nth Fibonacci

This problem is similar to Leetcode problem 45. Jump Game II.

## Problem Statement

We are given an array of non-negative integers, and initially we are positioned at the first index of the array.
Each element represents the maximum jump length from that position.
The goal is to reach the last index in the minimum number of jumps.
Assume that you can always reach the last index.

#### Example 1

Input: [2, 3, 1, 1, 4]
Ways to reach the last index:
1. 2 -> 1 -> 1 -> 4, 3 steps
2. 2 -> 3 -> 4, 2 steps

Output: 2

Explanation:

• Jump 1 step to 3
• Jump 3 steps to 4, final index

#### Example 2

Input: [1, 1, 2, 1, 2, 1, 9]
Ways to reach the last index:
1. 1 -> 1 -> 2 -> 2 -> 9, 4 steps
2. 1 -> 1 -> 2 -> 1 -> 2 -> 9, 5 steps
3. 1 -> 1 -> 2 -> 2 -> 1 -> 9, 5 steps
4. 1 -> 1 -> 2 -> 1 -> 2 -> 1 -> 9, 6 steps

Output: 4

Explanation:

• Jump 1 step (index 0 → 1)
• Jump 1 step (index 1 → 2)
• Jump 2 steps (index 2 → 4)
• Jump 2 steps (index 4 → 6), the final index

## Naive Approach

In this approach, we use recursion to solve the problem.
The base case triggers when the algorithm reaches the last index, at which point the recursion terminates.
From each position, the algorithm recursively tries every element reachable from it. That is, it explores all branches of the recursion tree and returns the minimum number of jumps to reach the last index.

Notice that the same values are recomputed repeatedly across different branches of the recursion tree. We shall see how to avoid this in the next approach.

#### Algorithm

1. Initialize jumps to a maximum value.
2. From the start index, recursively call the function for every element reachable from it, keeping track of the minimum.
3. Return the minimum.

#### Code

```cpp
int minJumps(vector<int> &nums, int l) {
    // Base case: reached (or passed) the last index, no more jumps needed.
    if (l >= (int)nums.size() - 1)
        return 0;

    int jumps = INT_MAX;

    // Try every position reachable from l and keep the minimum.
    for (int i = l + 1; i <= l + nums[l]; i++) {
        int sub = minJumps(nums, i);
        if (sub != INT_MAX)   // guard against overflow when a branch gets stuck on a 0
            jumps = min(jumps, 1 + sub);
    }

    return jumps;
}

int jump(vector<int>& nums) {
    return minJumps(nums, 0);
}
```

### Analysis

• The time complexity is $O\left({2}^{n}\right)$: from each element the algorithm can branch into many recursive calls, so the number of explored paths grows exponentially with the size of the list.
• Space complexity is $O(1)$ auxiliary space, not counting the $O(n)$ stack space used by the recursive calls.

## Approach 2: Dynamic programming (Bottom Up)

In this approach, the algorithm optimizes the naive recursive solution down to quadratic time complexity, which is a notable improvement.
Before we get to it, a little about dynamic programming: we use dynamic programming to optimize recursive problems in which a recursive function is called for repeated inputs, resulting in unnecessary re-computations. Generally, a dynamic programming problem has the following characteristics:

• Overlapping subproblems
• Optimal substructure

Our problem has overlapping subproblems in that finding the global solution involves solving the same subproblems multiple times. Can you spot the overlapping subproblems in the recursion tree of the naive approach?

The optimal substructure property means the overall optimal solution can be constructed from optimal solutions of the subproblems: the minimum number of jumps to reach the last index is built from the minimum number of jumps to reach earlier indices.
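Concretely, this optimal substructure can be written as a recurrence, where $store[i]$ denotes the minimum number of jumps needed to reach index $i$:

$$store[0] = 0, \qquad store[i] = \min_{\substack{0 \le j < i \\ j + nums[j] \ge i}} \left( store[j] + 1 \right)$$

That is, index $i$ is reached by one extra jump from the best among all earlier indices $j$ that can reach it.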

The dynamic programming approach reduces the time complexity of problems with exponential recursive solutions to polynomial time. We will see how to optimize the naive approach to quadratic time complexity; it is not better than the greedy approach we shall see later, but it is a good example of how dynamic programming works.

Note: There are cases where the greedy approach does not apply and the initial naive solution has an exponential time complexity. An example is finding the nth Fibonacci number: the recursive approach yields exponential time complexity, but with dynamic programming it reduces to linear time.

Back to the jump game problem. To solve it with dynamic programming, we store the results of subproblems in an array to eliminate repeated work. This is called tabulation: results are stored in a table (array).

We solve a subproblem, then use its result to solve the next subproblem, and so on, building up the final solution step by step.
You can read more on dynamic programming approaches at the link provided at the end of this post.

#### Example

Given the array [2, 3, 1, 1, 4]:
We create an auxiliary array store[].
We fill the first index with 0, store = [0, ...], denoting that 0 jumps are needed to reach the first index.
After the first iteration, store = [0, 1, ...], denoting 1 jump to reach the second index.
Continuing, store fills up to [0, 1, 1, 2, 2]: indices 1 and 2 are reachable in 1 jump from index 0, and indices 3 and 4 in 2 jumps. For each index i, the inner loop breaks at the first earlier index that can reach i, because that index already holds the smallest jump count.
We return the last element of store, which is the minimum number of jumps to reach the last index.

#### Algorithm

1. Create an auxiliary array to store the minimum jumps needed to reach nums[i] from nums[0].
2. Implement a nested loop: the outer loop traverses the whole list, while the inner loop scans the earlier indices that can reach the current one.
3. At each step, the algorithm finds the minimum number of jumps to reach nums[i] from the start and stores this value in the auxiliary array.
4. Return the last element of the auxiliary array, which is the minimum number of jumps.

#### Code

```cpp
int jump(vector<int>& nums) {
    int n = nums.size();
    vector<int> store(n);
    store[0] = 0;   // 0 jumps needed to reach the first index

    for (int i = 1; i < n; i++) {
        store[i] = INT_MAX;
        for (int j = 0; j < i; j++) {
            // The first j that can reach i already gives the minimum,
            // since store[] is non-decreasing; break immediately.
            if (i <= nums[j] + j && store[j] != INT_MAX) {
                store[i] = min(store[i], store[j] + 1);
                break;
            }
        }
    }
    return store[n - 1];
}
```

#### Analysis

• The time complexity is $O\left({n}^{2}\right)$: for each of the n elements, the inner loop may scan up to n earlier elements.
• Space complexity is $O(n)$: additional space is needed for the auxiliary array.

## Approach 3: Dynamic programming (Top Down)

In this approach, we recursively find solutions to smaller subproblems and use them to solve the bigger problem.
Whenever we solve a subproblem recursively, we cache its result in an auxiliary array; when it is needed again, the recursive call takes constant time to retrieve the result from the array. This is called memoization.
We use the same recursive algorithm as in the naive implementation, but additionally implement caching; we use the auxiliary array store[] for that.
store[] holds the results of previous recursive calls, which are reused in subsequent steps of the algorithm.

#### Algorithm

1. Initialize an auxiliary array that will store results from recursive calls.
2. The algorithm solves the problem recursively just like in the naive approach but instead of repeated computations, results are stored and used for subsequent computations.

#### Code

```cpp
vector<int> store;

int minJumps(vector<int> &nums, int start) {
    if (start >= (int)nums.size() - 1)
        return 0;
    if (store[start])          // non-zero means this subproblem is already solved
        return store[start];

    int minJump = 10000;       // sentinel larger than any possible answer
    for (int i = start + 1; i <= start + nums[start]; i++)
        minJump = min(minJump, 1 + minJumps(nums, i));
    store[start] = minJump;    // cache the result for later calls

    return store[start];
}

int jump(vector<int>& nums) {
    store = vector<int>(nums.size());
    return minJumps(nums, 0);
}
```

Note: We use 10000 as the sentinel because the problem constraints bound the input list length, and therefore the answer, below this value.

#### Analysis

• The time and space complexity are the same as in the bottom-up approach: $O\left({n}^{2}\right)$ time and $O(n)$ space, plus the recursion stack.

## Optimal Greedy Approach

In this approach we use a greedy algorithm, which makes an optimal choice at each step, building the solution piece by piece: from the current segment of positions we determine the farthest index the next jump can reach.

#### Example

Given the array [1, 1, 2, 1, 2, 1, 9]:
We initialize left and right pointers l and r to the start and end of the current segment (initially both 0), and jumps, which counts the jumps made.
While r is less than the last index:

First iteration
segment: index 0, value [1]
maxReach = max(0, 0 + 1) = 1
l = r + 1 = 1, r = maxReach = 1
jumps = 0 + 1 = 1

Second iteration
segment: index 1, value [1]
maxReach = max(0, 1 + 1) = 2
l = r + 1 = 2, r = maxReach = 2
jumps = 1 + 1 = 2

Third iteration
segment: index 2, value [2]
maxReach = max(0, 2 + 2) = 4
l = r + 1 = 3, r = maxReach = 4
jumps = 2 + 1 = 3

Fourth iteration
segment: indices 3 to 4, values [1, 2]
maxReach = max(3 + 1, 4 + 2) = 6
l = r + 1 = 5, r = maxReach = 6
jumps = 3 + 1 = 4

Now r is at the last index, so the while condition fails and jumps = 4 is returned.

#### Algorithm:

1. Initialize left and right pointers to the start and end of a segment; a segment refers to the range of positions reachable by the current jump. Initialize a jumps variable to store the number of jumps made so far.
2. While the right pointer has not reached the end of the list:
• loop through the segment and maximize the reach;
• update the pointers: the left pointer moves to right + 1 and the right pointer moves to the maximum reach;
• increment jumps.
3. The loop terminates when the right pointer is at the last index; return jumps, which is the minimum.

#### Code

```cpp
int jump(vector<int>& nums) {
    int l = 0, r = 0, jumps = 0;
    int n = nums.size();
    while (r < n - 1) {
        int maxReach = 0;
        // Scan the current segment [l, r] for the farthest reachable index.
        for (int i = l; i <= r; i++)
            maxReach = max(maxReach, nums[i] + i);
        l = r + 1;        // the next segment starts right after r
        r = maxReach;     // ...and ends at the farthest reach
        jumps += 1;
    }
    return jumps;
}
```

### Analysis

• Time complexity is linear, O(n): although there is a nested loop, the segments scanned by the inner loop are disjoint, so every element of the list is visited exactly once overall.
• Space complexity is O(1) no extra space is used.

### Questions

Another simple problem that uses the dynamic programming approach is the nth Fibonacci number. Can you solve it recursively, draw the recursion tree, analyze it, and then optimize it using dynamic programming?

With this article at OpenGenus, you must have the complete idea of solving the problem Jump Game II: Minimum number of jumps to last index.

#### Erick Lumunge

Erick Lumunge is a Computer Science Student at Kenya Methodist University. He is a passionate programmer who loves to solve real world problems using code.
