In this article, we will discuss what the Divide and Conquer technique is and how it is helpful. We will walk through several examples of algorithms that use the Divide and Conquer approach.
What we'll see,
- Divide and Conquer
- Quick sort algorithm
- Strassen's Matrix multiplication
- Karatsuba Algorithm for fast multiplication
Divide And Conquer
The Divide and Conquer technique can be divided into three parts:
- Divide: The original problem is divided into smaller sub-problems similar to the initial problem.
- Conquer: The sub-problems are solved recursively until they become small enough to solve directly.
- Combine: The solutions of the sub-problems are combined to obtain the final solution of the original problem.
Many a time, the divide and conquer paradigm is used to find an optimal solution to a problem. The basic idea is to decompose a given problem into smaller similar instances that can be solved with a straightforward method, solve those smaller problems, and compose their solutions to solve the original problem. Problems of sufficient simplicity are solved directly. For example, to sort a given list of n natural numbers, split it into two lists of about n/2 numbers each, sort each of them in turn, and interleave both results appropriately to obtain the sorted version of the original list. This algorithmic approach is commonly known as the merge sort algorithm.
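The three steps just described can be sketched as a minimal merge sort in Python (function name and structure are illustrative, not a definitive implementation):

```python
def merge_sort(nums):
    """Sort a list with the divide, conquer, and combine steps."""
    if len(nums) <= 1:                  # sufficiently simple: solved directly
        return nums
    mid = len(nums) // 2
    left = merge_sort(nums[:mid])       # divide and conquer each half
    right = merge_sort(nums[mid:])
    merged = []                         # combine: interleave the sorted halves
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

Each level of recursion does O(n) merging work across O(log n) levels, giving the O(n log(n)) bound mentioned below.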
The divide and conquer technique is used in many algorithms, for example:
- Merge sort is a sorting algorithm that divides the array into two halves, recursively sorts each of them, and finally merges the two sorted halves. It has a time complexity of O(n log(n)) in every configuration: best, worst, and average.
- The Cooley-Tukey Fast Fourier Transform algorithm is the classic algorithm for the Fast Fourier Transform (FFT). Internally it uses the divide and conquer approach and runs in O(n log(n)) time.
- Closest Pair of Points is another standard problem solved using divide and conquer. Given a set of points in the xy coordinate plane, we have to find the closest pair of points. The problem can be solved in quadratic time by computing the distance of every pair of points and taking the minimum, but the divide and conquer solution runs in O(n log(n)) time, much faster than the naive approach.
- Tower of Hanoi is a problem in which there are three towers, with N blocks stacked in decreasing order of size on tower A, and we have to move all of these blocks to tower C with the help of tower B. The rules are that only one block may be moved at a time and a block can never be placed on top of a smaller block.
- The divide and conquer approach can be used to find the maximum and minimum elements in a given array.
- The divide and conquer technique is used to find the convex hull of a set of points.
- The problem of finding the median of the sorted merge of two given sorted sequences can be solved using divide and conquer.
- A binary tree can be constructed from its in-order and post-order traversals.
- The divide and conquer concept is used to solve the Wiggle sort problem (sorting a sequence of numbers into a wave form).
- Searching for an element in a 2-dimensional array can be done using the divide and conquer approach.
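To make the recursive structure of one of these concrete, here is a minimal Tower of Hanoi sketch (the tower labels and the `moves` list are illustrative choices):

```python
def hanoi(n, source, target, helper, moves):
    """Record the moves needed to shift n blocks from source to target."""
    if n == 0:
        return
    hanoi(n - 1, source, helper, target, moves)  # divide: clear the n-1 smaller blocks
    moves.append((source, target))               # move the largest block directly
    hanoi(n - 1, helper, target, source, moves)  # combine: restack the n-1 blocks on top

moves = []
hanoi(3, 'A', 'C', 'B', moves)
# a tower of n blocks needs 2^n - 1 moves, so 7 moves here
```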
Quick sort algorithm
The quick sort algorithm is a divide and conquer algorithm: it picks an element as the pivot and partitions the given array around the picked pivot. The pivot can be chosen from the array in the following ways:
- the first element of the array,
- the last element of the array,
- a random element of the array.
The key process in quick sort is partition(). Given an array and one of its elements as the pivot, the goal of partition() is to put the pivot at its correct position in the sorted array, with all smaller elements before it and all greater elements after it. This is done in linear time.
Python code for quicksort algorithm
```python
def partition(array, low, high):
    pivot = array[high]
    i = low - 1
    for j in range(low, high):
        if array[j] <= pivot:
            i = i + 1
            (array[i], array[j]) = (array[j], array[i])
    (array[i + 1], array[high]) = (array[high], array[i + 1])
    return i + 1

def quick_sort(array, low, high):
    if low < high:
        pi = partition(array, low, high)
        quick_sort(array, low, pi - 1)
        quick_sort(array, pi + 1, high)
```
Strassen's Matrix Multiplication Algorithm
Strassen's Matrix Multiplication is a technique used to multiply two matrices of size (n x n) each. It can be performed only on square matrices where n is a power of 2, i.e., log2(n) is an integer. For the explanation, let us take two matrices named A and B. The algorithmic steps for Strassen's Matrix Multiplication are as follows:
A = [[a, b], [c, d]]
B = [[e, f], [g, h]]
- First, divide the (n x n) matrices A and B into 4 sub-matrices of size (n/2 x n/2) each.
- Then, recursively calculate the following values: ae + bg, af + bh, ce + dg, and cf + dh. These are the four quadrants of the resultant matrix, which we name C.
So, the straightforward divide and conquer idea is to split each of A and B into 4 sub-matrices and recursively compute the sub-matrices of C, ultimately obtaining the full matrix C.
Consider the matrices A and B as defined above, and let C = A x B.
There will be 8 recursive calls:
a * e
b * g
a * f
b * h
c * e
d * g
c * f
d * h
This is the basic O(n^3) strategy: applying the Master Theorem to the recurrence T(n) = 8T(n/2) + O(n^2) gives a runtime of O(n^3).
According to Strassen, we do not need to execute 8 recursive calls to obtain the matrix C; 7 recursive calls are enough, at the cost of some extra addition and subtraction operations.
According to him, those 7 products are as follows:
p1 = a * (f - h)
p2 = (a + b) * h
p3 = (c + d) * e
p4 = d * (g - e)
p5 = (a + d) * (e + h)
p6 = (b - d) * (g + h)
p7 = (a - c) * (e + f)
The quadrants of the resultant matrix C are then obtained as:
c11 = p5 + p4 - p2 + p6 (= ae + bg)
c12 = p1 + p2 (= af + bh)
c21 = p3 + p4 (= ce + dg)
c22 = p1 + p5 - p3 - p7 (= cf + dh)
The time complexity using the Master Theorem:
T(n) = 7T(n/2) + O(n^2) = O(n^(log2 7))
Mathematically, this comes out to approximately O(n^2.8074), which is better than O(n^3).
- First, divide matrix A and matrix B into 4 sub-matrices of size (n/2 x n/2) each, as shown above.
- Then, calculate the 7 matrix products recursively.
- Next, compute the sub-matrices of C from those products.
- Finally, combine these sub-matrices to obtain the resultant matrix C.
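Putting these steps together, here is a minimal pure-Python sketch of Strassen's method for square matrices whose size is a power of 2 (the helper names `split`, `add`, and `sub` are illustrative choices, not a standard API):

```python
def add(X, Y):
    """Element-wise sum of two equal-sized matrices."""
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def sub(X, Y):
    """Element-wise difference of two equal-sized matrices."""
    return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def split(M):
    """Split an n x n matrix (n a power of 2) into four n/2 x n/2 quadrants."""
    n = len(M) // 2
    return ([row[:n] for row in M[:n]], [row[n:] for row in M[:n]],
            [row[:n] for row in M[n:]], [row[n:] for row in M[n:]])

def strassen(A, B):
    if len(A) == 1:                        # base case: 1 x 1 matrices
        return [[A[0][0] * B[0][0]]]
    a, b, c, d = split(A)
    e, f, g, h = split(B)
    p1 = strassen(a, sub(f, h))            # the 7 recursive products
    p2 = strassen(add(a, b), h)
    p3 = strassen(add(c, d), e)
    p4 = strassen(d, sub(g, e))
    p5 = strassen(add(a, d), add(e, h))
    p6 = strassen(sub(b, d), add(g, h))
    p7 = strassen(sub(a, c), add(e, f))
    c11 = add(sub(add(p5, p4), p2), p6)    # combine into the quadrants of C
    c12 = add(p1, p2)
    c21 = add(p3, p4)
    c22 = sub(sub(add(p1, p5), p3), p7)
    top = [r1 + r2 for r1, r2 in zip(c11, c12)]
    bottom = [r1 + r2 for r1, r2 in zip(c21, c22)]
    return top + bottom
```

In practice a real implementation would fall back to the ordinary O(n^3) method below a size threshold, since the constant factors of the 7-product scheme only pay off for large matrices.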
Time complexity for the algorithm
- The time complexity of Strassen's multiplication is O(n^2.8074), an improvement over the O(n^3) of the naive method.
- The recursion of this algorithm is O(log(n)) levels deep.
Karatsuba Algorithm for fast multiplication
The Karatsuba algorithm is a divide and conquer algorithm for fast multiplication. Consider the problem where we are given two binary strings representing the values of two integers and must find their product. For example, if the first bit string is "1100" and the second bit string is "1010", the output should be 120, as 1100 and 1010 are 12 and 10 in decimal respectively. For simplicity, let the two strings have the same length n. The naive approach follows the process we studied in school: one by one, take each bit of the second number, multiply it with all bits of the first number, and finally add up all of the partial products. This algorithm takes O(nm) time, where n and m are the lengths of the given binary strings.
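The grade-school baseline can be sketched as a shift-and-add loop (the function name is illustrative; `int(x, 2)` simply converts the first bit string to an integer so it can be shifted):

```python
def naive_binary_multiply(x, y):
    """Multiply two binary strings bit by bit, grade-school style."""
    result = 0
    # take each bit of the second number, least significant first
    for i, bit in enumerate(reversed(y)):
        if bit == '1':
            # shift the first number by the bit's position and accumulate
            result += int(x, 2) << i
    return result
```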
Using the divide and conquer approach, we can multiply two integers with a lower time complexity than this naive O(nm). We divide the given binary numbers into two halves. Let the given numbers be A and B, and for simplicity assume that both strings have the same length n and that n is even.
A = al * 2^(n/2) + ar
B = bl * 2^(n/2) + br
where al and bl contain the leftmost n/2 bits of A and B, and ar and br contain the rightmost n/2 bits.
The product AB can be written as:
AB = (al * 2^(n/2) + ar)(bl * 2^(n/2) + br)
AB = 2^n * al*bl + 2^(n/2) * (al*br + ar*bl) + ar*br
Looking at the above formula, there are four multiplications of size n/2: we have divided the problem of size n into four sub-problems of size n/2. This alone does not help, as the solution of the recurrence T(n) = 4T(n/2) + O(n) is bounded by O(n^2). The crucial step of the algorithm is to rewrite the middle two terms in another form, so that a single extra multiplication suffices. The trick for the middle two terms is as follows:
al*br + ar*bl = (al + ar)(bl + br) - al*bl - ar*br
This makes the final value of AB:
AB = 2^n * al*bl + 2^(n/2) * [(al + ar)(bl + br) - al*bl - ar*br] + ar*br
With the above trick, the recurrence becomes T(n) = 3T(n/2) + O(n), whose solution is O(n^(log2 3)) ≈ O(n^1.59), much less than O(n^2).
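Plugging the article's example (A = 1100, B = 1010, n = 4) into the formula confirms the decomposition; the variable names mirror the text:

```python
# Halves of A = 1100 and B = 1010 (n = 4 bits each)
al, ar = 0b11, 0b00   # leftmost and rightmost n/2 bits of A
bl, br = 0b10, 0b10   # leftmost and rightmost n/2 bits of B
n = 4

# One multiplication replaces the two middle products al*br and ar*bl
middle = (al + ar) * (bl + br) - al * bl - ar * br

AB = 2**n * al * bl + 2**(n // 2) * middle + ar * br
# AB equals 12 * 10 = 120, matching the example above
```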
To handle binary strings of different lengths, we prepend 0's to the shorter one. To handle an odd length n, the left half gets floor(n/2) bits and the right half gets ceil(n/2) bits. The expression for AB then becomes:
AB = 2^(2*ceil(n/2)) * al*bl + 2^(ceil(n/2)) * [(al + ar)(bl + br) - al*bl - ar*br] + ar*br
Karatsuba algorithm can be used for any base.
Python Implementation for Karatsuba algorithm
```python
def karatsuba(x, y):
    """Multiply two numbers more efficiently than the grade-school algorithm."""
    if len(str(x)) == 1 or len(str(y)) == 1:
        return x * y
    n = max(len(str(x)), len(str(y)))
    nby2 = n // 2                 # integer division
    a = x // 10**nby2             # left half of x
    b = x % 10**nby2              # right half of x
    c = y // 10**nby2             # left half of y
    d = y % 10**nby2              # right half of y
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    # one multiplication yields the middle term, as in the trick above
    ad_plus_bc = karatsuba(a + b, c + d) - ac - bd
    return ac * 10**(2 * nby2) + ad_plus_bc * 10**nby2 + bd
```
In this article, we have discussed the divide and conquer technique in depth, with examples such as the quick sort algorithm, Strassen's matrix multiplication, and the Karatsuba algorithm for fast multiplication.