In this article, we have explored the Master theorem for calculating the time complexity of an algorithm for which a recurrence relation is formed. We have covered the limitations of the Master theorem as well.
Table of contents:
- Introduction to Algorithm Analysis
- Recurrence relations
- Master theorem
- Master theorem limitations
Introduction to Algorithm Analysis
An algorithm is a set of instructions or steps which are followed in order to complete a certain task or solve a particular problem, kind of like a recipe you follow to cook a particular meal.
As we all know, there is usually more than one way to solve a problem or complete a task, which means we can have several algorithms for a single task. Even though we may have several algorithms for a single task, some are usually better than others. Better in this context means that some algorithms will run faster or require less space than others; in other words, they will consume fewer resources. Determining which algorithm is better is not always straightforward though, as some algorithms take less space but require a longer time to complete, while others take less time but require a lot of memory (space) to execute. The amount of time and space an algorithm takes to complete is referred to as the space and time complexity of the algorithm. Asymptotic analysis is used to determine the time and space complexity of an algorithm.
Algorithms are usually grouped into different types; some examples include greedy algorithms, recursive algorithms, dynamic programming and divide and conquer algorithms. The master theorem is a method used to provide asymptotic analysis of recurrence relations that occur in many divide and conquer algorithms. A divide and conquer algorithm is an algorithm that solves a problem by first breaking it up into smaller subproblems, then solving each subproblem individually before combining the results into the solution for the main, larger problem.
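For instance, here is a minimal Python sketch (a toy example of our own, not a standard routine) that finds the maximum of a list using this divide, conquer and combine structure:

```python
def dc_max(xs):
    """Find the maximum of a non-empty list by divide and conquer."""
    if len(xs) == 1:            # base case: a single element is its own maximum
        return xs[0]
    mid = len(xs) // 2
    left = dc_max(xs[:mid])     # conquer the left half
    right = dc_max(xs[mid:])    # conquer the right half
    return max(left, right)     # combine the two sub-results

print(dc_max([3, 7, 2, 9, 4]))  # 9
```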
Recurrence relations
A recurrence relation in mathematics is an equation that expresses the nth term of a sequence as a function of the k preceding terms, for some fixed k (independent of n). This implies that once the preceding k terms are given, the next term in the sequence can be calculated.
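A classic example is the Fibonacci sequence, defined by the recurrence \( F(n) = F(n-1) + F(n-2) \), where each term depends on \( k = 2 \) preceding terms. A minimal Python sketch:

```python
def fib(n: int) -> int:
    """Fibonacci: F(n) = F(n-1) + F(n-2), each term from k = 2 preceding terms."""
    if n < 2:
        return n                            # base cases: F(0) = 0, F(1) = 1
    prev, curr = 0, 1
    for _ in range(n - 1):
        prev, curr = curr, prev + curr      # slide the window of the two preceding terms
    return curr

print([fib(i) for i in range(8)])  # [0, 1, 1, 2, 3, 5, 8, 13]
```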
In the asymptotic analysis of divide and conquer algorithms, recurrence relations do sometimes occur.
Master theorem
There are several methods of solving recurrence relations, including the recurrence tree method and the substitution method, but in this article we will be focusing on the master method. It is used to solve recurrences of the form:
\( T(n) = aT(\frac{n}{b}) + f(n) \)
where:
- \( n \) = the size of the input,
- \( a \) = the number of subproblems,
- \( \frac{n}{b} \) = the size of each subproblem,
- \( f(n) \) = the work done outside the recursive calls,

with \( a \geq 1 \), \( b > 1 \) and \( f(n) \) asymptotically positive.
The master method has 3 cases:

- If \( f(n) = O(n^{\log_b (a) - \epsilon}) \) for some constant \( \epsilon > 0 \), then \( T(n) = \Theta(n^{\log_b (a)}) \)
- If \( f(n) = \Theta(n^{\log_b (a)}) \), then \( T(n) = \Theta(n^{\log_b (a)} \log(n)) \)
- If \( f(n) = \Omega(n^{\log_b (a) + \epsilon}) \) for some constant \( \epsilon > 0 \), and if \( a f(\frac{n}{b}) \leq c f(n) \) for some constant \( c < 1 \) and all sufficiently large \( n \) (the regularity condition), then \( T(n) = \Theta(f(n)) \)
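When \( f(n) = \Theta(n^{d}) \) is a simple polynomial, the three cases reduce to comparing \( d \) with the critical exponent \( \log_b(a) \). The following Python sketch (a simplified helper of our own, not a general solver) makes that comparison explicit; the \( \epsilon \) in cases 1 and 3 is implicit in the strict comparisons, and the regularity condition holds automatically for polynomial \( f(n) \):

```python
import math

def master_theorem(a: int, b: int, d: float) -> str:
    """Classify T(n) = a*T(n/b) + Theta(n^d) for polynomial f(n) = n^d."""
    critical = math.log(a, b)                  # critical exponent log_b(a)
    if math.isclose(d, critical):              # case 2: work is balanced across levels
        return f"Theta(n^{critical:g} log n)"
    if d < critical:                           # case 1: the recursion dominates
        return f"Theta(n^{critical:g})"
    return f"Theta(n^{d:g})"                   # case 3: f(n) dominates

print(master_theorem(2, 2, 1))  # merge sort:    Theta(n^1 log n)
print(master_theorem(1, 2, 0))  # binary search: Theta(n^0 log n) = Theta(log n)
print(master_theorem(8, 2, 2))  # Theta(n^3)
```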
We will use some examples to show how the master theorem works.
In our first example, we will use the merge sort algorithm. Its running time satisfies the following recurrence:
\( T(n) = 2T(\frac{n}{2}) + n \)
\( a = 2, b = 2, f(n) = n \)
\( f(n) = n = n^{\log_b (a)} = n^{\log_2 (2)} \). This translates to case 2:
\( T(n) = \Theta(n^{\log_b (a)} \log(n)) = \Theta(n \log n) \)
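A minimal merge sort sketch in Python, annotated with where \( a \), \( b \) and \( f(n) \) come from:

```python
def merge_sort(xs):
    """Sort a list: a = 2 recursive calls, each on half the input (b = 2),
    plus a linear-time merge, so f(n) = Theta(n)."""
    if len(xs) <= 1:                 # base case: already sorted
        return xs
    mid = len(xs) // 2
    left = merge_sort(xs[:mid])      # T(n/2)
    right = merge_sort(xs[mid:])     # T(n/2)
    merged = []                      # merge step: Theta(n) work
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 8, 1, 9, 3]))  # [1, 2, 3, 5, 8, 9]
```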
Our next example will look at the binary search algorithm.
\( T(n) = T(\frac{n}{2}) + O(1) \)
\( a = 1, b = 2, f(n) = 1 \)
\( f(n) = 1 = n^{0} = n^{\log_2 (1)} = n^{\log_b (a)} \). This translates to case 2:
\( T(n) = \Theta(n^{\log_b (a)} \log(n)) = \Theta(\log n) \)
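A recursive binary search sketch that mirrors the recurrence directly:

```python
def binary_search(xs, target, lo=0, hi=None):
    """Search a sorted list: one recursive call (a = 1) on half the
    range (b = 2), with constant work outside it, so f(n) = O(1)."""
    if hi is None:
        hi = len(xs) - 1
    if lo > hi:                      # base case: not found
        return -1
    mid = (lo + hi) // 2             # O(1) work per step
    if xs[mid] == target:
        return mid
    if xs[mid] < target:
        return binary_search(xs, target, mid + 1, hi)  # recurse on right half
    return binary_search(xs, target, lo, mid - 1)      # recurse on left half

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
```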
We can also apply the master method to the following recurrence relation:
\( T(n) = 8T(\frac{n}{4}) + cn^{\frac{3}{2}} \)
\( a = 8, b = 4, f(n) = cn^{\frac{3}{2}} \)
\( f(n) = cn^{\frac{3}{2}} = \Theta(n^{\log_b (a)}) = \Theta(n^{\log_4 (8)}) = \Theta(n^{\frac{3}{2}}) \). This translates to case 2.
Therefore \( T(n) = \Theta(n^{\frac{3}{2}} \log(n)) \)
For a recurrence relation of the form:
\( T(n) = T(\frac{n}{2}) + cn \)
\( a = 1, b = 2, f(n) = cn \)
\( n^{\log_b (a)} = n^{\log_2 (1)} = n^{0} = 1 \) and \( f(n) = cn = \Omega(n^{0 + \epsilon}) \) for \( \epsilon = 1 \). This translates to case 3.
Checking the regularity condition: \( a f(\frac{n}{b}) = \frac{cn}{2} = \frac{1}{2} f(n) \), so the condition holds with constant \( \frac{1}{2} < 1 \).
Therefore \( T(n) = \Theta(f(n)) = \Theta(n) \)
For the last example we examine the following recurrence relation:
\( T(n) = 8T(\frac{n}{2}) + cn^{2} \)
\( a = 8, b = 2, f(n) = cn^{2} \)
\( n^{\log_b (a)} = n^{\log_2 (8)} = n^{3} \) and \( f(n) = cn^{2} = O(n^{3 - \epsilon}) \) for \( \epsilon = 1 \). This translates to case 1, which needs no regularity check.
Therefore \( T(n) = \Theta(n^{\log_b (a)}) = \Theta(n^{3}) \)
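As a quick sanity check of this last result, we can evaluate the recurrence numerically (with \( c = 1 \) and \( T(1) = 1 \), both arbitrary choices) and confirm that \( T(2n)/T(n) \) approaches \( 2^{3} = 8 \), as expected for \( \Theta(n^{3}) \):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n: int) -> int:
    """Evaluate T(n) = 8*T(n/2) + n^2 with T(1) = 1 and c = 1."""
    if n <= 1:
        return 1
    return 8 * T(n // 2) + n * n

# For Theta(n^3), doubling n should multiply T(n) by roughly 2^3 = 8.
for n in [2 ** k for k in range(4, 10)]:
    print(n, T(2 * n) / T(n))
```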
Master theorem limitations
The master theorem is an easy and straightforward way of solving recurrence relations, but unfortunately it has some limitations: there are some recurrence relations which cannot be solved using the master method. Examples include:
- Recurrence relations where \( f(n) \) is not a polynomial. For example, when \( f(n) = 2^{n} \)
- Recurrence relations where \( T(n) \) is not monotone. For example, when \( T(n) = \sin(n) \)
- Recurrence relations where \( a \) is not a constant. For example, if \( a = 2n \)
In situations like this, alternatives such as the recurrence tree method and the substitution method can be used.