Mathematics for Analyzing Algorithms
In this article, we explore the mathematics involved in analyzing algorithms and calculating time complexity. Before we jump into the main topic, let us first understand what an algorithm is. Basically, an algorithm is nothing but a sequence of steps used to perform a specific task. In computer science, we say that an algorithm is the set of steps we need to perform to complete a particular task.
The important thing is that we need to be careful while analyzing algorithms, because analysis tells us whether our code or program is efficient or not. A lot of mathematics is used while analyzing algorithms, and this article revolves around exactly that. We will cover basic mathematical concepts, Big-O notation, asymptotic analysis, and more. So let's get started.
Table of contents
- Floor and Ceil functions
- Powers of 2
- Log and Exponent
- Arithmetic and Geometric Progressions
- Factorial
- Permutations and Combinations
- Catalan Number
- Analyzing Algorithms
- Asymptotic Analysis
Firstly, we will see some basic mathematical formulas or concepts used in analyzing algorithms.
Floor and Ceil functions
- We will start by studying the floor and ceil functions.
The floor function returns the largest integer less than or equal to a number. Example:
If the number is 2.6, floor(2.6) returns 2.
The ceil function returns the smallest integer greater than or equal to a number. Example:
If the number is 18.4, ceil(18.4) returns 19.
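Here is a quick sketch in Python using the standard math module:
import math

print(math.floor(2.6))   # 2  -> largest integer <= 2.6
print(math.ceil(18.4))   # 19 -> smallest integer >= 18.4
print(math.floor(-2.6))  # -3 -> floor moves toward negative infinity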
Powers of 2
- Now let's move on to powers of 2. In computer science, we constantly deal with binary numbers, and that is where the concept of powers of 2 arises. For example, when we want to convert a binary number into a decimal number, we have to use powers of 2. There are many applications where powers of 2 are needed, especially in bit manipulation.
Remembering the values of the powers of 2 for small exponents helps in quick programming-related calculations:
- 2^0 = 1
- 2^1 = 2
- 2^2 = 4
- 2^3 = 8
- 2^4 = 16
- 2^5 = 32
- 2^6 = 64
- 2^7 = 128
- 2^8 = 256
- 2^9 = 512
- 2^10 = 1024 ~ 1000 = 10^3
- 2^11 = 2048
- 2^12 = 4096
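As a short Python sketch of the binary-to-decimal conversion mentioned above (the bit string 1101 is an arbitrary example):
# 1101 in binary = 1*2^3 + 1*2^2 + 0*2^1 + 1*2^0 = 8 + 4 + 0 + 1 = 13
bits = "1101"
value = sum(int(b) << (len(bits) - 1 - i) for i, b in enumerate(bits))
print(value)     # 13
print(1 << 10)   # 1024 -- a left shift computes 2^10 directly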
Log and Exponent
- Now let's explore logarithms and exponents. We all know about logarithms, exponentiation, and their properties. The important thing is that they play a vital role in analyzing algorithms; we will see how they are used in determining the time or space taken by an algorithm. In computer science, log2(x) tells us how many bits are needed to hold x values. For example, log2(256) = 8 tells us that 8 bits can hold 256 values. You may already be aware of the properties of logs and exponents, but let us revise them all the same.
Meaning of log:
If log_B(A) = C, then: B^C = A
where
- B is the base of the logarithm. In most calculations, B is not mentioned. By default in programming, B is assumed to be 2; in mathematics, B is equal to e (Euler's number) by default.
- Note that in specific cases of analyzing algorithms, you may need to consider other values of B. Consider the case of 3-way partition Quick Sort.
LOGARITHMIC PROPERTIES:
- log(A/B) = log A - log B
- log(A*B) = log A + log B
- log(A^B) = B log A
The graph of log x grows slowly compared to y = x (the linear function).
EXPONENTIAL PROPERTIES:
- A^(B+C) = A^B * A^C
- B = A^(log_A B)
- A^(B*C) = (A^B)^C
The graph of exp(x) grows much faster than y = x (the linear function) or a logarithmic function.
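A minimal Python sketch of the bit-counting fact above (n = 1000 is an arbitrary example):
import math

print(math.log2(256))                 # 8.0 -> 8 bits hold 256 values
n = 1000
print(math.floor(math.log2(n)) + 1)   # 10 -> bits needed to write n in binary
print(n.bit_length())                 # 10 -> Python's built-in equivalent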
Arithmetic and Geometric Progressions
- Let us now understand progressions. There are three standard progressions in mathematics, namely arithmetic progression, geometric progression, and harmonic progression. Arithmetic progressions are the ones most commonly used in analyzing algorithms when we are supposed to find the time complexity of an algorithm.
An ARITHMETIC PROGRESSION is a type of progression or series of numbers in which the difference between any two adjacent elements is constant. Example: 1, 2, 3, 4, 5, 6 is an AP because the difference between every pair of adjacent elements is 1. While analyzing algorithms, we sometimes come across this type of series and have to calculate its sum. Most of the time we use the formula for the sum of natural numbers, which comes from the sum of an AP.
1 + 2 + 3 + 4 + ... + n = n*(n+1)/2
Sum of AP = (n/2) * (2a+(n-1)d)
where:
- n is the total number of terms in the sequence
- a is the first term
- d is the common difference.
Related formulas will help us when we are supposed to find sums of squares or cubes:
1^0 + 2^0 + ... + n^0 = n
1^1 + 2^1 + ... + n^1 = n * (n+1) / 2
1^2 + 2^2 + ... + n^2 = (n * (n+1) * (2n+1)) / 6
1^3 + 2^3 + ... + n^3 = (n^2 * (n+1)^2) / 4
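A quick brute-force check of these closed forms in Python (n = 100 is an arbitrary choice):
n = 100
assert sum(k**1 for k in range(1, n + 1)) == n * (n + 1) // 2
assert sum(k**2 for k in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6
assert sum(k**3 for k in range(1, n + 1)) == n**2 * (n + 1)**2 // 4
print("all closed forms match")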
A GEOMETRIC PROGRESSION is a type of series in which there is a common ratio between adjacent elements. Example: 2, 4, 8, 16, ... is a GP, as the common ratio 4/2 = 8/4 = 16/8 = 2.
Sum of the GP 1 + a + a^2 + ... + a^n = (1 - a^(n+1)) / (1 - a)
where a is the common ratio (a ≠ 1).
Just as we did for natural numbers, we can write closed forms for geometric sums. These quick identities will help you in analyzing algorithms:
A^0 + A^1 + ... + A^(N-1) = (A^N - 1) / (A - 1)
2^0 + 2^1 + ... + 2^(N-1) = 2^N - 1
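A quick sanity check of the geometric sums in Python (a = 3, n = 10 are arbitrary values):
a, n = 3, 10
lhs = sum(a**k for k in range(n + 1))   # 1 + a + a^2 + ... + a^n
rhs = (a**(n + 1) - 1) // (a - 1)
print(lhs == rhs)                       # True
print(sum(2**k for k in range(8)))      # 2^0 + ... + 2^7 = 2^8 - 1 = 255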
Factorial
A factorial is the multiplication of all numbers from 1 to N. Factorial of N is denoted as N!.
N! = 1 * 2 * ... * (N-1) * N
Factorial of 0 is assumed to be 1.
0! = 1
1! = 1
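As a minimal Python sketch (the helper name factorial is ours; the standard library also ships math.factorial):
import math

def factorial(n):
    # multiply all integers from 1 to n; the empty product for n = 0 gives 1
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

print(factorial(0))        # 1
print(factorial(5))        # 120
print(math.factorial(5))   # 120 -- standard library version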
Permutations and Combinations
- Another mathematical concept most commonly used while analyzing an algorithm or while solving a particular problem is Permutations and Combinations.
Before understanding permutations and combinations, we need to recall the factorial. When we write n!, what does it mean? It tells us in how many ways we can arrange n distinct items. Say we have the number 234: the total number of ways we can arrange its 3 digits is 3! = 6, namely 234, 243, 324, 342, 423, 432.
Permutations: an arrangement of a given number of elements, taken some at a time or all at once.
Example: If we are supposed to find arrangements of r elements from a total of n elements, then we have the formula as
nPr = n!/(n-r)!
Combinations: a selection of a given number of elements, taken some at a time or all at once. The selections are all possible distinct selections.
Example: If we want to select some r elements from a total of n elements, then we have the formula as
nCr = n!/(r!*(n-r)!)
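Python's standard library computes both directly (math.perm and math.comb require Python 3.8+); n = 5, r = 2 are arbitrary values:
import math

n, r = 5, 2
print(math.perm(n, r))   # 20 = 5!/3!
print(math.comb(n, r))   # 10 = 5!/(2!*3!)

# the same values from the factorial formulas:
print(math.factorial(n) // math.factorial(n - r))                         # 20
print(math.factorial(n) // (math.factorial(r) * math.factorial(n - r)))  # 10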
Another useful fact is the number of elements in a halving sequence:
N/2, N/4, N/8, N/16, ..., 1 -> log2(N) elements
As a generalized equation:
N/B, N/B^2, N/B^3, N/B^4, ..., 1 -> logB(N) elements
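A minimal Python sketch of the halving count (halving_steps is an illustrative helper name):
def halving_steps(n):
    # count how many times n can be halved before reaching 1
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

print(halving_steps(1024))  # 10 = log2(1024)
print(halving_steps(1000))  # 9  = floor(log2(1000))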
Catalan Number
- Catalan numbers: this is another mathematical concept. It is a sequence of positive integers where the nth term in the sequence, denoted Cn, is given by the following formula:
Cn = (2 * n)! / ((n+1)! * n!)
Now if we build the sequence by substituting n = 0, 1, 2, 3, ... we get,
1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862, ...
Catalan numbers are directly related to how many ways we can split an n-gon (pentagon, hexagon, heptagon, ...) into triangles by connecting vertices so that no two line segments cross. The number of possibilities is equal to C(n-2).
Example: suppose we want to find the number of ways to split a pentagon into triangles so that no two line segments connecting the vertices cross.
A pentagon has five sides, so we need C(5-2) = C(3), obtained by putting n = 3 into the Catalan number formula. The answer is 5, which means there are 5 ways to split a pentagon into triangles so that no two line segments connecting the vertices cross.
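A short Python sketch that computes the sequence straight from the formula:
import math

def catalan(n):
    # C_n = (2n)! / ((n+1)! * n!)
    return math.factorial(2 * n) // (math.factorial(n + 1) * math.factorial(n))

print([catalan(n) for n in range(10)])
# [1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862]
print(catalan(3))   # 5 -> triangulations of a pentagon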
Analyzing Algorithms
When we hear "analyzing algorithms", the first things that come to mind are time and space complexity.
TIME COMPLEXITY: It is the total time taken by an algorithm to complete a specific task.
SPACE COMPLEXITY: It is the total space or memory used by an algorithm while executing a program.
For finding time and space complexity we will be using asymptotic analysis, but before that let us understand the various cases that arise while calculating time and space complexity.
- Best case: It can be defined as the minimum amount of time needed by an algorithm to complete a task for any input size 'n'.
- Average case: It can be defined as the average amount of time needed by an algorithm to complete a task for any input size 'n'.
- Worst case: It can be defined as the maximum amount of time needed by an algorithm to complete a task for any input size 'n'.
Asymptotic Analysis
Resources for an algorithm are usually expressed as a function of the input size. If we want to study the growth of such a function efficiently, we reduce it. Say
f(n) = a*n^2 + b*n + c
We then look at the dominating term, which is n^2, ignore the rest of the terms, and use it alone.
Asymptotic analysis is a technique for representing limiting behavior. It can be used to analyze the performance of an algorithm on large data sets. The function above is said to be asymptotically equivalent to n^2 as n -> infinity, written symbolically as f(n) ~ n^2.
There are 3 notations used to express the running time of an algorithm. They are:
- Big-O notation: the formal method of expressing an upper bound on an algorithm's running time. f(n) = O(g(n)) if and only if there exist positive constants c and n0 such that f(n) <= c*g(n) for all n >= n0. Here g(n) is an upper bound for the function f(n), as g(n) grows at least as fast as f(n).
- Omega notation: the formal method of expressing a lower bound on an algorithm's running time. It is the exact opposite of Big-O notation. f(n) = Ω(g(n)) if and only if there exist positive constants c and n0 such that f(n) >= c*g(n) for all n >= n0. Here g(n) is a lower bound for the function f(n), as g(n) grows no faster than f(n).
- Theta notation: the formal method of expressing a tight bound on an algorithm's running time. This notation is the most precise of the three. f(n) = Θ(g(n)) if and only if there exist positive constants c1, c2 and n0 such that c1*g(n) <= f(n) <= c2*g(n) for all n >= n0. This means that f(n) lies between c1*g(n) and c2*g(n), which is why it is called a tight bound.
More commonly we express the running time of an algorithm in Big-O, but it is more precise to express it in Theta notation.
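As an illustration, here is a minimal Python check (the function f and the constants c = 10, n0 = 1 are just one valid pick, not tied to any particular algorithm):
def f(n):
    return 3 * n**2 + 2 * n + 5   # f(n) = 3n^2 + 2n + 5 is O(n^2)

c, n0 = 10, 1
# verify f(n) <= c * n^2 for a range of n >= n0
print(all(f(n) <= c * n**2 for n in range(n0, 1000)))   # True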
Consider a program:
n = 10   # input size; any positive integer
x = 0
for i in range(1, n + 1):
    for j in range(1, i + 1):   # the i-th pass runs i times
        x = x + 1
print(x)   # n*(n+1)/2 = 55 for n = 10
Now, in this code, how do we find the time it takes? This is where asymptotic notation and analysis come into the picture. We can see there are two nested loops, and the total number of times x is incremented equals the total number of instructions executed.
So mathematically, 1 + 2 + 3 + 4 + ... + n = n*(n+1)/2
Now we look at the dominating term, which is n^2 in this case. So the time complexity will be O(n^2).
Let us take the example of a small code snippet of bubble sort and try to understand how to calculate its running time.
def bubble_sort(a):
    n = len(a)
    for i in range(n - 1):            # passes 1 .. n-1
        for j in range(n - 1 - i):    # the (i+1)-th pass makes n-1-i comparisons
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]   # swap adjacent elements
In this way we sort the array elements using the bubble sort technique.
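For example, using the function above on sample data:
data = [5, 1, 4, 2, 8]
bubble_sort(data)   # sorts the list in place
print(data)         # [1, 2, 4, 5, 8]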
Analysis: if we have n elements, then in the first pass the algorithm makes n-1 comparisons, in the second pass n-2, and so on. Thus the total number of comparisons is:
(n-1) + (n-2) + (n-3) + ... + 1 = n*(n-1)/2 (the AP sum we have seen above)
Here the dominating term is n^2, so the time complexity comes out to be O(n^2). Let us analyze some cases.
Best case: when we have an already sorted array, an optimized bubble sort (one that stops as soon as a pass makes no swaps) takes O(n) time, which is far better than O(n^2).
Average case: when the elements are in random order, the time taken will be O(n^2).
Worst case: when we have an array in descending order, the time complexity will again be O(n^2).
Now that we have seen the basic mathematical concepts used while analyzing algorithms, I would like to conclude this article at OpenGenus.
Thank you.
Happy reading.