
In this article, we will discuss the Decrease and Conquer technique, which is similar to Divide and Conquer. Decrease and Conquer can be used to solve several important problems, such as the Fake Coin Problem, Generating Subsets, and more.

What we'll see,

- Decrease and Conquer Technique
- Variations in Decrease and Conquer Technique
- The Fake Coin Problem
- The Selection Problem
- Generating Subsets
- Conclusion

### Decrease and Conquer Technique

In the divide and conquer technique, we divide a problem into smaller sub-problems of the same kind, conquer the sub-problems by solving them recursively (solving them directly once they are small enough), and finally combine the solutions of the sub-problems into a solution for the original problem.

The decrease and conquer approach works similarly: it reduces the problem to a smaller instance of the same problem, conquers that smaller instance recursively, and then extends its solution to obtain the solution to the original problem.

The difference is that decrease and conquer produces sub-problems but solves only one of them, discarding the rest, thereby decreasing the size of the original problem at each step. This approach is known as decrease and conquer, and also as the incremental or inductive approach.

### Variations in Decrease and Conquer Technique

There are three major variations of decrease-and-conquer:

- Decrease by a constant
- Decrease by a constant factor
- Variable size decrease

**Decrease by a Constant**: In this variation, the size of an instance is reduced by the same constant on each iteration or recursive step of the algorithm. Typically, this constant is equal to one, although other constant-size reductions also occur. This variation is used in many algorithms, such as:

- Insertion sort
- Graph search algorithms: DFS, BFS
- Topological sorting
- Algorithms for generating permutations, or subsets
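Insertion sort is the classic decrease-by-one example: to sort n elements, first sort the first n-1, then insert the last element into place. A minimal Python sketch:

```python
def insertion_sort(arr):
    # Decrease by one: to sort arr[:n+1], assume arr[:n] is already
    # sorted and insert arr[n] into its correct position.
    for n in range(1, len(arr)):
        key = arr[n]
        i = n - 1
        while i >= 0 and arr[i] > key:
            arr[i + 1] = arr[i]  # shift larger elements one slot right
            i -= 1
        arr[i + 1] = key
    return arr
```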

**Decrease by a Constant Factor**: This technique reduces a problem instance by the same constant factor on each iteration or recursive step of the algorithm. In most applications, this constant factor is equal to two; a reduction by any other factor is rare. Decrease-by-a-constant-factor algorithms are very efficient, especially when the factor is greater than 2, as in the fake-coin problem. Examples where this approach is used:

- Binary search
- Fake-coin problems
- Russian peasant multiplication
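Russian peasant multiplication illustrates decrease by a factor of 2: the first operand is halved at every step. A short Python sketch:

```python
def russian_peasant(a, b):
    # Decrease by a constant factor: halve a at each step while
    # doubling b; add b to the result whenever a is odd.
    result = 0
    while a > 0:
        if a % 2 == 1:
            result += b
        a //= 2  # instance size shrinks by a factor of 2
        b *= 2
    return result
```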

**Variable-Size Decrease**: In this variation, the size-reduction pattern varies from one iteration or step of the algorithm to another. For example, in the problem of finding the gcd of two numbers, the value of the second argument is always smaller on the right-hand side than on the left-hand side, but it decreases neither by a constant nor by a constant factor. Below are some examples that use this approach:

- Computing median and selection problem
- Interpolation Search
- Euclid’s algorithm
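Euclid's algorithm shows the variable-size pattern: the second argument becomes a mod b, and the amount it shrinks by differs from step to step.

```python
def gcd(a, b):
    # Variable-size decrease: gcd(a, b) = gcd(b, a mod b), and the
    # reduction from b to a mod b varies from one step to the next.
    while b != 0:
        a, b = b, a % b
    return a
```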

### The Fake Coin Problem

**Problem**

You are given n identical-looking coins. All have the same weight except one fake coin, which is made of a lighter metal.

There is an old-fashioned balance scale that enables you to compare any two sets of coins. If it tips to the left or to the right, you know that one of the two sets is heavier than the other. However, the machine charges you each time you weigh anything.

Now, your task is to design an algorithm to find the fake coin in the fewest number of weighings.

**Approach and Solution**

Having discussed the concept of decrease and conquer, we can see that this approach applies here.

You may well have realized that you can divide the pile in half, weigh the halves, and narrow your focus to the pile that is lighter. This approach sounds a lot like the binary search method.

Binary search divides the problem into two sub-problems but solves only one of them, discarding the other; thus it is considered a decrease and conquer approach rather than divide and conquer. This binary search approach works for finding the fake coin.

Binary search reduces the problem by a factor of 2, but we can do better for the fake coin problem.

Suppose we divide the coins into three piles, where at least two of them contain the same number of coins. After weighing the two equal-sized piles, we can eliminate two-thirds of the coins.

If n mod 3 = 0, we can divide the coins into three piles of exactly n/3 coins each.

If n mod 3 = 1, then n = 3k + 1 for some k. We can divide the coins into three piles of k, k, and k+1. It will simplify our algorithm, though, if we split them into three piles of k+1, k+1, and k-1.

And if n mod 3 = 2, then n = 3k + 2 for some k. We can divide the coins into three piles of k+1, k+1, and k.
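All three cases reduce to one rule: take two piles of ceiling(n/3) coins and put the remaining coins in the third. A quick sanity check in Python (`pile_sizes` is our own helper name):

```python
from math import ceil

def pile_sizes(n):
    # Two equal piles of ceil(n/3) coins; the remainder forms the third.
    k = ceil(n / 3)
    return k, k, n - 2 * k

# The three cases (n mod 3 = 0, 1, 2 respectively):
# pile_sizes(9)  -> (3, 3, 3)
# pile_sizes(10) -> (4, 4, 2), i.e. k+1, k+1, k-1 with k = 3
# pile_sizes(11) -> (4, 4, 3), i.e. k+1, k+1, k   with k = 3
```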

Designing the algorithm for the above approach,

```
INPUT: integer n
if n = 1 then
    the coin is fake
else
    divide the coins into piles of A = ceiling(n/3), B = ceiling(n/3),
        and C = n - 2*ceiling(n/3)
    weigh A and B
    if the scale balances then
        iterate with C
    else
        iterate with the lighter of A and B
```

Python implementation of the above algorithm,

```
from math import ceil

def find_fake_coin(n, weights):
    if n == 1:
        return weights[0]
    k = ceil(n / 3)
    A = weights[:k]          # first ceil(n/3) coins
    B = weights[k:2 * k]     # next ceil(n/3) coins
    C = weights[2 * k:]      # the remaining coins
    if sum(A) == sum(B):     # scale balances: the fake coin is in C
        return find_fake_coin(len(C), C)
    elif sum(A) < sum(B):    # A is lighter: the fake coin is in A
        return find_fake_coin(len(A), A)
    else:                    # B is lighter: the fake coin is in B
        return find_fake_coin(len(B), B)

n = int(input())
weights = [int(coin) for coin in input().split()]
print(find_fake_coin(n, weights))
```

### The Selection Problem

**Problem**

Find the i^{th} smallest element in a set of n unsorted elements. This problem is referred to as the selection problem, or the i^{th} "order statistic".

If i=1, this is finding the minimum element of a set.

If i=n, this is finding the maximum element of a set.

If i=n/2, this is finding the median or halfway point element of a set.

INPUT: A set of n numbers and a number i, with 1 <= i <= n.

OUTPUT: The element x in the given set of n numbers that is larger than exactly i-1 other elements in that set.

**Approach and Solution**

We can use a sorting algorithm such as merge sort to sort the array and then return the i^{th} element from the start of the array. This approach completes the job in O(n log(n)) time.
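This sorting-based baseline is a one-liner in Python (the function name is our own):

```python
def select_by_sorting(arr, i):
    # Sort the whole array, then index: O(n log n) time overall.
    # i is 1-indexed, so the i-th smallest sits at index i - 1.
    return sorted(arr)[i - 1]
```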

Another approach is to divide the set of n numbers into two groups: one containing elements less than a pivot value and one containing elements greater than the pivot value, where the pivot is an element taken from the array. This gives us:

S_{1}: elements < p

S_{2}: elements > p

where p refers to the pivot value.

Note that the elements in S_{1} are not sorted, but all of them are smaller than the element p. We know that p is the (|S_{1}| + 1)^{th} smallest of the n elements, where |S| denotes the size of S. This is the same idea used in the quicksort algorithm.

Designing the algorithm to find the i^{th} smallest element from the given array:

- Select a pivot element, p, from the array.
- Split the array into S_{1} and S_{2}, where all elements in S_{1} are less than p and all elements in S_{2} are greater than p.
- If i = |S_{1}| + 1, then p is the i^{th} smallest element.
- Otherwise, if i ≤ |S_{1}|, the i^{th} smallest element is somewhere in S_{1}; repeat the process recursively on S_{1}, looking for the i^{th} smallest element.
- Otherwise, it is somewhere in S_{2}; repeat the process recursively on S_{2}, looking for the (i - |S_{1}| - 1)^{th} smallest element.

The question arises of how to select p (the pivot value). Ideally, p is close to the median of the whole array; if p is the largest or smallest element, the problem size is only reduced by 1. We can pick p using one of the following:

- Always pick the first (1^{st}) or last (n^{th}) element as the pivot.
- Pick a random element as the pivot.
- Pick 3 random elements, and then pick the median of those 3 elements as the pivot.

Once we have p it is fairly easy to partition the elements.

**Pseudo-code for partitioning**

```
// Partitions array A[p..r]
Partition(A, p, r)
    // Choose first element as partition element
    x ← A[p]
    i ← p - 1
    j ← r + 1
    while true
        repeat j ← j - 1 until A[j] ≤ x
        repeat i ← i + 1 until A[i] ≥ x
        if i < j
            then exchange A[i] ↔ A[j]
        else
            // j indicates the index of the partition
            return j
```
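Putting the pieces together, the S_{1}/S_{2} scheme can be sketched as a quickselect in Python. This version uses a random pivot and builds new lists rather than partitioning in place, and the `equal` count handles duplicate pivot values (a detail the description above glosses over):

```python
import random

def quickselect(arr, i):
    # Returns the i-th smallest element (1-indexed) of arr.
    p = random.choice(arr)            # pick a random pivot
    s1 = [x for x in arr if x < p]    # elements less than the pivot
    s2 = [x for x in arr if x > p]    # elements greater than the pivot
    equal = len(arr) - len(s1) - len(s2)  # copies of the pivot itself
    if i <= len(s1):
        # The answer lies among the smaller elements.
        return quickselect(s1, i)
    elif i <= len(s1) + equal:
        # The pivot occupies positions |S1|+1 .. |S1|+equal.
        return p
    else:
        # Skip S1 and the pivot copies; search the larger elements.
        return quickselect(s2, i - len(s1) - equal)
```

On average each recursive call discards a constant fraction of the array, giving O(n) expected time.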

### Generating Subsets

**Problem**

Design an algorithm to generate the subsets of a given set.

**Solution**

Algorithm for building the subsets of the array:

- Choose one element from the input, i.e. subset[len] = S[pos]. We can decide to include it in the current subset or not.
- Recursively form the subsets that include it, i.e. allSubsets(pos+1, len+1, subset).
- Recursively form the subsets that exclude it, i.e. allSubsets(pos+1, len, subset).
- Most importantly for efficiency, make sure each subset is generated only once.

Pseudo-code for the above approach,

```
int S[N]

void allSubsets(int pos, int len, int[] subset)
{
    if (pos == N)
    {
        print(subset[0..len-1])  // the first len entries form the subset
        return
    }
    // Try the current element in the subset.
    subset[len] = S[pos]
    allSubsets(pos + 1, len + 1, subset)
    // Skip the current element.
    allSubsets(pos + 1, len, subset)
}
```
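The same include/exclude recursion can be written in Python; here we collect the subsets in a list instead of printing them:

```python
def all_subsets(s):
    # Decrease by one: at each position, branch on including or
    # excluding the element, so each subset is generated exactly once.
    subsets = []

    def recurse(pos, current):
        if pos == len(s):
            subsets.append(list(current))  # record a finished subset
            return
        current.append(s[pos])      # include s[pos]
        recurse(pos + 1, current)
        current.pop()               # exclude s[pos]
        recurse(pos + 1, current)

    recurse(0, [])
    return subsets
```

A set of n elements yields 2^n subsets, e.g. `all_subsets([1, 2, 3])` produces 8 of them, including the empty set.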

### Conclusion

In this article, we have discussed the decrease and conquer technique with the help of various examples: the Fake Coin Problem, the Selection Problem, and Generating Subsets.