
Different Hybrid Sorting Algorithms


In this article, we will discuss various hybrid sorting algorithms, such as the Tim sort algorithm. Hybrid sorting algorithms tend to perform better than pure sorting algorithms and hence are widely used in practice.


Hybrid Algorithms

A hybrid algorithm is a combination of two or more algorithms that solve the same problem. It either chooses one algorithm up front or switches between algorithms over the course of its execution, depending on the data. A hybrid algorithm is generally designed to combine the desired features of its components, so that the overall algorithm is better than any individual component. This can improve the performance of a task tremendously.

Hybrid Sorting Algorithms

Sorting appears as a subproblem in many algorithms and therefore demands an optimal solution. There are many sorting algorithms, and hybrid sorting algorithms are implemented to combine the good parts of several of them. Individual algorithms like quick sort, heap sort and merge sort fail to give optimal performance in some cases; a hybrid sorting algorithm switches, depending on the data, to whichever algorithm performs best for that case. Intro sort and Tim sort are examples of such hybrid algorithms used for sorting.
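As a minimal illustration of this switching idea (a sketch only, not any particular library's strategy), a hybrid sort might pick insertion sort for small inputs, where its low overhead wins, and an O(nlog(n)) algorithm otherwise:

#include <algorithm>
#include <vector>
using namespace std;

void hybridSort(vector<int>& v)
{
    const size_t THRESHOLD = 16;   // tuning constant, chosen arbitrarily here
    if (v.size() <= THRESHOLD)
    {
        // insertion sort: minimal overhead on small inputs
        for (size_t i = 1; i < v.size(); i++)
        {
            int key = v[i];
            size_t j = i;
            while (j > 0 && v[j - 1] > key)
            {
                v[j] = v[j - 1];
                j--;
            }
            v[j] = key;
        }
    }
    else
    {
        // std::sort is itself a hybrid (introsort) in common implementations
        sort(v.begin(), v.end());
    }
}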

The different types of Hybrid Sorting Algorithms are:

  • Tim Sort Algorithm
  • Intro Sort Algorithm
  • Spread Sort Algorithm
  • Block Sort Algorithm
  • Kirkpatrick–Reisch sort Algorithm
  • Merge-insertion sort Algorithm

1. Tim Sort Algorithm

Tim sort is a hybrid stable sorting algorithm, a combination of merge sort and insertion sort, designed to perform well on real-world data. It is used in the built-in sort functions of the Python, Java, Swift and Rust programming languages. The algorithm finds subsequences of the data that are already ordered (runs) and uses them to sort the remainder more efficiently, by merging runs until certain criteria are fulfilled.

Time Complexity of Tim sort
The best-case time complexity of Tim sort is O(n), while for the average and worst cases it is O(nlog(n)).

Space Complexity of Tim sort
Tim sort uses O(n) auxiliary space in the worst case and constant space in the average and best cases.

Implementation of Tim sort in C++

#include<bits/stdc++.h> 
using namespace std; 
const int RUN = 32; 
  
// this function sorts the array from the left index 
// to the right index, a range of size at most RUN 
void insertionSort(int arr[], int left, int right) 
{ 
    for (int i = left + 1; i <= right; i++) 
    { 
        int temp = arr[i]; 
        int j = i - 1; 
        // check j >= left before reading arr[j] to 
        // avoid an out-of-bounds access 
        while (j >= left && arr[j] > temp) 
        { 
            arr[j+1] = arr[j]; 
            j--; 
        } 
        arr[j+1] = temp; 
    } 
} 
  
// merge function merges the sorted runs 
void merge(int arr[], int l, int m, int r) 
{ 
    // original array is broken in two parts 
    // left and right array 
    int len1 = m - l + 1, len2 = r - m; 
    // vectors instead of variable-length arrays, 
    // which are not standard C++ 
    vector<int> left(len1), right(len2); 
    for (int i = 0; i < len1; i++) 
        left[i] = arr[l + i]; 
    for (int i = 0; i < len2; i++) 
        right[i] = arr[m + 1 + i]; 
  
    int i = 0; 
    int j = 0; 
    int k = l; 
  
    // after comparing, we merge those two array 
    // in larger sub array 
    while (i < len1 && j < len2) 
    { 
        if (left[i] <= right[j]) 
        { 
            arr[k] = left[i]; 
            i++; 
        } 
        else
        { 
            arr[k] = right[j]; 
            j++; 
        } 
        k++; 
    } 
  
    // copy remaining elements of left, if any 
    while (i < len1) 
    { 
        arr[k] = left[i]; 
        k++; 
        i++; 
    } 
  
    // copy remaining element of right, if any 
    while (j < len2) 
    { 
        arr[k] = right[j]; 
        k++; 
        j++; 
    } 
} 
  
// iterative Timsort function to sort the 
// array[0...n-1] (similar to merge sort) 
void timSort(int arr[], int n) 
{ 
    // Sort individual subarrays of size RUN 
    for (int i = 0; i < n; i+=RUN) 
        insertionSort(arr, i, min((i + RUN - 1), (n-1))); 
  
    // start merging from size RUN (or 32). It will merge 
    // to form size 64, then 128, 256 and so on .... 
    for (int size = RUN; size < n; size = 2*size) 
    { 
        // pick starting point of left sub array. We 
        // are going to merge arr[left..left+size-1] 
        // and arr[left+size, left+2*size-1] 
        // After every merge, we increase left by 2*size 
        for (int left = 0; left < n; left += 2*size) 
        { 
            // find ending point of left sub array 
            // mid+1 is starting point of right sub array 
            int mid = left + size - 1; 
            int right = min((left + 2*size - 1), (n-1)); 
  
            // merge sub array arr[left.....mid] & 
            // arr[mid+1....right], skipping the call 
            // when the right sub array is empty 
            if (mid < right) 
                merge(arr, left, mid, right); 
        } 
    } 
} 
  
// utility function to print the Array 
void printArray(int arr[], int n) 
{ 
    for (int i = 0; i < n; i++) 
        printf("%d  ", arr[i]); 
    printf("\n"); 
} 

2. Intro Sort Algorithm

Intro sort, or introspective sort, is a hybrid sorting algorithm that provides fast performance. It is a comparison-based sorting algorithm with three phases, using quicksort along with the heapsort and insertion sort algorithms. Intro sort is used in the sort function of the C++ STL. For an in-depth explanation, check out Intro sort.

Time Complexity of Intro sort
The worst-case running time of intro sort is O(nlog(n)), improving on the O(n^2) worst case of quicksort. The time complexity of the average and best cases is also O(nlog(n)).

Space Complexity of Intro sort
Introspective sort algorithm uses O(log(n)) space.

Pseudo code for Intro sort

INTROSORT(A, f, b)
    Inputs:-
        A : data structure containing data in positions A[f], ..., A[b-1]
        f : the first position of the sequence
        b : the first position beyond the end of the sequence
    INTROSORT_LOOP(A, f, b, 2*floor(log(b-f)))
    INSERTION_SORT(A, f, b)
    
INTROSORT_LOOP(A, f, b, depth_limit : a non-negative integer)
    while b-f > size_threshold
        do if depth_limit = 0
            return HEAPSORT(A, f, b)
        depth_limit ← depth_limit - 1
        p ← PARTITION(A, f, b, MEDIAN_OF_3(A[f], A[f+(b-f)/2], A[b-1]))
        INTROSORT_LOOP(A, p, b, depth_limit)
        b ← p
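The pseudocode translates into C++ roughly as follows. This is a minimal sketch: the size_threshold value, the Hoare-style partition, and the use of std::make_heap/std::sort_heap for the heapsort phase are our assumptions, not taken from any particular library.

#include <algorithm>
#include <cmath>
#include <vector>
using namespace std;

const int size_threshold = 16;

// Hoare-style partition around a pivot value known to occur in the
// range; returns a split point p with f < p < b
int partitionAround(vector<int>& A, int f, int b, int pivot)
{
    int i = f, j = b;
    while (true)
    {
        while (A[i] < pivot) i++;
        j--;
        while (pivot < A[j]) j--;
        if (i >= j) return i;
        swap(A[i], A[j]);
        i++;
    }
}

void introsortLoop(vector<int>& A, int f, int b, int depth_limit)
{
    while (b - f > size_threshold)
    {
        if (depth_limit == 0)
        {
            // recursion too deep: fall back to heapsort
            make_heap(A.begin() + f, A.begin() + b);
            sort_heap(A.begin() + f, A.begin() + b);
            return;
        }
        depth_limit--;
        int mid = f + (b - f) / 2;
        // median-of-3 pivot selection
        int pivot = max(min(A[f], A[mid]),
                        min(max(A[f], A[mid]), A[b - 1]));
        int p = partitionAround(A, f, b, pivot);
        introsortLoop(A, p, b, depth_limit);  // recurse on the right part
        b = p;                                // iterate on the left part
    }
}

void introsort(vector<int>& A)
{
    int n = A.size();
    if (n > 1)
        introsortLoop(A, 0, n, 2 * (int)log2((double)n));
    // a final insertion sort finishes the nearly-sorted array
    for (int i = 1; i < n; i++)
    {
        int tmp = A[i];
        int j = i - 1;
        while (j >= 0 && A[j] > tmp)
        {
            A[j + 1] = A[j];
            j--;
        }
        A[j + 1] = tmp;
    }
}

Calling introsort(v) sorts the vector in place; the depth limit of 2*floor(log2(n)) is what bounds the quicksort recursion before heapsort takes over.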

3. Spread Sort Algorithm

Spreadsort is a sorting algorithm which combines concepts from distribution-based sorts, such as radix sort and bucket sort, with partitioning concepts from comparison sorts such as quicksort and mergesort. In experimental results it was shown to be highly efficient, often outperforming traditional algorithms such as quicksort, particularly on distributions exhibiting structure, and in string sorting.

Time Complexity of Spread sort
The worst-case performance of spreadsort is O(nlog(n)) for small data sets, as it uses introsort as a fallback. For distributions where twice the key size in bits k is roughly the square of the log of the list size n or smaller (2k < (log(n))^2), it does better in the worst case, achieving O(n√(k - log(n))).

Space Complexity of Spread sort
Spreadsort uses asymptotically more space than the O(log(n)) overhead of quicksort or the O(1) overhead of heapsort, but considerably less space than the basic form of mergesort, which uses auxiliary space equal to the space occupied by the list.

Implementation of Spread sort in C++

unsigned 
RoughLog2(DATATYPE input) 
{
	unsigned char cResult = 0;
	// The && is necessary on some compilers to avoid infinite loops; it doesn't
	// significantly impair performance
	if (input >= 0)
		while ((input >> cResult) && (cResult < DATA_SIZE)) cResult++;
	else
		while (((input >> cResult) < -1) && (cResult < DATA_SIZE)) cResult++;
	return cResult;
}
SIZETYPE
GetMaxCount(unsigned logRange, unsigned uCount)
{
	unsigned logSize = RoughLog2Size(uCount);
	unsigned uRelativeWidth = (LOG_CONST * logRange)/((logSize > MAX_SPLITS) ? MAX_SPLITS : logSize);
	// Don't try to bitshift more than the size of an element
	if (DATA_SIZE <= uRelativeWidth)
		uRelativeWidth = DATA_SIZE - 1;
	return 1 << ((uRelativeWidth < (LOG_MEAN_BIN_SIZE + LOG_MIN_SPLIT_COUNT)) ? 
		(LOG_MEAN_BIN_SIZE + LOG_MIN_SPLIT_COUNT) :  uRelativeWidth);
}

void 
FindExtremes(DATATYPE *Array, SIZETYPE uCount, DATATYPE & piMax, DATATYPE & piMin)
{
	SIZETYPE u;
	piMin = piMax = Array[0];
	for (u = 1; u < uCount; ++u) {
		if (Array[u] > piMax)
			piMax = Array[u];
		else if (Array[u] < piMin)
			piMin = Array[u];
	}
}	

//---------------------SpreadSort Source-----------------

Bin *
SpreadSortCore(DATATYPE *Array, SIZETYPE uCount, SIZETYPE & uBinCount, DATATYPE &iMax, DATATYPE &iMin)
{
	// This step is roughly 10% of runtime but it helps avoid worst-case
	// behavior and improves behavior with real data.  If you know the
	// maximum and minimum ahead of time, you can pass those values in
	// and skip this step for the first iteration
	FindExtremes((DATATYPE *) Array, uCount, iMax, iMin);
	if (iMax == iMin)
		return NULL;
	DATATYPE divMin,divMax;
	SIZETYPE u;
	int LogDivisor;
	Bin * BinArray;
	Bin* CurrentBin;
	unsigned logRange;
	logRange = RoughLog2Size((SIZETYPE)iMax-iMin);
	if ((LogDivisor = logRange - RoughLog2Size(uCount) + LOG_MEAN_BIN_SIZE) < 0)
		LogDivisor = 0;
	// The below if statement is only necessary on systems with high memory
	// latency relative to processor speed (most modern processors)
	if ((logRange - LogDivisor) > MAX_SPLITS)
		LogDivisor = logRange - MAX_SPLITS;
	divMin = iMin >> LogDivisor;
	divMax = iMax >> LogDivisor;
	uBinCount = divMax - divMin + 1;
	
	// Allocate the bins and determine their sizes
	BinArray = (Bin *) calloc(uBinCount, sizeof(Bin));
	// Memory allocation failure check and clean return with sorted results
	if (!BinArray) {
		printf("Using std::sort because of memory allocation failure\n");
		std::sort(Array, Array + uCount);
		return NULL;
	}
		
	// Calculating the size of each bin; this takes roughly 10% of runtime
	for (u = 0; u < uCount; ++u)
		BinArray[(Array[u] >> LogDivisor) - divMin].uCount++;
	// Assign the bin positions
	BinArray[0].CurrentPosition = (DATATYPE *)Array;
	for (u = 0; u < uBinCount - 1; u++) {
		BinArray[u + 1].CurrentPosition = BinArray[u].CurrentPosition + BinArray[u].uCount;
		BinArray[u].uCount = BinArray[u].CurrentPosition - Array;
	}
	BinArray[u].uCount = BinArray[u].CurrentPosition - Array;
	
	// Swap into place.  This dominates runtime, especially in the swap;
	// std::sort calls are the other main time-user.
	for (u = 0; u < uCount; ++u) {
		for (CurrentBin = BinArray + ((Array[u] >> LogDivisor) - divMin);  (CurrentBin->uCount > u); 
			CurrentBin = BinArray + ((Array[u] >> LogDivisor) - divMin))
				SWAP(Array + u, CurrentBin->CurrentPosition++);
		// Now that we've found the item belonging in this position,
		// increment the bucket count
		if (CurrentBin->CurrentPosition == Array + u)
			++(CurrentBin->CurrentPosition);
	}
	
	// If we've bucketsorted, the array is sorted and we should skip recursion
	if (!LogDivisor) {
		free(BinArray);
		return NULL;
	}
	return BinArray;
}

void
SpreadSortBins(DATATYPE *Array, SIZETYPE uCount, SIZETYPE uBinCount, const DATATYPE &iMax
				, const DATATYPE &iMin, Bin * BinArray, SIZETYPE uMaxCount)
{
	SIZETYPE u;
	for (u = 0; u < uBinCount; u++) {
		SIZETYPE count = (BinArray[u].CurrentPosition - Array) - BinArray[u].uCount;
		// Don't sort unless there are at least two items to compare
		if (count < 2)
			continue;
		if (count < uMaxCount)
			std::sort(Array + BinArray[u].uCount, BinArray[u].CurrentPosition);
		else
			SpreadSortRec(Array + BinArray[u].uCount, count);
	}
	free(BinArray);
}

void 
SpreadSortRec(DATATYPE *Array, SIZETYPE uCount)
{
	if (uCount < 2)
		return;		
	DATATYPE iMax, iMin;
	SIZETYPE uBinCount;
	Bin * BinArray = SpreadSortCore(Array, uCount, uBinCount, iMax, iMin);
	if (!BinArray)
		return;
	SpreadSortBins(Array, uCount, uBinCount, iMax, iMin, BinArray,
	               GetMaxCount(RoughLog2Size((SIZETYPE)iMax-iMin), uCount));
}
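The listing above depends on several supporting definitions that are not shown: the Bin structure, the SWAP macro, RoughLog2Size, and the tuning constants. A plausible minimal set is sketched below; these are reconstructions for illustration, and the original implementation's definitions may differ (in particular, the constant values are guesses):

#include <algorithm> 
#include <cstdio> 
#include <cstdlib> 

typedef int DATATYPE;                      // element type being sorted 
typedef unsigned SIZETYPE;                 // index/count type 
#define DATA_SIZE (8 * sizeof(DATATYPE))   // key size in bits 
#define MAX_SPLITS 11                      // max bits handled per pass (a guess) 
#define LOG_MEAN_BIN_SIZE 2                // log2 of target mean bin size (a guess) 
#define LOG_MIN_SPLIT_COUNT 9              // log2 of minimum split count (a guess) 
#define LOG_CONST 4                        // tuning constant for GetMaxCount (a guess) 
#define SWAP(x, y) std::swap(*(x), *(y)) 

struct Bin { 
    SIZETYPE uCount;                       // item count, later reused as a start offset 
    DATATYPE *CurrentPosition;             // next free slot in this bin 
}; 

// rough base-2 logarithm of a count (number of significant bits) 
unsigned RoughLog2Size(SIZETYPE uCount) 
{ 
    unsigned cResult = 0; 
    while ((uCount >> cResult) && (cResult < 8 * sizeof(SIZETYPE))) 
        cResult++; 
    return cResult; 
} 

// forward declaration: SpreadSortBins calls back into SpreadSortRec 
void SpreadSortRec(DATATYPE *Array, SIZETYPE uCount); 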

4. Block Sort Algorithm

Block sort, or block merge sort, is a sorting algorithm that combines two merge operations with an insertion sort to arrive at O(nlog(n)) in-place stable sorting. It gets its name from the observation that merging two sorted lists, A and B, is equivalent to breaking A into evenly sized blocks, inserting each A block into B under special rules, and merging AB pairs.

Time Complexity of Block sort
The worst-case and average-case performance of the block sort algorithm is O(nlog(n)), while the best-case performance is O(n).

Space Complexity of Block sort
The worst-case space complexity of the block sort algorithm is O(1).

Overview of Block sort algorithm
The outer loop of block sort is identical to a bottom-up merge sort: each level of the sort merges pairs of subarrays, A and B, in sizes of 1, then 2, then 4, 8, 16, and so on, until both subarrays combined are the array itself.

Rather than merging A and B directly as with traditional methods, a block-based merge algorithm divides A into discrete blocks of size √A (resulting in √A blocks as well), inserts each A block into B such that the first value of each A block is less than or equal (≤) to the B value immediately after it, then locally merges each A block with any B values between it and the next A block.

As merges still require a separate buffer large enough to hold the A block being merged, two areas within the array are reserved for this purpose (known as internal buffers). The first two A blocks are thus modified to contain the first instance of each value within A, with the original contents of those blocks shifted over if necessary. The remaining A blocks are then inserted into B and merged using one of the two buffers as swap space, which rearranges the values in that buffer. Once every A and B block of every subarray has been merged for that level of the merge sort, the values in that buffer must be sorted to restore their original order, so an insertion sort is applied. The values in the buffers are then redistributed to their first sorted position within the array. This process repeats for each level of the outer bottom-up merge sort, at which point the array will have been stably sorted.
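Below is a minimal sketch of the outer loop just described, with std::inplace_merge standing in for the block-based merge step (the real algorithm replaces it with the √A-block insertion and internal-buffer machinery so that only O(1) extra space is used):

#include <algorithm> 
#include <vector> 
using namespace std; 

void blockSortOutline(vector<int>& arr) 
{ 
    size_t n = arr.size(); 
    // outer loop: identical to a bottom-up merge sort 
    for (size_t len = 1; len < n; len *= 2) 
    { 
        for (size_t start = 0; start + len < n; start += 2 * len) 
        { 
            size_t mid = start + len; 
            size_t end = min(start + 2 * len, n); 
            // the real algorithm splits arr[start..mid) into 
            // sqrt(len)-sized blocks, inserts them into the right 
            // run, and merges locally via the internal buffers; 
            // inplace_merge is a stand-in for all of that 
            inplace_merge(arr.begin() + start, arr.begin() + mid, 
                          arr.begin() + end); 
        } 
    } 
} 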

5. Kirkpatrick–Reisch Sort Algorithm

Kirkpatrick–Reisch sorting is a fast sorting algorithm for items with limited-size integer keys. It is notable for having an asymptotic time complexity that is better than radix sort.

Time Complexity of Kirkpatrick–Reisch sort
Radix sort sorts n w-bit integers by splitting them into chunks of log(n) bits each and sorting each chunk in linear time, thus achieving O(nw/log(n)) time. Kirkpatrick–Reisch sort achieves an exponentially smaller factor next to n, with an asymptotic time complexity of

O(n + nlog(w/log(n)))

Space Complexity of Kirkpatrick–Reisch sort
The deterministic version of the algorithm uses a huge O(2^(w/2)) amount of memory. In practice it is better to combine the idea with universal hashing, which gives a randomized algorithm with the same expected time and linear space.

Overview of Kirkpatrick–Reisch sort algorithm
Suppose we have a list of numbers. We split each number into a top half and a bottom half of its bits (in our example, 4 decimal digits each) and add the number to a trie as a length-2 path: the first level holds the more significant half, the leaves hold the less significant half. When building the trie we have to be able to look up first-level nodes by value, to avoid duplicating them; this is where hashing (and hence randomization) comes in. In the leaves, duplicates are fine.

We find the minimum leaf of each subtree and make it the first child of that subtree. We then take all the nodes other than the root and the minimum leaves, and sort them recursively. There are n leaves; we added some number of level-1 nodes but skipped the same number of minimum leaves, so the recursive sort also sorts n numbers, each with half as many bits.

Using the sorted order of nodes computed in the previous step, we reorder all the edges so that they are in sorted order: we detach all the children (except the minimum leaves), then walk the nodes in sorted order and re-attach each one to its original parent. Finally, we walk the trie left-to-right and recombine the high bits with the low bits to get the final sorted list.
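To make this walk-through concrete, below is a toy C++ sketch for 16-bit keys, splitting each into 8-bit halves; the hash map plays the role of the randomized first-level lookup, and std::sort stands in for the recursive call on the half-width values. All names here are illustrative assumptions:

#include <algorithm> 
#include <cstdint> 
#include <unordered_map> 
#include <utility> 
#include <vector> 
using namespace std; 

vector<uint16_t> krSort(const vector<uint16_t>& keys) 
{ 
    // 1. build the two-level trie: high byte -> low bytes; the hash 
    //    lookup avoids duplicating first-level nodes 
    unordered_map<uint8_t, vector<uint8_t>> trie; 
    for (uint16_t k : keys) 
        trie[(uint8_t)(k >> 8)].push_back((uint8_t)k); 

    // 2. make the minimum leaf the first child of every subtree 
    for (auto& kv : trie) 
        iter_swap(kv.second.begin(), 
                  min_element(kv.second.begin(), kv.second.end())); 

    // 3. collect every level-1 node value and every non-minimum leaf: 
    //    exactly n values of half the width; sort() stands in for the 
    //    recursive call (tag 256 marks a node, otherwise the tag is 
    //    the parent's value) 
    vector<pair<int, int>> vals; 
    for (auto& kv : trie) 
    { 
        vals.push_back({kv.first, 256}); 
        for (size_t i = 1; i < kv.second.size(); i++) 
            vals.push_back({kv.second[i], kv.first}); 
    } 
    sort(vals.begin(), vals.end()); 

    // 4. detach the non-minimum leaves, then walk the sorted values 
    //    and re-attach each leaf to its original parent 
    vector<uint8_t> node_order; 
    for (auto& kv : trie) 
        kv.second.resize(1); 
    for (auto& vt : vals) 
    { 
        if (vt.second == 256) node_order.push_back((uint8_t)vt.first); 
        else trie[(uint8_t)vt.second].push_back((uint8_t)vt.first); 
    } 

    // 5. walk the trie left to right, recombining high and low halves 
    vector<uint16_t> out; 
    for (uint8_t hi : node_order) 
        for (uint8_t lo : trie[hi]) 
            out.push_back((uint16_t)((hi << 8) | lo)); 
    return out; 
} 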

6. Merge-insertion Sort Algorithm

Merge-insertion sort, also known as the Ford–Johnson algorithm, is a comparison sorting algorithm that uses fewer comparisons in the worst case than both binary insertion sort and merge sort. Its insertion order is chosen so that the binary searches it performs are as efficient as possible, as explained below.

Time Complexity of Merge-insertion sort
The time complexity of merge-insertion sort is O(nlog(n)); the algorithm's advantage over plain merge sort and binary insertion sort is a smaller number of comparisons, not a better asymptotic bound.

Space Complexity of Merge-insertion sort
The merge-insertion sort algorithm takes constant (O(1)) auxiliary space.

Algorithmic steps for Merge-insertion sort
Merge-insertion sort performs the following steps, on an input X of n elements.

  1. Group the elements of X into floor(n/2) pairs of elements, arbitrarily, leaving one element unpaired if there is an odd number of elements.
  2. Perform floor(n/2) comparisons, one per pair, to determine the larger of the two elements in each pair.
  3. Recursively sort the floor(n/2) larger elements from each pair, creating a sorted sequence S of floor(n/2) of the input elements, in ascending order.
  4. Insert at the start of S the element that was paired with the first and smallest element of S.
  5. Insert the remaining ceil(n/2) - 1 elements of X into S, one at a time, with a specially chosen insertion ordering described below. Use binary search in subsequences of S to determine the position at which each element should be inserted.

The algorithm is designed to take advantage of the fact that the binary searches used to insert elements into S are most efficient when the length of the subsequence that is searched is one less than a power of two, as for those lengths all outcomes of the search use the same number of comparisons as each other. To choose an insertion ordering that produces these lengths, consider the sorted sequence S after step 4 of the outline above (before inserting the remaining elements), and let xi denote the ith element of this sorted sequence. Thus,

S = (x1, x2,...)

where each element xi with i>=3 is paired with an element yi < xi that has not yet been inserted. If n is odd, the remaining unpaired element should also be numbered as yi with i larger than the indexes of the paired elements. Then, the final step of the outline above can be expanded into the following steps:

  • Partition the uninserted elements yi into groups with contiguous indexes. There are two elements, y3 and y4, in the first group, and the sums of the sizes of every two adjacent groups form a sequence of powers of two. Thus, the sizes of the groups are: 2, 2, 6, 10, 22, 42, ... (see the sketch after this list).
  • Order the uninserted elements by their groups, but within each group order them from larger indexes to smaller indexes. Thus, the ordering becomes y4, y3, y6, y5...
  • Use this ordering to insert the elements yi into S. For each element yi, use a binary search from the start of S up to but not including xi to determine where to insert yi.
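A small helper (illustrative only, not from any library) that generates these group sizes from the power-of-two relation above:

#include <cstdio> 
#include <vector> 
using namespace std; 

// generate the group sizes 2, 2, 6, 10, 22, 42, ... using the relation 
// that the sums of every two adjacent sizes are successive powers of two 
vector<long> groupSizes(int count) 
{ 
    vector<long> s; 
    s.push_back(2);          // the first group holds y3 and y4 
    long pow = 4;            // sums of adjacent sizes: 4, 8, 16, 32, ... 
    while ((int)s.size() < count) 
    { 
        s.push_back(pow - s.back()); 
        pow *= 2; 
    } 
    return s; 
} 

int main() 
{ 
    for (long v : groupSizes(6)) 
        printf("%ld ", v);   // prints: 2 2 6 10 22 42 
    printf("\n"); 
    return 0; 
} 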

With this article at OpenGenus, you must have a strong idea of Hybrid Sorting Algorithms.
