Saturday, May 2, 2009

Quicksort algorithm

This is among the fastest non-destructive sorts known, and given the lower bound on how comparison sorts must work, it is likely close to the fastest such sort possible.
// sort everything between `low' and `high' (inclusive)
void quicksort(int arr[], int low, int high)
{
   int i = low;
   int j = high;
   int y = 0;

   int z = arr[(low + high) / 2]; // pivot (comparison value)

   do // partition
   {
       while(arr[i] < z) i++; // find high element ... 
       while(arr[j] > z) j--; // find low element ... 

       if(i <= j) // swap two elements 
       {
           y = arr[i];
           arr[i] = arr[j];
           arr[j] = y;
           i++;
           j--;
       }
   } while(i <= j);

   // recurse to smaller subsets until everything is sorted
   if(low < j)
   { quicksort(arr, low, j); }

   if(i < high)
   { quicksort(arr, i, high); }
}

1 comment:

Unknown said...

You know this is not the fastest sort known. It is the fastest non-destructive sort known. Radix sort is far faster than quicksort but has the limitation that it destroys your list.

Radix sort, or bucket sort, can sort a list in O(n) time by simply counting all the elements in a list and then generating a new list in sorted order over top of the old one. This works when there is a finite number of discrete element values to sort, i.e. a known range of integer values.
Then, when you create a bucket for each key, you can simply push all elements into their buckets and build a new list with each element at the correct location based on which bucket it is in.
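What the comment describes is essentially a counting sort. A minimal sketch, assuming the keys are known to lie in a fixed range [0, RANGE) (the RANGE value and function name here are my own illustration):

```c
#include <string.h>

#define RANGE 256 /* assumed: keys lie in [0, RANGE) */

/* Overwrite arr with its elements in sorted order, in O(n + RANGE) time. */
void counting_sort(int arr[], int n)
{
    int count[RANGE];
    memset(count, 0, sizeof count);

    for (int i = 0; i < n; i++) /* tally each key into its bucket */
        count[arr[i]]++;

    int k = 0;
    for (int v = 0; v < RANGE; v++) /* rebuild the list in key order */
        while (count[v]-- > 0)
            arr[k++] = v;
}
```

Note this "destroys" the list in the sense the comment means: the original elements are replaced by reconstructed keys, so any data attached to them is lost.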

In general it depends on what is known about the data and how it is stored to know what kind of sorting algorithm to use. For example, bubble sort beats quicksort for N < approx 10000, if I recall correctly, and for most partially sorted lists as well. In practice I've seen more bubble sorts implemented than quicksorts for just this reason: the O(n*n) cost is amortized over several operations, and it is rare that we get a large set of randomly ordered numbers needing to be sorted.
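For completeness, here is the kind of bubble sort the comment is talking about (a sketch; the early-exit flag is what makes it cheap on lists that are already mostly in order):

```c
#include <stdbool.h>

void bubble_sort(int arr[], int n)
{
    bool swapped = true;
    while (swapped)
    {
        swapped = false;
        for (int i = 1; i < n; i++)
        {
            if (arr[i - 1] > arr[i]) /* out of order: swap the pair */
            {
                int t = arr[i - 1];
                arr[i - 1] = arr[i];
                arr[i] = t;
                swapped = true;
            }
        }
        n--; /* the largest remaining element is now in its final place */
    }
}
```

On an already-sorted list this makes a single pass, sees no swaps, and exits, so the cost is O(n) rather than O(n*n).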