Quicksort

Quicksort is a sort algorithm which, [on average]?, needs O(n log(n)) comparisons to sort n items. It can operate in place, meaning that it requires only a constant amount of additional storage space. Its inner loop is inherently very fast on nearly all computers, which makes it significantly faster in the average case than other O(n log n) algorithms that can sort in place. Quicksort's [worst-case performance]? is O(n²): much worse than that of some other sorting algorithms such as heapsort or merge sort, although careful optimization can make such bad performance very uncommon. Because of its good average performance and simple implementation, Quicksort is one of the most popular sorting algorithms in use. Quicksort was invented by [C. A. R. Hoare]?.

The Quicksort algorithm uses a recursive [divide and conquer]? strategy to sort a list. The steps are:

  1. pick a pivot element from the list.
  2. reorder the list so that all elements less than the pivot come before the pivot and all other elements come after it (this reordering is called the partition step).
  3. recursively sort the sublist of elements less than the pivot and the sublist of elements greater than the pivot. If one of the sublists is empty, it can be ignored.

A simple implementation of Quicksort on an array of integers in C:

void quicksort(int * low, int * high)
{
   /* I naively use the first value in the array as the pivot */
   /* this will not give good performance in real usage */
   
   int * lowbound = low + 1;       /* the high boundary of the low subarray */
   int * highbound = high - 1;     /* the low boundary of the high subarray */
   int temp;
   
   while(lowbound <= highbound)    /* partition the array */
   {
      if(*lowbound < *low)         /* compare to pivot */
        lowbound++;                /* move lowbound toward the middle */
      else
      {
         temp = *lowbound;         /* swap *lowbound and *highbound */
         *lowbound = *highbound;
         *highbound = temp;
         highbound--;              /* move highbound toward the middle */
      }
   }
   
   highbound++;                    /* move bounds back to the correct positions */
   lowbound--;
   
   temp = *low;                    /* move the pivot into the middle */
   *low = *lowbound;
   *lowbound = temp;
   
   if(low != lowbound)             /* recurse on the subarrays */
     quicksort(low, lowbound);
   if(high != highbound)
     quicksort(highbound, high);
}
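
In this version, low points at the first element and high points one past the last element, so a whole array is sorted as in the following minimal usage sketch (the array contents are just an example):

#include <stdio.h>

int main(void)
{
   int a[] = { 5, 2, 9, 1, 7 };
   int n = sizeof a / sizeof a[0];
   int i;

   quicksort(a, a + n);            /* high is one past the last element */

   for(i = 0; i < n; i++)
      printf("%d ", a[i]);         /* prints 1 2 5 7 9 */
   printf("\n");
   return 0;
}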

Here's an implementation in Python:

def partition(array, start, end, cmp):
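    # 'end' is inclusive here; the final index of the partition element is returned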
    while start < end:
        # at the top of this loop, our partition element is at 'start'
        while start < end:
            if cmp(array[start], array[end]):
                (array[start], array[end]) = (array[end], array[start])
                break
            end = end - 1
        # at the top of this loop, our partition element is at 'end'
        while start < end:
            if cmp(array[start], array[end]):
                (array[start], array[end]) = (array[end], array[start])
                break
            start = start + 1
    return start

def quicksort(array, cmp=lambda x, y: x > y, start=None, end=None):
    """The fastest exchange sort for most purposes."""
    if start is None: start = 0
    if end is None: end = len(array)
    if start < end:
        i = partition(array, start, end-1, cmp)
        quicksort(array, cmp, start, i)
        quicksort(array, cmp, i+1, end)

A naive implementation of Quicksort, like the ones above, will be terribly inefficient for certain inputs. (The routine above will take [quadratic time]? if the input is already sorted!) However, the excellent average-case performance of the algorithm has led to extensive work on tuning its performance, and a good quicksort is often the fastest sorting algorithm available.

There are two aspects of quicksort that particularly benefit from tuning. Firstly, the choice of pivot element is critical. Choosing the first, last, or middle element leads to horrible worst-case times on certain kinds of (realistic) input data. Using a random element instead works quite well; more sophisticated approaches involve inspecting several elements and choosing one which appears to be "central". (By taking this to extremes, the bad worst-case behaviour can be eliminated completely, but the cost in overhead is high.) Secondly, the inner loop which performs the partitioning can be sped up considerably by careful micro-optimization.
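
As an illustration of the first point, a median-of-three pivot can be grafted onto the C routine above by moving the median of the first, middle and last elements into *low before partitioning; the existing code then keeps using *low as its pivot. This is only a sketch of one common approach (choose_pivot is a made-up helper name, not part of the routine above):

/* sketch: call choose_pivot(low, high) at the top of quicksort, before */
/* the partitioning loop, to use the median of three elements as pivot  */
void choose_pivot(int * low, int * high)
{
   int * mid = low + (high - low) / 2;
   int * last = high - 1;
   int temp;

   /* order *low, *mid and *last so that *mid holds their median */
   if(*mid < *low)   { temp = *low;  *low = *mid;   *mid = temp;  }
   if(*last < *mid)  { temp = *mid;  *mid = *last;  *last = temp; }
   if(*mid < *low)   { temp = *low;  *low = *mid;   *mid = temp;  }

   temp = *low;                    /* move the median into *low, */
   *low = *mid;                    /* where the pivot is taken from */
   *mid = temp;
}

With this change, already-sorted and reverse-sorted inputs no longer trigger the quadratic case, although carefully constructed inputs still can; picking a random element (for example with rand()) gives a similar improvement in practice.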

External link:

[Demonstration and comparison] of Quicksort with Bubble sort, Merge sort and Radix sort implemented in JavaScript

