Conceptually, merge sort works as follows. If the list to be sorted is longer than one item:

1. Divide the list into two halves of roughly equal size.
2. Sort each half recursively by applying merge sort to it.
3. Merge the two sorted halves back into a single sorted list.

A list of zero or one items is already sorted, so the recursion stops there.
Here's a terrible implementation of mergesort in Python:
def merge(array, start1, end1, start2, end2, output, outstart, cmp):
    """Merge two sorted sequences into a new sorted sequence.

    Takes two sorted sequences 'array[start1:end1]' and
    'array[start2:end2]' and merges them into a new sorted sequence,
    which it places in the array 'output', starting at 'outstart'.
    """
    while start1 != end1 or start2 != end2:
        # Take from the first sequence unless it is exhausted or the
        # second sequence's head must come first; ties go to the first
        # sequence, which is what makes the sort stable.
        if start2 == end2 or (start1 != end1 and
                              not cmp(array[start1], array[start2])):
            output[outstart] = array[start1]
            start1 = start1 + 1
        else:
            output[outstart] = array[start2]
            start2 = start2 + 1
        outstart = outstart + 1

def mergesort(array, cmp=lambda x, y: x > y, scratch=None, start=None, end=None):
    """The fastest stable sort for large data sets."""
    if scratch is None:
        scratch = [0] * len(array)
    if start is None:
        start = 0
    if end is None:
        end = len(array)
    if end - start > 1:
        middle = (start + end) // 2  # integer division; a bare '/' breaks on Python 3
        mergesort(array, cmp, scratch, start, middle)
        mergesort(array, cmp, scratch, middle, end)
        merge(array, start, middle, middle, end, scratch, start, cmp)
        array[start:end] = scratch[start:end]
Merge sort has an average and worst-case performance of O(n log n). This means that it often needs to make fewer comparisons than quicksort, whose worst case is O(n^2). However, the algorithm's overhead is slightly higher than quicksort's, and, depending on the data structure to be sorted, it may take more memory (though this is becoming less and less of a consideration). It is also much more efficient than quicksort if the data to be sorted can only be accessed efficiently in sequential order, and is thus popular in languages such as LISP, where sequentially accessed data structures are very common. Merge sort is a stable sort.
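To see what stability means in practice, here is a small self-contained sketch (the function msort and its key parameter are invented for this illustration, not part of the implementation above): records with equal keys come out in the same relative order they went in.

```python
def msort(items, key=lambda x: x):
    """Compact top-down merge sort, written only for this demo."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left = msort(items[:mid], key)
    right = msort(items[mid:], key)
    out, i, j = [], 0, 0
    while i < len(left) or j < len(right):
        # '<=' sends ties to the left half, so records with equal keys
        # keep their original relative order -- the stability property.
        if j == len(right) or (i < len(left) and key(left[i]) <= key(right[j])):
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    return out

records = [("b", 1), ("a", 2), ("b", 3), ("a", 4)]
print(msort(records, key=lambda r: r[0]))
# → [('a', 2), ('a', 4), ('b', 1), ('b', 3)]
```

Note that within each key the second fields appear in their original order; an unstable sort would be free to swap them.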
Mergesort is so sequential that it's practical to run it on tapes if you have four tape drives. It works as follows:

1. Divide the data to be sorted in half and put one half on each of two tapes.
2. Merge individual pairs of records from the two tapes into sorted two-record runs, writing the runs alternately onto the two output tapes.
3. Merge the two-record runs from the output tapes into four-record runs, writing these alternately back onto the original two tapes.
4. Repeat, doubling the run length on each pass, until a single run contains all the data.
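The tape procedure can be sketched in Python, with lists standing in for the four tapes (tape_mergesort is a name invented for this illustration; a real tape sort would stream records rather than slice lists):

```python
def tape_mergesort(data):
    """Bottom-up merge sort mimicking the four-tape procedure."""
    # Distribute the records onto the two input "tapes" a and b.
    a, b = data[0::2], data[1::2]
    width = 1                    # current run length
    while True:
        c, d = [], []            # the two output "tapes"
        out = (c, d)
        turn = 0                 # which output tape gets the next run
        i = j = 0
        while i < len(a) or j < len(b):
            run_a = a[i:i + width]
            run_b = b[j:j + width]
            i += len(run_a)
            j += len(run_b)
            # Merge one run from each input tape onto an output tape.
            merged = []
            x = y = 0
            while x < len(run_a) or y < len(run_b):
                if y == len(run_b) or (x < len(run_a)
                                       and run_a[x] <= run_b[y]):
                    merged.append(run_a[x])
                    x += 1
                else:
                    merged.append(run_b[y])
                    y += 1
            out[turn].extend(merged)
            turn ^= 1            # alternate output tapes
        if not d:                # one run holds all the data: done
            return c
        a, b = c, d              # "rewind": outputs become next inputs
        width *= 2
```

Each iteration of the outer loop corresponds to one full pass over both tapes, and the run length doubles per pass, so the number of passes is logarithmic in the number of records.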
On tape drives that can read both backwards and forwards, you can run merge passes in both directions, avoiding any rewinding time.
This might seem to be of historical interest only, but on modern computers locality of reference is of paramount importance in software optimization, because we have deep memory hierarchies. This might change if fast memory becomes very cheap again, or if exotic architectures like the Tera MTA become commonplace.