Conceptually, merge sort works as follows:
If the list to be sorted is longer than one item:

1. Divide the unsorted list into two sublists of about half the size.
2. Sort each sublist recursively by re-applying merge sort.
3. Merge the two sorted sublists back into one sorted list.

A list of zero or one items is already sorted and is returned unchanged.
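For concreteness, here is a minimal Python sketch of that scheme (merge_sort and merge are our own illustrative names, not a standard library API):

    def merge_sort(items):
        """Sort a list by splitting it, sorting each half, and merging."""
        if len(items) <= 1:                # base case: already sorted
            return items
        mid = len(items) // 2
        left = merge_sort(items[:mid])     # recursively sort each half
        right = merge_sort(items[mid:])
        return merge(left, right)

    def merge(left, right):
        """Merge two sorted lists into one sorted list."""
        result, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:        # <= takes ties from the left first
                result.append(left[i])
                i += 1
            else:
                result.append(right[j])
                j += 1
        return result + left[i:] + right[j:]   # append whichever half remains

For example, merge_sort([38, 27, 43, 3]) returns [3, 27, 38, 43].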
Merge sort has an [average]? and [worst-case performance]? of O(n log(n)), so it often needs to make fewer comparisons than quicksort, whose worst case is O(n^2). However, merge sort's overhead is slightly higher than quicksort's, and, depending on the data structure to be sorted, it may take more memory (though this is becoming less and less of a consideration). Merge sort is also much more efficient than quicksort when the data can only be accessed efficiently in sequential order, which makes it popular in languages such as LISP, where sequentially accessed data structures are very common. Finally, merge sort is a stable sort: records with equal keys keep their original relative order.
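Stability follows from the merge step: when two keys compare equal, the record from the left (earlier) sublist is taken first, so equal keys are never reordered. A small sketch of this, assuming records are (grade, name) pairs sorted by grade only (merge_sort_by is our own illustrative key-comparing variant):

    def merge_sort_by(items, key):
        """Merge sort comparing items only by key(item)."""
        if len(items) <= 1:
            return items
        mid = len(items) // 2
        left = merge_sort_by(items[:mid], key)
        right = merge_sort_by(items[mid:], key)
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if key(left[i]) <= key(right[j]):   # <= preserves order on ties
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        return merged + left[i:] + right[j:]

    records = [(2, "Ada"), (1, "Bob"), (2, "Cyd"), (1, "Dan")]
    print(merge_sort_by(records, key=lambda r: r[0]))
    # [(1, 'Bob'), (1, 'Dan'), (2, 'Ada'), (2, 'Cyd')]
    # Bob stays before Dan, and Ada before Cyd: ties are never swapped.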
Merge sort is so inherently sequential that it is practical to run it on tapes if you have four tape drives. It works as follows:

1. Divide the data to be sorted in half and put half on each of two tapes.
2. Merge individual pairs of records from the two tapes; write the sorted two-record chunks alternately to the two output tapes.
3. Merge the two-record chunks into sorted four-record chunks; write these alternately to the original two tapes.
4. Repeat, doubling the chunk size on each pass, until one chunk contains all the data, sorted. This takes about log2(n) passes over the data.
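Below is a rough Python simulation of that procedure, with deques standing in for tapes that are only ever read and written front-to-back (tape_merge_sort and its helpers are illustrative names, not an established API):

    from collections import deque

    def tape_merge_sort(data):
        """Bottom-up merge sort structured like the four-tape procedure:
        each pass reads two 'tapes' sequentially and writes two others."""
        mid = (len(data) + 1) // 2
        tapes = [deque(data[:mid]), deque(data[mid:])]  # step 1: split in half
        run = 1                                         # current chunk length
        while run < len(data):
            out = [deque(), deque()]
            which = 0
            while tapes[0] or tapes[1]:
                # Merge one chunk from each input tape into a doubled chunk.
                chunk = merge_runs(read_run(tapes[0], run),
                                   read_run(tapes[1], run))
                out[which].extend(chunk)
                which = 1 - which                       # alternate output tapes
            tapes = out
            run *= 2                                    # about log2(n) passes
        return list(tapes[0]) + list(tapes[1])

    def read_run(tape, k):
        """Read up to k records sequentially from the front of a tape."""
        return [tape.popleft() for _ in range(min(k, len(tape)))]

    def merge_runs(a, b):
        """Two-way merge of two sorted runs."""
        out, i, j = [], 0, 0
        while i < len(a) and j < len(b):
            if a[i] <= b[j]:
                out.append(a[i])
                i += 1
            else:
                out.append(b[j])
                j += 1
        return out + a[i:] + b[j:]

Every pass reads both input tapes strictly in order and writes both output tapes strictly in order, which is exactly the access pattern tape drives handle well.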
On tape drives that can read and write both backwards and forwards, you can run merge passes in both directions, avoiding any rewind time.
This might seem to be of historical interest only, but on modern computers [locality of reference]? is of paramount importance in software optimization, because we have deep [memory hierarchies]?. That could change if fast memory becomes very cheap again, or if exotic architectures like the [Tera MTA]? become commonplace.