Lossless data compression

Lossless data compression is a type of data compression algorithm structured in such a way that the original data may be reconstructed exactly from the compressed data. One of the most widely used algorithms is Huffman coding. Lossless data compression is used in software compression tools such as the highly popular ZIP format, used by PKZIP and WinZip, and the Unix programs gzip and compress. Lossless compression is used when every byte of the data is important, as with executable programs and source code. Some image file formats, notably PNG, use only lossless compression, while others, like TIFF, may use either lossless or lossy methods. GIF uses a technically lossless compression method, but because it cannot represent full color, images must first be quantized (often with dithering) to a small number of colors (a very lossy process) before being encoded as GIF.
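
Huffman coding works by giving shorter bit patterns to more frequent symbols. As a rough illustration of the idea (a minimal sketch in Python, not the implementation used by any of the tools named above; the function name huffman_codes and the sample string are invented for this example), the following builds a prefix-free code table from byte frequencies:

    import heapq
    from collections import Counter

    def huffman_codes(data):
        """Build a prefix-free bit-string code for each byte value in data."""
        freq = Counter(data)
        if len(freq) == 1:
            # Degenerate case: one distinct symbol still needs a 1-bit code.
            return {next(iter(freq)): "0"}
        # Heap entries are (frequency, tie_breaker, tree); a tree is either
        # a leaf byte value or a (left, right) pair of subtrees.
        heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
        heapq.heapify(heap)
        counter = len(heap)
        while len(heap) > 1:
            # Repeatedly merge the two least frequent trees.
            f1, _, left = heapq.heappop(heap)
            f2, _, right = heapq.heappop(heap)
            heapq.heappush(heap, (f1 + f2, counter, (left, right)))
            counter += 1
        # Walk the finished tree: left edges append "0", right edges "1".
        codes, stack = {}, [(heap[0][2], "")]
        while stack:
            node, prefix = stack.pop()
            if isinstance(node, tuple):
                stack.append((node[0], prefix + "0"))
                stack.append((node[1], prefix + "1"))
            else:
                codes[node] = prefix
        return codes

    data = b"this is an example of a huffman tree"
    codes = huffman_codes(data)
    bits = "".join(codes[b] for b in data)
    print(len(data) * 8, "bits raw ->", len(bits), "bits encoded")

Because every code is a prefix of no other code, the bit stream can be decoded unambiguously, which is what makes the scheme lossless.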

Lossless data compression does not always work

Lossless data compression algorithms cannot guarantee to compress (that is, make smaller) all input data sets. In other words, for any lossless data compression algorithm there will be an input data set that does not get smaller when processed by the algorithm. This is easily proven with elementary mathematics, using a counting argument as follows:

Assume that each file is represented as a string of bits, and suppose there were a lossless algorithm that turned every file into a distinct, strictly shorter file. Consider the set of all non-empty files of length at most N bits: it has 2^(N+1) - 2 members. Each of these would have to compress to a distinct file of length at most N - 1 bits, and there are only 2^N - 1 such files (counting the empty file). Since 2^(N+1) - 2 is greater than 2^N - 1 for every N >= 1, by the pigeonhole principle two different input files would have to compress to the same output file, and the original data could not be reconstructed from it.

Notice that the difference in size is so marked that it makes no difference if we simply consider files of length exactly N as the input set: it is still larger (2^N members) than the desired output set (2^N - 1 members).

If we restrict the files to be a multiple of 8 bits long (as in standard computer files), there are even fewer files in the output set, and the argument still holds.
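
This is easy to observe in practice. The following sketch uses Python's standard zlib module (the DEFLATE algorithm underlying gzip and ZIP) on two inputs of roughly equal size; the random input typically comes out a few bytes larger than it went in, exactly the behaviour the counting argument guarantees for some input:

    import os
    import zlib

    random_data = os.urandom(65536)     # no redundancy for the compressor to exploit
    repetitive = b"lossless " * 7282    # about the same size, but highly redundant

    for label, data in (("random", random_data), ("repetitive", repetitive)):
        out = zlib.compress(data, 9)    # level 9: maximum compression effort
        print(label, len(data), "->", len(out), "bytes")

Practical compressors handle such inputs by storing them essentially unchanged, at the cost of a small amount of framing overhead, which is why the "compressed" random data grows slightly rather than shrinking.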


See also: Lossy data compression, David A. Huffman
