Cantor's Diagonal Argument


A logical argument devised by Georg Cantor to demonstrate that the real numbers are not countably infinite. (It is also called the diagonalization argument or the diagonal slash argument.) It does this by showing that even the interval (0,1), i.e., the real numbers larger than 0 and smaller than 1, is not countably infinite. The argument is by contradiction: assume that the real numbers in (0,1) are countably infinite. Then they can be written out as a sequence r1, r2, r3, ..., each given by its decimal expansion, for example:

       r1 = 0 . [0] 1  0  5  1  1  0  ...
       r2 = 0 .  4 [1] 3  2  0  4  3  ...
       r3 = 0 .  8  2 [4] 5  0  2  6  ...
       r4 = 0 .  2  3  3 [0] 1  2  6  ...
       r5 = 0 .  4  1  0  7 [2] 4  6  ...
       r6 = 0 .  9  9  3  7  8 [3] 8  ...
       r7 = 0 .  0  1  0  5  1  3 [0] ...
       ...

The digits we will consider are the diagonal digits, i.e., the nth digit of rn, marked in brackets above. From these digits we define the digits of x as follows: the nth digit of x is 1 if the nth digit of rn is 0, and 0 otherwise. For the example above this results in the following decimal expansion.

        x = 0 . 1 0 0 1 0 0 1 ...

The number x is clearly a real number between 0 and 1. However, by construction x differs from every rn in the nth decimal place, so x does not appear anywhere in the sequence. This contradicts the assumption that r1, r2, r3, ... lists all real numbers in (0,1); hence the interval (0,1) is not countably infinite.
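The diagonal construction can be sketched in a few lines of Python. The listing below uses the first seven digits of the example sequence r1, ..., r7, and the rule described above (the nth digit of x is 1 where the nth digit of rn is 0, and 0 otherwise):

```python
# Example listing: the first seven digits of r1..r7 from the table above.
rows = [
    "0105110",
    "4132043",
    "8245026",
    "2330126",
    "4107246",
    "9937838",
    "0105130",
]

def diagonal_digits(rows):
    # nth digit of x: 1 if the nth digit of rn is 0, otherwise 0.
    return "".join("1" if row[i] == "0" else "0" for i, row in enumerate(rows))

x = diagonal_digits(rows)
print("x = 0." + x + "...")  # x = 0.1001001...

# By construction, x differs from every listed number in its nth decimal place.
for i, row in enumerate(rows):
    assert x[i] != row[i]
```

The assertions at the end check the key property: x cannot equal any rn, because the two numbers disagree in the nth decimal place.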

Note: Strictly speaking, this argument only shows that the number of decimal expansions of real numbers between 0 and 1 is not countably infinite. But since there are expansions such as 0.01999... and 0.02000... that represent the same real number, this does not immediately imply that the corresponding set of real numbers is also not countably infinite. This can be remedied by disallowing the decimal expansions that end with an infinite series of 9's. In that case it can be shown that for every real number there is a unique corresponding decimal expansion. It is easy to see that the proof then still works because the number x contains only 1's and 0's in its decimal expansion.
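The claim that two different expansions can denote the same real number can be checked with exact rational arithmetic. The following sketch uses Python's fractions module to sum the expansion 0.01999... as a geometric series and compare it with 0.02:

```python
from fractions import Fraction

# 0.01999... = 1/100 + 9/1000 + 9/10000 + ...
#            = 1/100 + (9/1000) / (1 - 1/10)   (geometric series)
tail = Fraction(9, 1000) / (1 - Fraction(1, 10))
x = Fraction(1, 100) + tail

# 0.02000... is simply 2/100.
y = Fraction(2, 100)

assert x == y  # the two expansions denote the same real number
```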

The diagonal argument is an example of a reductio ad absurdum: it proves a proposition (the interval (0,1) is not countably infinite) by showing that the assumption of its negation leads to a contradiction.

A generalized form of the diagonal argument was used by Cantor to show that for every set S the power set of S, i.e., the set of all subsets of S (here written as P(S)), is larger than S itself. The proof shows that no function f from S to P(S) can be surjective. Given such an f, consider the set

T = { s in S | s is not in f(s) }

For every s in S there are two cases: if s is in f(s), then by definition s is not in T, so f(s) is not equal to T; if s is not in f(s), then s is in T, and again f(s) is not equal to T. Since in both cases f(s) is not equal to T, it follows that T, although a member of P(S), is not in the image of f. Hence no function maps S onto P(S), and since the map sending s to {s} embeds S into P(S), the power set P(S) is strictly larger than S.
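For a small finite set the generalized argument can be verified exhaustively. The following Python sketch (the three-element set S is purely illustrative) enumerates every function f from S to P(S) and checks that the diagonal set T is never a value of f:

```python
from itertools import combinations, product

def power_set(S):
    # All subsets of S, represented as frozensets.
    return [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

S = [0, 1, 2]
P = power_set(S)          # 2**3 = 8 subsets

checked = 0
for images in product(P, repeat=len(S)):
    f = dict(zip(S, images))                       # one function f: S -> P(S)
    T = frozenset(s for s in S if s not in f[s])   # the diagonal set
    assert T not in images                         # T is never in the image of f
    checked += 1

assert checked == len(P) ** len(S)                 # all 8**3 = 512 functions checked
```

Of course this only confirms the theorem for one finite set; the point of Cantor's proof is that the same two-case argument works for every set, finite or infinite.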

Note the similarity between the construction of T and the set in Russell's paradox. This result can be used to show that the notion of the set of all sets is an inconsistent notion in normal set theory: if S were the set of all sets, then P(S) would at the same time be bigger than S and a subset of S.



Last edited October 6, 2001 5:48 am by AxelBoldt