Computation


The theory of computation, a subfield of computer science and mathematics, is the study of mathematical models of computing, independent of any particular computer hardware. It has its origins early in the twentieth century, before modern electronic computers had been invented. At that time, mathematicians were trying to design computing machines that would automate the process of computation, in much the same way that the machines of the Industrial Revolution had automated many agricultural and manufacturing processes. An essential step in automating computation is deciding precisely what a computation consists of: that is, what kind of memory is available (e.g., a single string, a fixed number of registers storing numbers, or an infinitely long tape storing characters), and what kinds of computational steps can be performed on the data in that memory.

Several different computational models were devised by these early researchers. One model, the Turing machine, stores characters on an infinitely long tape, with one square at any given time being scanned by a read/write head. Another model, [recursive functions]?, uses functions and function composition to operate on numbers. The lambda calculus uses a similar approach. Still others, including [Markov algorithms]? and [Post systems]?, use grammar-like rules to operate on strings. All of these formalisms were shown to be equivalent in computational power -- that is, any computation that can be performed with one can be performed with any of the others. They are also equivalent in power to the familiar electronic computer, if one pretends that electronic computers have infinite memory. Indeed, it is widely believed that all "proper" formalizations of the concept of algorithm will be equivalent in power to Turing machines; this is known as the Church-Turing thesis.
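
To make the Turing machine model concrete, the following sketch (in Python, which is not part of the original article) simulates a machine from a transition table; the table format, state names, and example machine are illustrative assumptions rather than a standard definition.

    # Minimal Turing machine simulator: the tape is a dictionary from
    # positions to characters, so it can grow without bound in either
    # direction; missing cells are treated as blanks (' ').
    def run_turing_machine(rules, tape_input, start_state, accept_state):
        """rules maps (state, symbol) -> (new_state, new_symbol, move),
        where move is -1 (left) or +1 (right)."""
        tape = {i: ch for i, ch in enumerate(tape_input)}
        state, head = start_state, 0
        while state != accept_state:
            symbol = tape.get(head, ' ')
            state, new_symbol, move = rules[(state, symbol)]
            tape[head] = new_symbol
            head += move
        return ''.join(tape.get(i, ' ')
                       for i in range(min(tape), max(tape) + 1)).strip()

    # Example machine: flip every bit of a binary string, then halt.
    flip_rules = {
        ('scan', '0'): ('scan', '1', +1),
        ('scan', '1'): ('scan', '0', +1),
        ('scan', ' '): ('done', ' ', +1),
    }
    print(run_turing_machine(flip_rules, '0110', 'scan', 'done'))  # prints 1001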

The theory of computation studies these models of general computation, along with the limits of computing: Which problems are (provably) unsolvable by a computer? (See the halting problem.) Which problems are solvable by a computer, but require so much time to compute that the solution is impractical? (See Presburger arithmetic.) Can nondeterminism speed up computation significantly? (See the complexity classes P and NP.) In general, questions concerning the time or space requirements of given problems are investigated in complexity theory.
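
The unsolvability of the halting problem rests on a classic diagonal argument, sketched below in Python. The sketch assumes, purely for the sake of contradiction, a hypothetical function halts(f, x) that always decides whether f(x) eventually halts; no such function exists, which is exactly what the contradiction shows.

    def halts(f, x):
        """Hypothetical perfect halting decider (cannot actually be written)."""
        raise NotImplementedError

    def paradox(f):
        # Do the opposite of whatever halts predicts for f applied to itself.
        if halts(f, f):
            while True:       # predicted to halt, so loop forever
                pass
        return "halted"       # predicted to loop, so halt immediately

    # Whatever answer halts(paradox, paradox) gave, paradox(paradox) would
    # do the opposite, so a correct halts can never exist.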

In addition to these general computational models, some simpler computational models are useful for special, restricted applications. Regular expressions, for example, are used to specify string patterns in UNIX and in some programming languages such as Perl. Finite automata, a formalism mathematically equivalent to regular expressions, are used in circuit design and in some kinds of problem-solving. Context-free grammars are used to specify programming language syntax. [Push down automata]? are another formalism equivalent to context-free grammars. Primitive recursive functions are a naturally defined subclass of the recursive functions.
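
As a small illustration of the equivalence between regular expressions and finite automata, the sketch below (in Python; the pattern, state names, and table layout are illustrative choices, not drawn from the article) checks the same language two ways: binary strings containing an even number of 1s.

    import re

    pattern = re.compile(r'^(0*10*1)*0*$')    # even number of 1s

    dfa = {                                   # state -> {symbol: next state}
        'even': {'0': 'even', '1': 'odd'},
        'odd':  {'0': 'odd',  '1': 'even'},
    }

    def dfa_accepts(string, start='even', accepting=('even',)):
        state = start
        for symbol in string:
            state = dfa[state][symbol]
        return state in accepting

    # The regular expression and the finite automaton agree on every input.
    for s in ['', '0110', '10', '111']:
        assert bool(pattern.match(s)) == dfa_accepts(s)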

Different models of computation differ in the tasks they can perform. One way to measure the power of a computational model is to study the class of formal languages that the model can generate; this leads to the Chomsky hierarchy of languages.
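
One standard example of such a difference in power, sketched below in Python under illustrative assumptions, is the language of strings of the form a^n b^n: the context-free grammar S -> a S b | empty generates it, but no (classical) regular expression or finite automaton can recognize it, because doing so requires counting without a fixed bound.

    def generate(n):
        """A string derived by the context-free rule S -> 'a' S 'b' | empty."""
        return 'a' * n + 'b' * n

    def accepts(string):
        """Recognize a^n b^n by recursion that mirrors the grammar rule."""
        if string == '':
            return True
        return (string.startswith('a') and string.endswith('b')
                and accepts(string[1:-1]))

    assert accepts(generate(3))      # 'aaabbb' is in the language
    assert not accepts('aabbb')      # unbalanced strings are rejected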

This article is based on an [article by Nancy Tinkham] originally posted on Nupedia, and is open content.
