Three Laws Of Robotics

A set of three laws devised by Isaac Asimov, which the robots appearing in his novels must obey:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

A later 'Zeroth Law' was added by R. Daneel Olivaw in [Robots and Empire]?, reading:

0. A robot may not injure humanity, or, through inaction, allow humanity to come to harm.
A condition stating that the Zeroth Law must not be broken was added to each of the original Laws.

The Three Laws are often used in Science Fiction novels written by other authors, but tradition dictated that only Dr. Asimov would ever quote the Laws; other authors could merely refer to them.

These laws are seen by some working in Artificial Intelligence as a future ideal: once an intelligence has reached the stage where it can comprehend these laws, it is truly intelligent. See Turing Test.


Source: Isaac Asimov's Foundation's Edge

Edited August 1, 2001 2:14 pm by Phoenix