Three Laws Of Robotics


A set of three laws formulated by Isaac Asimov, which the robots appearing in his novels must obey:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

A later 'Zeroth Law' was added by [R. Daneel Olivaw]? in [Robots and Empire]?, reading:

0. A robot may not injure humanity, or, through inaction, allow humanity to come to harm.
Each of the original Laws was then qualified with the condition that it must not conflict with the Zeroth Law.
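
As a rough illustration of how the hierarchy works, the sketch below models each Law as a predicate over a candidate action and ranks actions so that violating a higher-priority Law is always worse than violating any lower one. This is a hypothetical toy model, not anything drawn from Asimov's stories; every function name and action field is invented for the example.

  # Toy model of the Law hierarchy (hypothetical throughout).

  def violates_zeroth(action):
      return action.get("harms_humanity", False)

  def violates_first(action):
      # "Through inaction" counts: allowing harm also violates the First Law.
      return action.get("harms_human", False) or action.get("allows_human_harm", False)

  def violates_second(action):
      return action.get("disobeys_order", False)

  def violates_third(action):
      return action.get("endangers_self", False)

  # Priority order: Zeroth overrides First overrides Second overrides Third.
  LAWS = [violates_zeroth, violates_first, violates_second, violates_third]

  def severity(action):
      """Rank of the highest-priority Law the action violates
      (0 = Zeroth ... 3 = Third), or 4 if it violates none."""
      for rank, law in enumerate(LAWS):
          if law(action):
              return rank
      return len(LAWS)

  def choose(candidates):
      """Pick the candidate whose worst violation is least severe."""
      return max(candidates, key=severity)

  # A robot facing a burning building: standing by allows a human to come
  # to harm (First Law, by inaction); entering endangers only the robot
  # itself (Third Law). The hierarchy forces it to enter.
  stand_by = {"name": "stand by", "allows_human_harm": True}
  enter = {"name": "enter", "endangers_self": True}
  print(choose([stand_by, enter])["name"])  # -> "enter"

Note that the ranking, not the individual predicates, carries the meaning: the robot prefers to break the Third Law (entering the fire) rather than the First (standing by while a human comes to harm).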

The Three Laws are often used in science fiction novels written by other authors, but tradition dictates that only Dr. Asimov would ever quote the Laws; other authors merely refer to them.

Some amateur roboticists have evidently believed that the Laws have a status akin to the laws of physics, so that a situation violating them is inherently impossible. This is incorrect: the Laws hold only because they are quite deliberately hardwired into the positronic brains of Asimov's robots. The robots in Asimov's stories are incapable of violating the Three Laws, but there is nothing to stop any robot in other stories, or in the real world, from being constructed without them. Indeed, the problems of perception and rational analysis involved make it seem likely that only an extremely advanced artificial intelligence or robot could apply the Laws in real-world situations.

The Three Laws are seen as a future ideal by those working in artificial intelligence: once an intelligence has reached the stage where it can comprehend these laws, it is truly intelligent. See Turing Test.


Source: Isaac Asimov, Foundation's Edge
