History of Turing Test

HomePage | Recent Changes | Preferences

Revision 23 . . December 16, 2001 7:14 am by Taw [s/primitive/simple/, simple is less loaded word]
Revision 22 . . (edit) December 11, 2001 6:54 am by DavidSaff
Revision 21 . . December 11, 2001 6:51 am by (logged).132.75.xxx [copyedit]
Revision 20 . . December 10, 2001 1:35 pm by (logged).132.75.xxx [mention AOLiza and why it doesn't count as passing the Turing Test]
Revision 19 . . (edit) December 3, 2001 5:20 pm by Saghmos [Added some 'see also' links.]
Revision 18 . . (edit) August 27, 2001 4:08 am by LC
  

Difference (from prior major revision) (no other diffs)

Changed: 16c16
So far, no computer has passed the Turing test as such. Primitive conversational programs such as ELIZA have fooled people into believing they are talking to another human being, as in an informal experiment termed AOLiza?. However, such "successes" are not the same as passing a Turing test. Most obviously, the human party in the conversation has no reason to suspect they are talking to anything other than a human, whereas in a real Turing test the questioner is actively trying to determine the nature of the entity they are chatting with. Documented cases are usually in environments such as [Internet Relay Chat]? where conversation is highly stilted and meaningless comments showing no understanding of the conversation are common. Additionally, many relay chat participants have English as a second or third language, making it even more likely that they will assume a stupid comment by the conversational program is simply something they have misunderstood; they are also probably unfamiliar with the technology of "chat bots" and do not recognize the distinctly non-human errors such programs make. See ELIZA effect.
So far, no computer has passed the Turing test as such. Simple conversational programs such as ELIZA have fooled people into believing they are talking to another human being, as in an informal experiment termed AOLiza?. However, such "successes" are not the same as passing a Turing test. Most obviously, the human party in the conversation has no reason to suspect they are talking to anything other than a human, whereas in a real Turing test the questioner is actively trying to determine the nature of the entity they are chatting with. Documented cases are usually in environments such as [Internet Relay Chat]? where conversation is highly stilted and meaningless comments showing no understanding of the conversation are common. Additionally, many relay chat participants have English as a second or third language, making it even more likely that they will assume a stupid comment by the conversational program is simply something they have misunderstood; they are also probably unfamiliar with the technology of "chat bots" and do not recognize the distinctly non-human errors such programs make. See ELIZA effect.
