
Turing test

The Turing test, developed by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural-language conversations between a human and a machine designed to generate human-like responses. The evaluator would know that one of the two conversation partners is a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so that the result would not depend on the machine's ability to render words as speech. If the evaluator cannot reliably tell the machine from the human, the machine is said to have passed the test (a schematic sketch of this protocol is given below). The result does not depend on the machine's ability to give correct answers to questions, only on how closely its answers resemble those a human would give.

The test was introduced by Turing in his 1950 paper 'Computing Machinery and Intelligence', written while he was working at the University of Manchester (Turing 1950, p. 460). The paper opens with the words: 'I propose to consider the question, "Can machines think?"' Because 'thinking' is difficult to define, Turing chooses to 'replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.' He describes the new form of the problem in terms of a three-person game called the 'imitation game', in which an interrogator asks questions of a man and a woman in another room in order to determine the correct sex of the two players. Turing's new question is: 'Are there imaginable digital computers which would do well in the imitation game?' This question, Turing believed, is one that can actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that 'machines can think'.

Since Turing first introduced his test, it has proven to be both highly influential and widely criticised, and it has become an important concept in the philosophy of artificial intelligence. Some of these criticisms, such as John Searle's Chinese room argument, are controversial in their own right.

The question of whether it is possible for machines to think has a long history, firmly entrenched in the distinction between dualist and materialist views of the mind. René Descartes prefigures aspects of the Turing test in his 1637 Discourse on the Method, where he notes that automata are capable of responding to human interactions but argues that they cannot respond appropriately to things said in their presence in the way that any human can. Descartes therefore prefigures the Turing test by defining the insufficiency of appropriate linguistic response as that which separates the human from the automaton. He fails to consider the possibility that future automata might overcome this insufficiency, and so does not propose the Turing test as such, even though he prefigures its conceptual framework and criterion. In his Pensées philosophiques, Denis Diderot formulates a Turing-test-like criterion: 'If they find a parrot who could answer to everything, I would claim it to be an intelligent being without hesitation.'
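The text-only protocol described above, in which an interrogator must decide which of two hidden respondents is the machine, can be made concrete as a small harness. The following Python sketch is illustrative only and is not taken from Turing's paper: the Interrogator interface, the respondent callables, and the use of a near-chance identification rate as the pass criterion are assumptions introduced here for the sake of the example.

import random

def run_imitation_game(interrogator, respondent_a, respondent_b, num_questions=5):
    # One round of a text-only imitation game. respondent_a and respondent_b
    # are callables mapping a question string to an answer string; one is the
    # machine and one is the human, and the interrogator is not told which.
    transcript = []
    for _ in range(num_questions):
        question = interrogator.ask(transcript)
        transcript.append({
            "question": question,
            "answer_a": respondent_a(question),
            "answer_b": respondent_b(question),
        })
    # The interrogator returns "A" or "B" for the respondent it believes is the machine.
    return interrogator.identify_machine(transcript)

def estimate_identification_rate(machine, human, interrogator, trials=200):
    # Estimate how often the interrogator correctly identifies the machine.
    # A rate near 0.5 (chance) means that, in this toy operationalisation,
    # the machine is indistinguishable from the human over text alone.
    correct = 0
    for _ in range(trials):
        # Randomise which label (A or B) the machine receives in each round.
        if random.random() < 0.5:
            correct += run_imitation_game(interrogator, machine, human) == "A"
        else:
            correct += run_imitation_game(interrogator, human, machine) == "B"
    return correct / trials

class RandomGuessInterrogator:
    # A placeholder interrogator that asks a fixed question and guesses at
    # random; in the test as Turing describes it, this role is played by a
    # human judge reading the transcript.
    def ask(self, transcript):
        return "What is your favourite memory of childhood?"
    def identify_machine(self, transcript):
        return random.choice(["A", "B"])

With the placeholder interrogator the estimated rate hovers around 0.5 by construction; the point of the sketch is only to show where the human judge, the candidate machine, and the text-only channel would plug in.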

[ "Algorithm", "Epistemology", "Artificial intelligence", "Cognitive science" ]
Parent Topic
Child Topic
    No Parent Topic