In the Chinese room thought experiment, a person who understands no Chinese sits in a room into which written Chinese characters are passed. The person uses a complex set of rules (established ahead of time) to manipulate these characters and pass other characters out of the room. The idea is that a Chinese-speaking interviewer would pass questions written in Chinese into the room, and the corresponding answers would come out of the room, appearing from the outside as if there were a native Chinese speaker inside.
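The purely syntactic rule-following Searle describes can be illustrated with a toy sketch. The rule table and phrases below are hypothetical stand-ins for Searle's "complex set of rules", chosen only to show that the operator matches symbol shapes without grasping their meaning:

```python
# Toy illustration of the Chinese room: the "operator" maps input
# symbols to output symbols by lookup alone, with no understanding.
# The rule book is a hypothetical stand-in for Searle's complex rules.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I am fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def operator(symbols: str) -> str:
    """Pass symbols out of the room by mechanically applying the rules."""
    # The operator compares shapes, not meanings; for unmatched input
    # the rule book specifies a stock deflection.
    return RULE_BOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(operator("你好吗？"))  # -> 我很好，谢谢。
```

However convincing the replies, nothing in `operator` represents the meaning of any question or answer, which is the point of the thought experiment.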
Searle's belief is that even if such a system could pass a Turing test, the person manipulating the symbols would obviously understand Chinese no better than he did before entering the room. Searle proceeds in the article to refute the claims of strong AI systematically, one at a time, by positioning himself as the one who manipulates the Chinese symbols.
The first claim is that a system which can pass the Turing test understands its input and output. Searle replies that as the "computer" in the Chinese room, he gains no understanding of Chinese by simply manipulating the symbols according to the formal program, in this case the complex rules. The operator of the room need not have any understanding of what the interviewer is asking or of the replies he is producing. He may not even know that a question-and-answer session is going on outside the room.
The second claim of strong AI to which Searle objects is that the system explains human understanding. Searle asserts that since the system functions (in this case, passes the Turing test) while the operator understands nothing, the system does not understand and therefore cannot explain human understanding.
See also: Artificial intelligence