John Searle's "Chinese room" argument targets "strong artificial intelligence," also called "computer functionalism": the claim that a computer running the right software can thereby become conscious. The thought experiment imagines a human who is handed a rule book for answering questions posed in Chinese. Following the rules, he shuffles Chinese characters around exactly as the book directs. The rules are so thorough that he can pass a Turing test for fluency in Chinese. Yet nowhere in this process does the meaning of the Chinese words ever pass through his consciousness (assuming he doesn't already know Chinese). Similarly, an AI that can pass a Turing test in conversation with humans is only shuffling symbols according to rules, and there is little justification for claiming that it ever becomes conscious of the meaning behind the symbols it manipulates.
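To make the symbol-shuffling concrete, here is a minimal sketch in Python (the language, the `RULE_BOOK` table, and its example entries are all illustrative assumptions of mine, not part of Searle's argument). The program produces answers by pure pattern lookup: it matches character strings to character strings without any representation of what those characters mean.

```python
# A toy "Chinese room": answers come from rule lookup alone.
# These entries are made up for illustration; a real rule book
# would be vastly larger, but the principle is the same.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我叫小明。",      # "What is your name?" -> "My name is Xiaoming."
}

def room(question: str) -> str:
    """Return whatever answer the rules dictate, or a stock fallback.

    Nothing here stores or computes the *meaning* of the symbols;
    the function only maps one character string to another.
    """
    return RULE_BOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

if __name__ == "__main__":
    print(room("你好吗？"))  # prints: 我很好，谢谢。
```

A dictionary lookup is the simplest possible "rule book," but on Searle's view making the rules more elaborate only adds fluency, not understanding: syntax alone is never sufficient for semantics.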