Thursday, September 13, 2012


An Evaluation of the Chinese Room
A psychologist’s attempt to explain computers

In this article (“Minds, Brains, and Programs”), John R. Searle of Berkeley’s Department of Philosophy attempts to dispel the idea that a computer could ever “think”.  That is, he is arguing against the notion of strong AI, which I believe would do more for psychology than 100 years of human study (incidentally, it would probably require 100 years of human study to create a strong AI). 

The thought experiment used to refute strong AI is this:  You are locked in a room with a large batch of Chinese writing, which you cannot understand at all.  You are then given a second batch of symbols, along with instructions (in English) on how the two batches correlate.  A third batch is then given to you, along with instructions to give back symbols from the first two batches based on the symbols in the third batch.  Now say that you get so good at doing this that you can reply to any combination of symbols with the proper Chinese characters, so that no Chinese speaker in another room could tell, just by asking you questions, that you don’t speak a word of Chinese (a version of the Turing test).  But the fact remains that you do not understand Chinese at all.
Searle tries to prove his point by oversimplifying it to the point that anyone can see he is correct within his example.  He is practically describing an encryptor that converts Chinese characters into other Chinese characters, rather than English characters into other English characters (which would obviously read as gibberish to us).  No one ever argued that an encryptor understands English, yet that is essentially the claim he is arguing against. 
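To make the complaint concrete, here is a minimal Python sketch of the procedure Searle describes: a lookup table that shuffles symbols it never interprets.  The rulebook entries are toy examples of my own, not anything taken from his paper.

```python
# A toy version of the room: incoming symbol strings are matched against a
# rulebook, and the prescribed outgoing symbols are handed back. Nothing in
# this code represents the *meaning* of any symbol.

# Hypothetical rulebook: "when you see these squiggles, return those squiggles."
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(incoming):
    """Return whatever the rulebook dictates for the incoming symbols."""
    # The fallback reply means "Please say that again." -- still just a rule.
    return RULEBOOK.get(incoming, "请再说一遍。")

print(chinese_room("你好吗？"))   # fluent-looking output, zero comprehension
```

Whether the table has two entries or two billion, the operator (or the interpreter) is only matching shapes, which is exactly the intuition Searle is leaning on.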

He addresses the “Systems” reply, first by calling it embarrassing, but more irritating is the claim he makes at the end of this reply.  He quotes John McCarthy, who wrote in 1979 that machines as simple as thermostats can be said to have beliefs.  Now, I do not believe at all that a thermostat has a belief in the literal sense, but Searle uses the absurdity of the statement in his argument.  He actually says, “One gets the impression that people in AI who write this sort of thing think they can get away with it because they don’t really take it seriously, and they don't think anyone else will either. I propose for a moment at least, to take it seriously.”  Why??  It’s just poor debate practice.

My favorite argument is the brain simulator reply: “what if we write a program to simulate the synapses and neural firing of a human brain?”  This sounds legit to me, but Searle breaks it down into a man operating valves and pipes in such a way as to mimic neural firings, based on instructions he’s been given to output Chinese answers.  The pipes and the man still have no understanding of Chinese.  Now I just want to say that synapse firing in the brain is all chemistry and physics, which are the instructions of the universe (a toy sketch of what such a simulation looks like in code appears below).  But…that means Searle proves another valuable point:

Humans Can’t Understand
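To be fair to the brain simulator reply, here is roughly what “simulating neural firing” amounts to in code: a toy leaky integrate-and-fire neuron, just arithmetic standing in for the chemistry and physics mentioned above.  Every constant is an illustrative assumption of mine, not a value from Searle’s paper or from neuroscience.

```python
# Toy leaky integrate-and-fire neuron: the membrane voltage decays toward a
# resting value, input current pushes it up, and crossing a threshold counts
# as a "spike". All constants are made-up illustrative values.

REST, THRESHOLD, RESET = -65.0, -50.0, -70.0   # millivolts (toy values)
LEAK, DT = 0.1, 1.0                            # decay rate per step, step size (ms)

def step(voltage, input_current):
    """Advance the neuron one time step; return (new_voltage, spiked)."""
    voltage += DT * (LEAK * (REST - voltage) + input_current)
    if voltage >= THRESHOLD:
        return RESET, True    # fire and reset
    return voltage, False

v, spikes = REST, 0
for t in range(100):                           # 100 ms of constant drive
    v, fired = step(v, input_current=2.0)
    spikes += int(fired)
print(f"spikes in 100 ms: {spikes}")
```

Scale this up to billions of units and, in principle, you have the simulation the reply imagines; Searle’s pipes-and-valves rebuttal is that none of these variables understands anything, and my point is that the same could be said of the chemistry they stand in for.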


I’m pretty much out of space, but I still want to name the biggest issue with even arguing about AI at all: he never defines his terms.  At no point does he say what it even means to understand something, and he never says what qualifies as a belief.
In addition, about 60% of the way through the paper, he basically recants everything he said in the first 10 pages.  He acknowledges that an exact artificial replica of a human would be able to think, and that a program could think, because minds are programs, yet a program running inside a computer could not think.  He disagrees with himself, so I really can’t be swayed by his argument.
