John R. Searle’s Chinese Room thought-experiment is a perennial problem for the student of Artificial Intelligence (A.I.). Searle thought we were taking A.I. too far: whatever use the discipline has, it must not claim that an “…appropriately programmed computer literally has cognitive states” (Searle, 1992, p. 67). He does not see A.I. as completely useless in getting us closer to an understanding of the human mind, but his fundamental position is that even the most advanced computer program lacks an essential component possessed by a human brain; therefore no computer will ever be a mind, possessing the same cognitive states as humans. Starting with a brief exposition of Searle’s Chinese Room argument, I will concentrate on its main points, looking at where Searle thinks A.I. falls short of a complete explanation of the human mind. The flipside is that Searle also claims to know what it is about the human mind that distinguishes it from a computer, and he names it with the thorny term “intentionality.” I will then take a brief look at what connectionists have to say in response to the Chinese Room. Jack Copeland finds Searle’s answer to the Systems Reply inadequate: specifically, that it is logically invalid and therefore leaves the Systems objection undefeated, or at least insufficiently answered. Although one must respect the inventiveness of Searle’s attempt at dismantling the program of A.I., there is no adequate reason to believe that humans, the pink fleshy factories that they are, are privileged bearers of an unmatchable type of intelligence. Even if they are, the Other Minds objection keeps its hold on Searle’s Chinese Room argument; it is my belief that the uniqueness of the human mind is unverifiable.
“Whatever purely formal principles you put into the computer, they will not be sufficient for understanding, since a human will be able to follow the formal principles without understanding anything” (Searle, 1993, p. 71). Searle arrives at this first conclusion by way of the Chinese Room argument. It involves a monolingual English speaker in a room with only these materials: two separate sheets of paper with Chinese scrawled on each of them (it looks like a bunch of meaningless squiggles to the English speaker) and an instruction manual written in English; the manual gives conditional commands, telling the English speaker to choose a certain bit of Chinese from one sheet if the other sheet shows a certain bit of Chinese. The man’s responses make it seem as though he understands Chinese, because the rules he is following equip him to produce the appropriate outputs in response to the corresponding inputs. Searle says that a computer, or any formal symbol system that answers the questions presented to it by following the rules or algorithms it is outfitted with, appears to be intelligent because it gives the right answers, and the answers in the Chinese Room example are indistinguishable from those a native Chinese speaker would give. Searle believes we entangle ourselves by wrongly ascribing intentionality to artifacts in the world, such as thermostats, telephones and computers; Searle insists that we must start with the presupposition that humans have beliefs and these artifacts do not. For Searle’s claim that computers lack meaningful understanding to be true, they must be dealing with uninterpreted formal symbols. The argument’s intent is to elucidate something that computers cannot do: namely, understand in any meaningful way the symbols they routinely manipulate.
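The purely formal character of the procedure is easy to make concrete. What follows is a minimal sketch in Python of the kind of conditional lookup the man in the room performs; the rulebook entries and names are invented placeholders, not anything drawn from Searle’s paper, and the point is only that the program pairs shapes with shapes without ever assigning them meaning.

```python
# Toy version of the Chinese Room rulebook: conditional rules pair
# uninterpreted input symbols with uninterpreted output symbols.
# The strings are placeholders standing in for Chinese characters.

RULEBOOK = {
    "squiggle-squiggle": "squoggle-squoggle",
    "squoggle-squiggle": "squiggle-squiggle",
}

def room_response(input_symbols: str) -> str:
    """Return whatever the rulebook pairs with the incoming shapes.

    Nothing here interprets the symbols; the function only matches
    shapes to shapes, which is all Searle allows a formal program.
    """
    return RULEBOOK.get(input_symbols, "squiggle")  # fallback for unknown shapes

if __name__ == "__main__":
    print(room_response("squiggle-squiggle"))  # prints "squoggle-squoggle"
```

To a native speaker outside the room the output may look like an apt reply, but inside the function there is only string matching.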
Searle’s other major conclusion is that the specific substance from which human understanding arises, the mushy stuff in our skulls, is thus far the only material basis for the type of understanding Searle trumpets humans as possessing. Therefore silicon, or Searle’s mocking example of empty beer-cans, does not provide a material basis for human intentionality. Searle is taking a more constructive outlook here. In arguing for this point he uses photosynthesis as an analogy: photosynthesis is catalyzed by the causal properties of chlorophyll. As in the case of photosynthesis, Searle insists that something going on in the brain provides the causal properties needed for intentionality, or genuine understanding, to occur.
Andy Clark, Paul and Patricia Churchland and Margaret Boden take a line of thought that leaves open the possibility of a different and newer kind of formal system that might be sufficient for intentionality. Clark takes a closer look at Searle’s brain-simulator reply. In this case, a program is modeled after an actual Chinese brain, chip for neuron: a brain that undoubtedly understands Chinese. Searle’s response is, “…we could imagine an elaborate set of water pipes and valves, and a human switcher, realizing that formal description too. But wherein would the understanding of Chinese reside? Surely the answer is nowhere” (Searle, 1993, p. 78). With this example Searle aims to show that it is not the formal properties of the computer that matter, but the causal properties of the brain which allow it to produce intentional states. I mentioned Searle’s photosynthesis example before and how it illustrates his point that it is the brain’s peculiar substance that allows for human thought. Andy Clark asks, “So, what are the properties of the physical chemical stuff of the brain that buy us thought?” (Clark, 1993, p. 33). In response to his own question, Clark counts aspects of a brain’s or robot’s environment as the important stuff, not the mush in the head. In connectionist systems, the machines are equipped to acquire new data and ‘understanding’ from their environments, so new combinations of thought become available. Basically, they think more advanced arrangements of silicon, or of toilet paper for that matter, should be viewed with an open mind. But responding that more advanced A.I. is getting us closer to simulating the human brain merely evades the central question: it does not cure Searle’s main contention, which is a disease upon A.I., but merely extends its life.
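To make the connectionists’ contrast with the fixed rulebook a little more concrete, here is a small sketch of a single trainable unit, a bare perceptron written in Python; the training data and parameters are invented for illustration. Unlike the lookup table above, its input-output behaviour is not written down in advance but shaped by exposure to examples from its ‘environment’.

```python
# Sketch of a single connectionist unit: its behaviour is acquired from
# training examples rather than dictated by hand-written rules.
# Data, learning rate, and epoch count are illustrative choices only.

def train_unit(examples, epochs=20, lr=0.1):
    """Perceptron-style learning over (inputs, target) pairs."""
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            output = 1 if activation > 0 else 0
            error = target - output
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# A toy "environment": the unit picks up the OR relation from exposure alone.
examples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
print(train_unit(examples))  # learned weights and bias, e.g. ([0.1, 0.1], 0.0)
```

Whether such weight-adjustment amounts to understanding is, of course, exactly what Searle denies.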
Jack Copeland finds Searle’s response to the Systems Reply logically invalid and therefore not a reply at all, leaving the Systems objection unanswered. The Systems Reply treats the man in the room as part of a wider system consisting of everything that helps him: the rulebook, the scratch paper, and all the Chinese symbols lying before him. A computer is then comparable to the whole system, with the man as just a component. Searle’s reply is that this objection begs the question. Copeland dismisses Searle’s begging-the-question reply by pointing out that the premise that the man in the room (whom Copeland calls Joe) does not understand Chinese does not lead to the conclusion that the system does not understand Chinese; Searle’s conclusion does not follow from his premise. Searle tries to keep his head above water with this elaboration of the Chinese Room argument: say Joe memorizes everything in the system, so that the man is now the whole system, with all its parts incorporated in his brain. Searle assumes that even if all the parts of the system were added to Joe’s brain, Joe still would not understand Chinese. The objection Copeland follows with seems mistaken to me. He truncates Searle’s elaborated Chinese Room into this proposition: “If Joe can’t do X, then no part of Joe can do X” (Copeland, 1993, p. 129). Copeland then offers a bizarre thought-experiment in which a man contains knowledge in part of his brain but cannot acknowledge that he knows it, because it has been programmed into his brain by robots. The main fault in Copeland’s counter-thought-experiment is that, when it comes to understanding, no individual can exert his mind towards the task of, say, speaking a second learned language and still fail to understand what he is doing, unless he is speaking in tongues or stricken with a multiple-personality disorder. In this instance Copeland goes too far in comparing a computer to a brain, using the computer as the paradigm for intelligence and understanding rather than the brain, which is where we should begin.
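Copeland’s charge of invalidity can be put schematically; the symbolization is mine, not Copeland’s, with U(x) abbreviating ‘x understands Chinese’. For the original room the inference runs from a part to the whole:

\[
\neg U(\mathrm{Joe}) \;\not\vdash\; \neg U(\mathrm{System}), \qquad \text{where Joe is a proper part of the System,}
\]

while the principle Copeland extracts from the internalized room runs the other way, from the whole to its parts:

\[
\neg \mathrm{Can}(\mathrm{Joe}, X) \;\rightarrow\; \forall p\,\bigl(p \text{ is a part of Joe} \rightarrow \neg \mathrm{Can}(p, X)\bigr).
\]

Neither inference is valid as a matter of logic alone, which is the force of Copeland’s charge.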
Although I do not think Copeland’s or the connectionists’ responses fatally wound Searle’s argument, I do think there are flaws in it that take away its punch. No matter what Searle says a computer is capable of, his insistence that there is in all cases something extra that the human mind possesses puts his argument up against the Other Minds reply and against accusations that he is a dualist. Searle believes it is something like an a priori truth that intentionality can never arise from silicon. But this hardnosed insistence makes Searle sound like a dualist of some kind (and he has often been called a property-dualist). Any time a philosopher maintains that humans possess some special substance or other that catalyzes their brains into intentionality, they are liable to be called dualists. Searle can only weasel out of being categorized as some type of dualist by elucidating the physical properties of his prized human cognitive states, and the only way to manage that feat is by ignoring the Other Minds reply: ignoring it by treating one’s first-person perspective as objectively telling of the way things are in all minds, and too quickly precluding computers from being capable of the same type of understanding. Either give up the façade or stop throwing around nebulous terms like ‘intentionality’ to encapsulate the uniqueness of the human mind; such terms are just intimations of an even more frightening specter, the ghost in the machine.