John Searle has become the subject of accusations of improper conduct.
These accusations have even led some people in academic philosophy to say that instructors should avoid teaching Searle's views. That is an odd contention, and it has given rise to heated exchanges in certain corners of the blogosphere.
At Leiter Reports, I encountered a comment from someone describing himself as "grad student drop out." GSDO said: "This is a side question (and not at all an attempt to answer the question BL posed): How important is John Searle's work? Are people still working on speech act theory or is that just another dead end in the history of 20th century philosophy? My impression is that his reputation is somewhat inflated from all of his speaking engagements and NYRoB reviews. The Chinese room argument is a classic, but is there much more to his work than that?"
I took it upon myself to answer that question on LR. Here, though, I'll take it as an excuse to spell out the "Chinese room" for those of my readers who may not be familiar with that thought experiment.
The image arose as part of an argument against "strong AI," that is, against the idea that a digital computer can replicate the human mind because the mind is, in essence, the software of a computer -- while the brain is the hardware.
Searle asks us to imagine this. A person who does NOT know Chinese is sitting in a room. Let us say ... me. The room contains two slots opening to the outside world, one on my right, one on my left. The room also has a book with elaborate instructions on how to fill out certain sheets of paper when certain stimuli are received. It also has pencils, erasers, filing cabinets, and whatever else of a mechanical nature may be needed for the following task.
The stimulus arrives in the form of a piece of paper coming in through the slot on my left. The instruction book (which is composed entirely of syntactic rules, not semantic rules -- that is, there is nothing in it about what the stimulus "means," or even a concession that THAT question would have meaning) tells me that the shapes on this paper require that I put various other shapes on an until-now-blank piece of paper.
This leaves me in possession of a newly filled-out piece of paper whose characters are, like the characters on the stimulus, incomprehensible to me, except that I know they are the outcome of this process. Then I slide THAT paper out through the slot on my right.
It may well be that, to someone on the outside, this is a question and an answer. Perhaps the Chinese characters on the stimulus really meant, "Who is this fellow 'Socrates' of whom I have heard?" Perhaps the Chinese characters on the paper I slide out of the room really mean (to those Chinese-speaking folks outside), "He was the founding figure of the European philosophical tradition."
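To make the purely syntactic character of the procedure vivid, here is a minimal sketch in Python. This is my own illustration, not anything from Searle: the "rule book" is just a lookup table pairing incoming shapes with outgoing shapes, and the single hypothetical entry simply encodes the Socrates exchange above. Nothing in the program refers to what any of the shapes mean.

```python
# A toy "Chinese room," assuming a made-up rule book: each rule pairs an
# incoming string of shapes with an outgoing string of shapes. The strings
# are opaque tokens to the program; their meanings appear nowhere in it.
RULE_BOOK = {
    # Hypothetical entry: "Who is Socrates?" -> "He was the founding figure
    # of the European philosophical tradition." The program never consults
    # these translations; only the shapes themselves matter.
    "苏格拉底是谁？": "他是欧洲哲学传统的奠基人。",
}

def chinese_room(stimulus: str) -> str:
    """Match the shapes that came in the left slot against the rule book
    and return the shapes to be pushed out the right slot."""
    return RULE_BOOK.get(stimulus, "")  # unrecognized shapes yield nothing

print(chinese_room("苏格拉底是谁？"))  # emits the "answer" without understanding
```

The point of the sketch is that the lookup succeeds or fails on shape-matching alone; you could scramble every string in the table and the program would run in exactly the same way.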
If I recall correctly, Searle did not use the Skinnerian terms "stimulus" and "response" in this context. I have used them because I think them appropriate: the strong-AI functionalism about which he is complaining here has a lot in common with the Skinnerian view of "verbal behavior."
Searle's point is that I don't "know Chinese," and neither does the room as a whole, even if the proceedings within it fool outsiders. What I lack (what the room as a whole lacks) is what Searle calls intentionality. My actions are carried out utterly without regard to the reality of Socrates or to any reality outside the room.
More broadly, Searle contends that something about the human brain has a certain causal power. It creates this wonderful thing, intentionality. Nothing going on in a computer has that causal power -- neither hardware nor software, neither the operating system nor the memory -- nor any of them together. Thus, the thesis of "strong AI" is false.
I just thought I'd put this out there today.