This time the Dawdlers are attempting to make sense of being in a room where they have to understand Chinese. John Searle sure can’t help! To remedy this, Harland and Ryan take a deep dive into a shallow pool with a close consideration of John Searle’s Chinese Room Argument in the philosophy of mind, from his 1980 paper “Minds, Brains, and Programs”. Can computers “understand”? What is the function of intuition in philosophy? What good are “intuition pumps”? What can the Turing Test establish? And more…
1 Comment
Adam · January 21, 2019 at 12:10 pm
To play devil’s advocate, let me make the most extreme pro-Searle argument I can. The world asks: “What is it to be conscious? I am definitely conscious, other people are definitely conscious, and other animals are also conscious to varying degrees and with varying probability. Can conscious beings be created through non-biological processes, like building a robot?”
The functionalist answers: to be conscious is just to have the same input-output behavior that a conscious being has. If your behavior is indistinguishable from that of a conscious being, we have to call you conscious.
Searle responds: I can reduce that functionalist idea to absurdity. Let’s focus on a sub-property of consciousness: the capacity to understand. I don’t necessarily want to get into what understanding is; that’s not really my goal, and getting into the weeds will get us into a quagmire. So, to learn the lesson of Vietnam, I’m going to choose my battles and not fight that one, because I don’t really need to. All I need for my reductio ad absurdum is that we agree on some basic things, like that I understand English but not Chinese.
Now if I were in my room with the Chinese texts, manipulating the symbols according to my rulebook, I would have the same input-output behavior as a being that understands Chinese. The functionalist theory would therefore imply that I understand Chinese. But I don’t understand Chinese: a contradiction. QED.
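Schematically, in an ad hoc notation (not Searle’s own), where B(x) means “x has the input-output behavior of a Chinese speaker” and U(x) means “x understands Chinese”:

\[
\begin{aligned}
\text{(F)}\quad & \forall x\,\big(B(x) \rightarrow U(x)\big) && \text{functionalism: the right behavior suffices}\\
\text{(1)}\quad & B(\text{me-in-the-room}) && \text{granted by the setup}\\
\text{(2)}\quad & \neg\,U(\text{me-in-the-room}) && \text{granted: I don't understand Chinese}\\
& \therefore\ \neg\text{(F)}
\end{aligned}
\]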
I’m not offering a theory of understanding or a theory of mind; I’m merely arguing against functionalism. Mere black-box input-output behavior according to some algorithm cannot be identical with understanding, or with consciousness more generally. There has to be at least some (possibly additional, or possibly subtractive) property of what’s inside the black box that helps to constitute understanding.
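To make the black-box point vivid, here is a toy sketch in Python (mine, not Searle’s; the rulebook entries are made-up placeholders): a bare lookup table has the right input-output behavior for these two exchanges, yet nothing in it looks anything like understanding.

```python
# A toy model of the room as pure symbol shuffling: a lookup table from
# input squiggles to output squiggles. Nothing in this procedure ever
# attaches a meaning to any symbol.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "会，说得很流利。",  # "Do you speak Chinese?" -> "Yes, fluently."
}

def room_occupant(symbols_in: str) -> str:
    """Find the incoming squiggles in the rulebook and hand back the
    prescribed squiggles. This is string matching, not comprehension."""
    return RULEBOOK.get(symbols_in, "对不起，我没听懂。")  # the fallback is also just symbols

print(room_occupant("你好吗？"))  # a fluent-looking reply from an occupant who can't read it
```

Scaling the table up, or swapping in any other algorithm with the same input-output profile, changes nothing about what the procedure does to the symbols, and that is exactly the intuition the argument trades on.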
~~~
If I have Searle correct here, it seems right to me that he doesn’t have to tell you what understanding is, so long as we agree that he understands English and not Chinese, because the details of what understanding is don’t play a role in what he’s trying to do. It’s kind of like abstraction. You could make an analogy to proofs in mathematics: I don’t have to tell you which triangle I’m talking about when I prove that its interior angles sum to pi, nor do I have to specify all the angles in it. The internal details are not “active”, and we can just replace them with variables: “For any angles alpha, beta, gamma, …” and so on, until the conclusion “alpha + beta + gamma = pi”. Similarly, Searle can say “Whatever it is for x to understand y, as long as we grant that I understand English and not Chinese, …” and so on to the conclusion that “functionalism implies a false statement”.
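Spelled out in the same spirit as the schema above, but now writing understanding as a two-place relation U(x, y), “x understands y” (again my notation, not Searle’s): the triangle proof quantifies over the angles, and Searle’s argument quantifies over whatever the understanding relation turns out to be.

\[
\forall\, \alpha, \beta, \gamma \ \text{(interior angles of any triangle)}:\quad \alpha + \beta + \gamma = \pi
\]
\[
\forall\, U \ \text{(whatever understanding is)}:\quad \big[\, U(\text{me}, \text{English}) \wedge \neg\, U(\text{me}, \text{Chinese}) \,\big] \;\Rightarrow\; \neg\,\text{functionalism}
\]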
All that said, I still don’t think his argument is fully convincing. My response is: sure, the person doesn’t understand Chinese. But the person-plus-rulebook-plus-room system does understand Chinese.