Discussion about this post

Mike Smith

A very good intro to the Chinese Room argument and the common criticisms of it. I think this thought experiment is the epitome of what's wrong with treating philosophical thought experiments as authoritative in the same manner as actual experiments. Searle's argument is rhetoric in service of a certain set of intuitions.

On the Wrong Model reply, it's worth pointing out that any artificial neural network you've used to date was implemented with a symbolic program. Neuromorphic computing might change that, but right now, ANNs are built on a symbolic foundation. IIT advocates use this fact to argue that they're still the wrong kind of causal structure.
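To make the point concrete, here is a minimal sketch (with made-up weights and inputs) of how an artificial neuron reduces to ordinary symbolic instructions on a conventional computer: every step is a multiply, an add, or a comparison.

```python
def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum plus bias, then a ReLU threshold.
    Every step here is an ordinary symbolic instruction that a conventional
    CPU executes: multiply, add, compare."""
    total = bias
    for x, w in zip(inputs, weights):
        total += x * w          # symbolic multiply-accumulate
    return max(0.0, total)      # ReLU, i.e. a symbolic comparison

# Illustrative values only: 0.1 + (1.0 * 0.2) + (0.5 * -0.4) is roughly 0.1
out = neuron([1.0, 0.5], [0.2, -0.4], 0.1)
```

Frameworks like PyTorch or TensorFlow just compose enormous numbers of operations of this kind, which is why, at the implementation level, today's ANNs remain symbolic programs.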

But as to Searle's reply: none of his neurons understands English. In fact, without Broca's area and Wernicke's area, neither does his brain. Yet with all the regions working together, we say he does understand it, which is just the Systems Reply applied to his own brain.

But the biggest issue I've always had with this thought experiment is the amount of time it would take Searle to reply to any question. Responding to even the simplest question would likely involve following billions, if not hundreds of billions, of instructions. If we assume Searle can perform one instruction per second, and doesn't take sleep, meal, or bathroom breaks, he might finish his first billion instructions in about 30 years. We can make him more productive by wheeling in a computer to help him, but then the original intuition of the thought experiment is weakened, or at least it should be.

To me, the best thing that can be said about the Chinese Room is it spurs discussion. But what it really does is clarify people's intuitions.

Fred Brown

Outstanding article. I understood the Chinese Room argument conceptually before, but I'd never really broken it down like this, and this piece brings it into question in my mind. Thank you for your writing.
