The Chinese Room: Are Computers Conscious?

Photo courtesy of Pexels.com.

You wake up in a small room. There’s a door with a slot at the bottom, a large binder, and a pen. As your eyes adjust to the bright light, a piece of paper slides under the door. You reach over and see that it’s covered in Chinese characters. Confused, you open the binder: it’s an enormous list of instructions written in English. The first instruction tells you to write out a certain string of Chinese characters if you see another particular string on the paper. You notice that the Chinese on the paper matches the one in the instruction, and so you carefully draw out the corresponding characters. After this, you pass the paper back under the door, only for it to return with more Chinese written on it.

This goes on for hours, with you referring back to the binder to know what to write, until the door opens and a Chinese man pokes his head in. He looks surprised, saying, “Wow, you speak Chinese very fluently!” Of course, he says this in Chinese, so it’s lost on you. All this time, he thought he was speaking with a native Chinese speaker, but you were only manipulating symbols according to a set of rules.

Set aside the question of why you might find yourself in this kind of situation; the scenario is a thought experiment, and it can actually teach us a lot about machine consciousness. For John Searle, the philosopher who created the argument, it proves that it’s impossible for computers to be conscious.

Here’s how: in the Chinese Room, you are the computer’s processor, and the binder of instructions is the “program.” The Chinese writing passed to you from beneath the door is the “input,” and your responses are the “output.” In this way, Searle’s Chinese Room reproduces the basic functions of a computer.
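To make Searle’s mapping concrete, here’s a minimal sketch, in Python, of what the room is doing. The rules and phrases are invented for illustration (a real program would have vastly more of them), but the principle is the same: match the input symbols, copy out the output symbols, and understand nothing along the way.

```python
# A toy "Chinese Room." The rule book below plays the role of the binder:
# it maps input symbols to output symbols. The phrases are made up for
# illustration; no understanding of Chinese happens anywhere in this code.

RULE_BOOK = {
    "你好": "你好！",              # "Hello" -> "Hello!"
    "你会说中文吗？": "会一点。",    # "Do you speak Chinese?" -> "A little."
}

def room(slip: str) -> str:
    """Look the incoming slip up in the binder and copy out the reply."""
    # The fallback reply ("Please say that again.") is just another rule.
    return RULE_BOOK.get(slip, "请再说一遍。")

print(room("你好"))  # prints 你好！ (symbol manipulation, not comprehension)
```

The person in the room is the `room` function: it matches and copies symbols flawlessly, yet nothing in it knows what any of the symbols mean.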

Now, the question at the centre of all this is whether the person inside the Chinese Room can actually speak Chinese. The answer bears on another, more important question: is a computer that appears to be conscious actually conscious, or just faking it? We’ve all seen computers “acting” very lifelike, whether in science fiction or in real life. Apple’s Siri can hold a reasonably complex conversation about anything from the weather to how to bake an apple pie.

But does Siri really know how to bake? It might appear to, but to Searle, it’s faking it just as much as the person in the Chinese Room is faking knowing Chinese. Whenever Siri is asked a question, it refers to its own binder of instructions (the internet) and then regurgitates whatever it finds there.

It might sound very convincing, and it might respond so quickly you could swear another human was speaking for it, but at its core it is just a machine running a complex program fast enough to fool you. Just like the person inside the Chinese Room, programs like Siri aren’t aware of the symbols and concepts they’re manipulating. They’re just following a set of rules.

So next time you worry that robots are going to take over, take heart: they might be terrifying, but they won’t be conscious of it. That’s some consolation, if there is any.

If you’re hankering for more information on Searle’s “Chinese Room,” check out a paper written on it at the Internet Encyclopedia of Philosophy.


And if you’re looking to learn about a similar thought-experiment, I wrote a two-part blog series on the “Ship of Theseus Paradox,” which you can read here.


Matthew Montopoli

Matt is in his second year of Algonquin’s Professional Writing program. He enjoys writing, editing, reading history and philosophy, and not talking about himself.