Can Chinese Rooms Think?
As a machine learning or CS researcher, it’s easy to get drawn into philosophical debates about whether machines will ever be able to think like humans. The argument goes back so far that the people who founded the field had to grapple with it. It’s also fun to think about, especially with sci-fi always portraying world-ending AI-versus-human showdowns, with humans prevailing because of love or friendship or humanity.
But people in these debates tend to wind up talking past each other.
“Machines can never be surprised!”
“We can’t even simulate ONE neuron right!”
Eventually, someone more “well-versed” in philosophy will whip out Searle’s “Chinese Room” argument: Suppose a robot, in a room full of cards printed with Chinese characters, takes Chinese characters as input and follows its programming to produce Chinese characters as output. It does this convincingly enough to pass the Turing test. Now imagine a non-Chinese speaker given a set of instructions in English, who follows that instruction book to perform the same function the robot does. Since that person doesn’t understand Chinese, yet can reliably perform the task well enough to pass a Turing test, it follows, Searle argues, that the robot doesn’t understand Chinese either.
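To make the setup concrete, here’s a toy sketch of the room as a lookup program. The rule table is made up by me for illustration; an instruction book that actually passed a Turing test would be astronomically larger, but the point is the same: the procedure never touches meaning.

```python
# A toy sketch of Searle's room (my illustration, not his formalisation).
# RULE_BOOK stands in for the English instruction book: pure symbol-to-symbol
# rules, with no reference to what any of the symbols mean.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(symbols: str) -> str:
    """Return whatever reply the rule book dictates for the input symbols."""
    # Like the person in the room, this only pattern-matches; unknown
    # input gets a stock deflection ("Please say that again.").
    return RULE_BOOK.get(symbols, "请再说一遍。")

print(chinese_room("你好吗？"))  # prints: 我很好，谢谢。
```

Whether “understanding” can live anywhere in that pile of rules is exactly what the two camps disagree about.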
It’s usually at this point that part of the crowd nods along to the argument while the other part points out a thousand different reasons why the analogy doesn’t map.
“The understanding is the emergent function that the robot and the instruction book form together!”
I think there’s a more fundamental difference driving this polarisation: some believe human intelligence to be special, unique to humans. Then there are those, like me, who think that we’re nothing more than glorified machines.
I graduated with a CS degree, so my close friends are mostly software engineers or researchers. As a result, most of them consider themselves “People of Science”: non-religious, non-spiritual. And yet I’ve noticed that in these debates, some will still resort to arguments like “machines can never be surprised.” I think this betrays a deeper belief they hold that contradicts their “science-y” persona.
I’ve developed a line of questioning over the years that, I think, disentangles the beliefs of someone I’m discussing the AI apocalypse with.
- Do you believe that your/our mind is a result of the physical world?
This means you don’t think that our mind/consciousness is produced by some other entity belonging to some other world, and that it is purely the result of the electrons, neurons and synapses in your head (the position is known as physicalism). How that works exactly is not being discussed here, just the belief that your conscious mind is the result of the grey goo inside your skull.
- Do you believe phenomena of the physical world can be simulated?
This is simply asking whether physical phenomena could be reliably replicated in a computer, given sufficient computational power and a sufficient understanding of the universe. How we’d deal with the three-body problem is not being discussed here, just that with enough knowledge and big enough computers, we could do it.
- Do you think machines can think like humans?
Now, to me, if you agree with the first two propositions, you should agree with the third as well: if our mind is of the physical world, and the physical world can be simulated, then our minds can be simulated. Potentially.
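Spelled out, the inference is just a hypothetical syllogism (the letters below are my own shorthand, not notation from Searle or anyone else):

$$
\underbrace{M \rightarrow P}_{\text{Prop 1}}, \qquad
\underbrace{P \rightarrow S}_{\text{Prop 2}}
\quad\vdash\quad
\underbrace{M \rightarrow S}_{\text{Prop 3}}
$$

where $M$ stands for “is a mind”, $P$ for “is a physical process”, and $S$ for “can be simulated”. Accepting the first two while rejecting the third means rejecting the syllogism itself.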
Of course, there can be a whole host of reasons why someone might not accept Prop 1 or Prop 2. I’ve heard of neuroscientists who, after years of studying the human brain, increasingly believe that our consciousness lies in a different plane of existence. I’ve also debated someone who believes it’s impossible for machines to simulate physical reality, and not just because of our currently limited knowledge of the physical world. If that’s the case, you don’t wind up debating past each other. Instead, dig down and ask why they believe these things. Be open to these viewpoints.
If they accept Prop 1 and Prop 2 but reject Prop 3, I take some time to revel in their segfault as they realise there’s a contradiction in their own belief system.
I’m more about being consistent in your own system of beliefs than pushing my we’re-all-meat-robots ideology. After all, I don’t know if I’m right, and I mean, how cool would it be if it turned out our minds were a separate entity, and that we really are special?
@misc{tan2018-08-27,
  title = {Can Chinese Rooms Think?},
  author = {Tan, Shawn},
  howpublished = {\url{https://blog.wtf.sg/posts/2018-08-27-can-chinese-rooms-think/}},
  year = {2018}
}