I've seen this argument in a few places, and it seems to me that those people drew the wrong conclusions from it.
In the Chinese Room is someone who doesn't understand Chinese. But they have instructions, in English, that tell them how to manipulate symbols; basically they have an algorithm. They take an input (some sentence in Chinese), run it through the algorithm, and output some other symbols (a reply in Chinese).
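The setup can be sketched as pure symbol manipulation. Here is a minimal, hypothetical sketch in Python, where the rulebook is just a lookup table (Searle's rulebook would be vastly more elaborate, but the point is the same: the procedure never consults meaning):

```python
# A toy "rulebook": maps input symbol strings to reply symbol strings.
# The person in the room applies these rules without understanding either side.
RULEBOOK = {
    "你好吗?": "我很好,谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗?": "会。",         # "Do you speak Chinese?" -> "Yes."
}

def chinese_room(symbols: str) -> str:
    """Mechanically look up the input symbols and return the reply symbols."""
    # No meaning is consulted anywhere; it is lookup all the way down.
    return RULEBOOK.get(symbols, "对不起,我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗?"))  # -> 我很好,谢谢。
```

From the outside, the function "answers in Chinese"; inside, there is only matching and copying of symbols.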
Now, to a person on the outside, it may look like the process is able to speak Chinese, but has there been any actual understanding of Chinese? Is there any actual consciousness involved in the process that is cognizant of what is happening? (Beyond just someone running an algorithm on symbols they don't understand.)
This is usually presented in terms of Artificial Intelligence, and is meant to show that even if we have a machine that can converse in Chinese, the machine doesn't actually understand what is going on. And thus it doesn't have a mind, and we need something beyond symbol manipulation to get actual understanding...
However, what this says to me is: if we can't tell from the outside that the machine doesn't understand Chinese, how do we know that someone who reportedly does speak Chinese actually does so? Can we demonstrate that people understand, and aren't just machines processing symbols according to an algorithm?
This opens up the idea of philosophical zombies, and Searle, who devised the Chinese Room, replies with "well, of course we know that people are different; they have understanding"... which, to me, is special pleading.
Now, admittedly, this is a rather operationalist view of the world, which brings its own problems, but if you can't tell the difference between a sophisticated replication and the real thing, is there a difference?
Saturday, 20 September 2014
Understanding the Chinese Room