Finding the mind

I was reading a blog post on the topic of Data Intensive Scalable Computation, and the author referenced John Searle's Chinese room thought experiment, so I spent a bit of time this morning considering it. Searle imagines an advanced computer that takes in Chinese characters as input and manipulates the symbols, producing character output that a Chinese speaker would consider valid -- perhaps even convincing the Chinese speaker that they are in fact talking to a human. Does that imply the computer has intelligence and understands Chinese? I'm sure it's a great topic for discussion in AI circles.
Searle takes the experiment further and imagines a human in a room, receiving Chinese characters, consulting a book or some protocol, and producing valid output (Chinese symbols) -- without any knowledge of Chinese, of course. This poses a problem for those who might think the computer has somehow passed the Turing test. Does the computer really understand Chinese, or is it only manipulating symbols? It raises the question of what speaking Chinese really means, and whether consciousness and the mind are the differentiator (if there is one at all) between symbol manipulation and understanding.