Language 1

Contents

Introduction: teaching computers to talk

The Chinese room argument

Response

Understanding

The Chinese Room Argument

Reading:

Summary: John Searle presents what is called the Chinese Room argument, designed to show that there is more to understanding than manipulating uninterpreted symbols.

Introduction: teaching computers to talk proper

The fact that human beings use language apparently bulks very large in our lives. Often it is said that it is language use that distinguishes human beings from other animals. Often it is said that the one thing you will never be able to do is to get a machine to use it as well.

This last claim I find wonderfully clarifying. It tells you how to progress the language question. You can agonize for ages over the nature of language - what it is for a word or sentence to have meaning, etc. etc. - but the machine claim brings it all into focus and presents us with an addressable challenge: what do you have to do to program a machine so that it uses language as well as we do?

The medievals thought language and thought to be inextricably linked. It is difficult to work out exactly what they thought, but the outline seems to be that the same faculty that gives us the power to think thoughts which have any generality about them also and at the same time gives us the power to use language. (See Kenny, Aquinas on Mind, London: Routledge, 1993.)

Let's have a go then at programming a computer to use language.

No problem these days with getting it to speak in the sense of making audible sounds which can be mistaken for human speech - one of the things you can't take away from RailTrack. The easiest bit is to get it to read a script. Much more difficult, but possible, is to get it to make up a script from a set of parts. For example, to start with 'We apologize for the late running of …..' and then to choose from a list the train you want to apologize for on that occasion. But you can immediately see how you might approach this problem. You keep a data base of scheduled arrival times and you keep a clock going. Every five minutes you compare the current time with all the scheduled times, and if you find a match you choose that train to apologize for.

What Google gives you for 'Elizir'. Courtesy Metropolitan Museum of Art

For this to work you have to be content just to make one announcement per late arrival, and of course you need to be sure that none of the trains will be on time. So no real problem here.
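In outline the program might look like this - a minimal sketch in Python; the schedule, the destinations and the five-minute tick are all invented for illustration:

    # Toy version of the apology-announcer described above.
    # Schedule entries are invented for illustration.
    SCHEDULE = {
        # scheduled arrival time -> destination
        "09:15": "London Paddington",
        "09:45": "Bristol Temple Meads",
        "10:05": "Cardiff Central",
    }

    TEMPLATE = "We apologize for the late running of the {time} service to {dest}."

    def late_train_announcements(now):
        """Compare the current time with every scheduled arrival.
        Any train whose scheduled time has been reached is assumed
        (safely, on present form) to be late, and gets an apology.
        Run once per five-minute tick of the clock."""
        return [TEMPLATE.format(time=t, dest=d)
                for t, d in SCHEDULE.items() if t == now]

    # Simulate a single tick of the clock:
    for announcement in late_train_announcements("09:45"):
        print(announcement)

One announcement per late arrival, as required; the assumption that no train will be on time is simply built in.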

Eliza is a bit more sophisticated, but not much. (You can talk to Eliza here and to another silicon chatterer, Brian, here.)
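Under the hood Eliza is pattern matching and not much else. Here is a minimal Eliza-style sketch in Python - the patterns and canned replies are invented for illustration, not Weizenbaum's originals:

    import re

    # Pattern -> canned reply. No understanding anywhere: just
    # matching shapes of input to shapes of output.
    RULES = [
        (r"i am (.*)", "Why do you say you are {0}?"),
        (r"i feel (.*)", "Tell me more about feeling {0}."),
        (r".*\bmother\b.*", "Tell me about your family."),
    ]

    def respond(utterance):
        for pattern, reply in RULES:
            m = re.fullmatch(pattern, utterance.strip().lower())
            if m:
                return reply.format(*m.groups())
        return "Please go on."   # stock reply when nothing matches

    print(respond("I am unhappy about the trains"))
    # -> Why do you say you are unhappy about the trains?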

So we can write programs which have a big data base of language items and a set of rules which govern how the machine is to put them together. Some of the rules I am thinking of just govern which items can go with which, but some would link what items were output with what the machine is getting as input.

An example of the first kind of rule would be: before you stop, put in an action-type item (a verb). These rules would be there to veto output like 'sitting mat the is the on cat' but permit 'the cat is sitting on the mat'.

An example of the second type of rule would be:

Only output 'the cat is sitting on the mat' if the data you have includes the fact that the cat is indeed sitting on the mat.
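Both kinds of rule are easy to sketch. Here is a minimal toy in Python - the word lists, the crude word-order test and the little fact base are all invented for illustration; real grammar rules would need to be far richer:

    NOUNS = {"cat", "mat"}
    VERBS = {"is", "sitting"}                # 'action-type items'
    FACTS = {("cat", "sitting on", "mat")}   # what the machine has as data

    def syntactically_ok(words):
        """First kind of rule: which items can go with which, and in
        what order. Crudely: some noun must come before the first
        verb. This vetoes 'sitting mat the is the on cat' while
        permitting 'the cat is sitting on the mat'."""
        first_verb = next((i for i, w in enumerate(words) if w in VERBS), None)
        if first_verb is None:
            return False
        return any(w in NOUNS for w in words[:first_verb])

    def grounded(subject, relation, obj):
        """Second kind of rule: link output to input - only output
        the sentence if the corresponding fact is in the data base."""
        return (subject, relation, obj) in FACTS

    if grounded("cat", "sitting on", "mat"):
        sentence = "the cat is sitting on the mat".split()
        if syntactically_ok(sentence):
            print(" ".join(sentence))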

Is this enough?

If we had an enormous data base, with items sorted into a very carefully designed set of categories, and a large number of carefully drawn up rules, would we be able to get our machine to do language?

John Searle is responsible for a famous argument that purports to show that we would not.

No matter how big our list of components, no matter how complicated or extended a set of rules we might devise for governing how they linked together, we would not have a machine that could speak English, say, or French.



You will have to think away the windows and kitchen paraphernalia, but this I believe is actually the interior of a Chinese house, courtesy the CWLU Herstory Website

The Chinese room argument

You, like Mary before you, are sat in a windowless room.

This time communication with the outside world is achieved not via a black and white monitor but through two narrow letterbox slots.

You sit at a desk on which there is a large manual.

I will tell you what is in it in a moment.

By the side of the desk is a large box containing plastic shapes of a wide variety of different kinds.

 
A Chinese Manual, courtesy Lucky Dog Books.

From time to time a series of plastic shapes is pushed through the in-slot.

Your job is to take each sequence as it comes in and try and find a match for it in the manual. The manual is so arranged that when you find a picture of the sequence of shapes that has been pushed through the in-slot, alongside it is printed another picture.

This picture is made up of the shapes that are to be found in the box by the desk.

What you have to do, once you have found a picture of the sequence of shapes that came through the in-slot, is to assemble from the shapes in your box the sequence the manual prints alongside it.

You then push this sequence of shapes you have just assembled out through the out-slot.

You do this for many weeks.

It turns out that there is a Chinese speaker outside all this while, and what she is pushing into the letterbox from time to time are questions in Chinese. And what she gets back after a pause - while you are looking the shapes up in the manual and assembling your output sequence from the box - are what she regards as answers, perfectly sensible answers, in perfectly good Chinese, to the questions she has put in.

So you have been generating perfectly good Chinese sentences without actually understanding a word! To you it's all plastic shapes and following the manual.

Therefore, says Searle, there is more to understanding than manipulating uninterpreted symbols.
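It is worth seeing just how little the room's procedure amounts to. A minimal sketch in Python, with strings standing in for the plastic shapes and the manual entries invented for illustration - nothing in it attaches any meaning to the symbols:

    # Manual: incoming shape sequence -> shape sequence to assemble.
    # To the operator (and to the program) these are opaque tokens.
    MANUAL = {
        "你好吗": "我很好",          # 'How are you?' -> 'I am well'
        "猫在哪里": "猫在垫子上",    # 'Where is the cat?' -> 'On the mat'
    }

    def room(shapes_in):
        """Find the incoming sequence in the manual and assemble the
        sequence printed alongside it. Pure lookup; no interpretation."""
        return MANUAL.get(shapes_in, "")   # no match: push nothing out

    print(room("你好吗"))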

Summary: Computers, which are limited to manipulating uninterpreted symbols, will never be able to understand what a language item (e.g. a sentence) means.

The analogy between the windowless room and a computer spelled out

Let me spell out the implications a little more explicitly.

Computers work by moving around sequences of 0s and 1s. When you type in a word, it gets coded into a sequence of 0s and 1s. A picture that appears on the screen is stored as a sequence of 0s and 1s. Sequences of 0s and 1s are what are stored and manipulated according to the machine's program.

(Here is a sequence of 0s and 1s:

000101011101010110)
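You can watch a typed word become such a sequence - a one-line sketch in Python (the word 'cat' and the UTF-8 encoding are just for illustration):

    # Each character of the typed word becomes eight bits.
    word = "cat"
    bits = "".join(format(byte, "08b") for byte in word.encode("utf-8"))
    print(bits)   # -> 011000110110000101110100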

So a computer is just like the windowless room. A sequence of 'symbols' is pushed in, manipulations are carried out involving those 'symbols' and the other ones that are already in the room - in the box - and a string of 'symbols' is output through the output slot.

Searle's point is: this can go on, but you can't infer from its going on that the machine 'understands'. The Chinese room argument shows that you can have the appearance of understanding without the reality. There is no understanding going on in the Chinese room, although questions are put in and what appear to be intelligible and sensible answers are put out.

Searle:

"Understanding a language, or indeed having mental states at all, involves more than just having a bunch of formal symbols. It involves having an interpretation, or a meaning attached to those symbols." Searle, Minds, Brains and Science, Cambridge, Mass, 1984, Harvard University Press; White, p. 199.

Pause

Before you go on, you might like to try and formulate an objection to the Chinese Room argument. (It's always better to force yourself to write your response down.)

I give a couple of the most influential points below. Yours may be perfectly valid even if it isn't one of them! (And of course it may be if it is ...)


Response

These two points have been made:

1. The computer does not correspond to you sitting at the desk arranging the symbols according to the rules in the manual: the computer is the whole windowless room. Who is to say the windowless room doesn't 'understand' the output? (This is often called the 'systems reply'.)

2. If we are going to explain understanding it will be in terms that don't refer to understanding. You can only explain a thing in terms of other things, or your explanation is circular. The cognitive scientist is claiming to explain understanding in terms of the manipulation of representations. It is no objection that his/her explanans does not refer to understanding.

Understanding

Just a warning: understanding has been thought of very differently in the past and we should make no easy assumptions about what it is...

What does Searle think it is?

In Thomist thought, for example, material things were thought of as possessing forms, and although it is difficult to give a clear and coherent account of what role the form played in their concept of understanding, there is no doubt that the role it played was central. Human perception, for example, was certainly thought of by Aquinas as involving the perceiver's sharing the form of the thing perceived ('The sense-faculty receives the forms of sense-perceptible things ...', Aquinas, quoted in Kenny, Aquinas on Mind, p. 92), and the acquisition of concepts was thought of as necessarily taking place in the context of sense perception ('It is impossible for our intellect to perform any actual exercise of understanding except by attending to phantasms', Aquinas, quoted in Kenny, Aquinas on Mind, p. 94).

 


Review Questions

Three questions to check if you are on the right lines:

Would it make any difference to the argument if:-

If you get any of these wrong, and are puzzled why, you should read the argument again - here, or in the reading indicated at the top, or in a textbook; or ask me!

More of a tester:

 

 

END


Revised 30:03:03
