I haven't forgotten the quick way in which I introduced Dennett's argument for the objectivity of the pattern in human behaviour that we are allegedly picking up on when we apply folk psychology. Rather than expand on that introduction immediately I want to explain first Dennett's notion of the 'intentional stance'. That will enable me to go over the argument I finished with last time much more effectively.
I then want to give an argument for the thesis that although the pattern picked up on by folk psychology is objective, that doesn't mean there are inside the brain networks or patterns of activity that correspond to a person's beliefs and wants. Remember Dennett is arguing that there is a pattern in human behaviour which folk psychology picks up on, but there needn't be a pattern in the activity of the brain that is at all like it.
So the agenda is:
1. The intentional stance
2. Patterns in human behaviour which support folk psychology objectively exist - the Martian physicists;
3. - but that does not mean that similar patterns exist in the brain (Mary, Ruth and Sally).
IF WE WISH TO PREDICT AND/OR EXPLAIN THE BEHAVIOUR OF A PHYSICAL SYSTEM, SAYS DENNETT, WE MAY ADOPT ANY OF THREE DIFFERENT STANCES.
If science is on the right lines, it should in principle be possible to predict the behaviour of all physical systems by knowing the position of all the elementary particles they are made up of and the forces impinging on them.
More mundanely, I can predict the behaviour of this table by treating it as a physical object subject to the laws of physics. On that basis I think it will probably stay put for the foreseeable future. We can and do predict the behaviour of lots of things by thinking of them as physical objects subject to familiar physical laws.
'From this stance our predictions are based on the actual state of the particular system, and are worked out by applying whatever knowledge we have of the laws of nature.' (Dennett, in Lycan, p.168)
For a system as complex as a computer running a chess programme, the task of using the physical state of the machine to predict its future states would be prodigious - but possible in principle.
If you take a clock - a clockwork clock - there is a new way in which you might predict its behaviour. You could apply physics to it, and that might enable you to work out that in 15 minutes' time it will show 5 o'clock.
But you could make this prediction on the basis of knowing that it had been designed to tell the time.
You would then be adopting the 'design' stance.
But sometimes there is a third possibility.
Sometimes you can get good predictions by assuming that the thing is rational. You think of it as wanting this or that, of believing such and such is the way to get what is wanted, and of acting accordingly.
Prompt: Can you think of examples of where you can be quite successful in predicting what a system will do next, even though you have no doubt that it is a purely physical system, by thinking of it as rational in this sense?
Suggestion
You might predict the behaviour of a plant by attributing to it the aim of getting maximum illumination, and the belief that keeping its leaves flat towards the source of light is the way to do this.
Dennett is now in a position to define 'intentional system'.
WHENEVER ONE CAN SUCCESSFULLY ADOPT THE INTENTIONAL STANCE TOWARDS AN OBJECT, THE OBJECT MAY BE CALLED (SAYS DENNETT) AN INTENTIONAL SYSTEM.
He then explains what he means by being able to adopt the intentional stance 'successfully'. This judgement is to be understood as a pragmatic one. The test is whether the system's behaviour can be successfully predicted, 'and most efficiently predicted' ... 'by adopting the intentional stance towards them.' Whether the system in any sense 'really' has intentions, beliefs, thoughts, plans, desires, purposes or not is not the issue. The issue is whether by employing these concepts you can predict what it is going to do. If you can, it is in Dennett's terms an intentional system. If you can't, it isn't.
By this test, some computers, good chess-playing computers included, are intentional systems.
STICH GLOSSES THE DEFINITION THUS: 'ANY OBJECT WILL COUNT AS AN INTENTIONAL SYSTEM IF WE CAN USEFULLY PREDICT ITS BEHAVIOUR BY ASSUMING THAT IT WILL BEHAVE RATIONALLY.' STICH, IN LYCAN, 2nd Edition, p.88.
Dennett thinks that great swathes of human behaviour can successfully be predicted by taking up the intentional stance towards human beings
- i.e. by treating them as rational
- i.e. by treating them as having aims and beliefs and governing their behaviour by pursuing those aims in a way that is informed by the relevant beliefs.
He says this must be because there is a pattern to human behaviour which we pick up on and exploit with our folk psychology explanations and predictions.
Dennett brings out his claim that there must be folk-psychology-supporting patterns by asking us to think of superphysicist Martians. I pointed this out earlier, but now that we have Dennett's notions of 'intentional stance' and 'intentional system' clearer in our minds, it should be easier to get hold of.
The argument here is meant to speak to physicalists, so if you are not a physicalist you may not let it get off the ground. However as an exercise you could evaluate it on physicalist assumptions.
So we are to suppose that the human being is a physical system.
PROMPT: If all you knew about the human being was that he/she was a physical system, which of Dennett's three stances would be the safest to take if you wanted to predict his/her behaviour?
The Martians are supposed to take the physical stance - and to be such sophisticated physicists that they produce excellent predictions of our behaviour on this basis. They do so because they understand the detail of us as physical systems - just as one of our physicists can predict the behaviour of a thermostat by understanding its physics:
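The thermostat case can be made concrete with a toy sketch of my own (not Dennett's, and with an invented, crude thermal model): the same prediction can be reached either by simulating the device's physical state in detail, or much more economically from what it was designed to do.

```python
# Toy illustration (my own example, not Dennett's): predicting a thermostat
# from two different stances. The thermal model is invented for illustration.

def physical_stance_prediction(temp, heater_on, setpoint, minutes):
    """Predict by simulating the 'physics': step the system's state forward."""
    for _ in range(minutes):
        heater_on = temp < setpoint          # bimetallic strip closes the circuit
        temp += 0.5 if heater_on else -0.3   # crude model of room temperature
    return heater_on

def design_stance_prediction(temp, setpoint):
    """Predict from what the device was designed to do, ignoring its innards."""
    return temp < setpoint  # 'it is built to switch the heater on when cold'

# Both stances deliver the same prediction; the design stance needs far
# less detail about the system's actual physical state.
print(physical_stance_prediction(15.0, False, 20.0, 1))  # True
print(design_stance_prediction(15.0, 20.0))              # True
```

The Martians, on Dennett's story, are in the position of someone who only ever uses the first function: their predictions are excellent, but bought at enormous computational cost.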
Daniel Dennett, "True Believers: the intentional strategy", reprinted in Lycan, Mind and Cognition, 2nd edition, pp.81-2.
Now we can return to the thesis of Dennett's we started with - his theory of belief. His idea was that we needn't think of beliefs as networks, or networks of activation, in the brain. We can use the notions of a belief and an aim - we can use these notions to predict human behaviour - without thinking of beliefs and wants as somehow stored up 'in the brain'. Beliefs and desires may be 'theoretical constructs' - invented by the theory but not necessarily existing 'as such' in the brain.
Here is an argument which supports this thesis. It appeals to teleological functionalists.
It points out that it should be possible to programme an intentional system in more than one way.
Different programmes can result in the same gross behaviour.
Imagine three different instantiations of one and the same intentional system: Mary, Ruth and Sally.
Mary is a person. Ruth and Sally are two robots, each designed as a copy of Mary.
Ruth is designed with the idea that if you are to get a robot to simulate Mary you had better reproduce in the robot what you know of how Mary works. You won't be able to use organic materials, but you will be able to attempt a thorough simulation with silicon chips of the functioning of Mary's organism, and in particular her brain.
If you get it right, Ruth will behave exactly like Mary because you have copied the design of Mary.
Sally, on the other hand, although she too simulates Mary, is designed completely differently. There is no attempt to copy the way Mary's behaviour is produced in Mary.
Sally's input-output pattern is the same as Mary's (and Ruth's), but the internal workings are different.
This means, says Dennett, that Sally is likely to be 'psychologically different' from the other two.
This may come out when something goes wrong. Sally's errors will be different from Mary's. [Think of a cassette version of Mozart's Coronation Concerto compared with a CD. They sound much the same when they are working properly, but they go wrong differently, in ways you can tell.]
If you think of Mary as following the behaviour she does because of the programme built into her, you can think of Ruth as equipped with the same programme. That is why she simulates Mary. They are not physically identical, but they are following the same programme. Mary may be a Macintosh to Ruth's IBM PC: but both are running Word 6.
On this analogy - it is of course not just an analogy - Sally is not running the same programme. She is running a different programme, but with the same overall observable effect.
If we think of each system transforming inputs into outputs, stimuli into behaviour, all three perform the same transformations, all three produce the same outputs from the same inputs.
Ruth achieves this by running the same programme as Mary. Sally runs a different programme, but one that achieves the same transformations.
Because the behaviour of all three is identical, if the behaviour of one of them is predictable by folk psychology, so is the behaviour of all three.
But ex hypothesi there will be different things going on inside Sally and Mary. They behave the same, but this same input-output transformation is achieved by different programming. So even if the beliefs and aims invoked by the folk psychological explanation are 'there' in one case, they needn't be there in the other.
QED
Dennett's overall aim:
to achieve a reconciliation between 'our vision of ourselves as responsible, free, rational agents, and our vision of ourselves as complex parts of the physical world of science.' Dennett, quoted by Stich, in Lycan, 2nd edition, p.87.
Revised 10:12:02