Functionalism 3


Teleological functionalism

Most confusingly, there is another theory of the brain and mind which takes the computer/brain analogy as its basis.

It is not a radically different theory; it builds on the basic vision of machine functionalism rather than junking it.

It insists basically that we need to develop the brain 'program' idea to do better justice to the details we know about the brain as the guidance system of a distinctively biological system.

Its defenders - eg Sober - think that if we do so we find defences against some of the objections to which machine functionalism is vulnerable.

Let's be clear: most confusingly, 'functionalism' is being used in two quite different senses when one speaks of 'machine functionalism' on the one hand and 'teleological functionalism' on the other.

Let me explain the 'function' involved in 'teleological functionalism'.

The meaning of 'functionalism' in 'teleological functionalism': functional analysis

In this sense of 'functionalism', there is such a thing as 'functional analysis'.

The teleological functionalist approach looks on the brain and nervous system as a machine capable of being analysed functionally. That is to say, we can look at it as made up of a set of subsystems, each of which does a particular job in keeping the system as a whole working.

A running car would be a simpler example of something we can analyse functionally. There is a system to feed fuel into the cylinder, a system to keep the pistons positioned ready for each explosion, a system to transmit the up and down movement of the pistons to the wheels and so on.

In any particular car, each of these is done with a particular piece of hardware. The pistons for example might be aluminium in one model and steel in another. Different materials, maybe a different arrangement: but same function.

From the point of view of functional analysis, you could say that cars with petrol engines are all functionally the same. There are lots of differences in how in detail the functions are carried out, but carried out they are. If it is a petrol engine something has to feed the fuel into the cylinders. That is a function that has to be performed. And there are plenty of others - functions that have to be performed if the car is to keep running, but functions which might be carried out in a wide variety of different ways, using different materials, and different designs.

Often what is said is that the same function may be realized in different ways.
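The point can be put in code as well (a sketch of my own, not from the text: the class names and the figure of 7 ml of fuel per unit of air flow are invented for illustration). Two quite different 'hardware' mechanisms realize the one function, feeding fuel to the cylinders:

```python
# Two realizations of one function: "deliver fuel to the cylinders".
# (Illustrative sketch; the classes and numbers are invented.)

class Carburettor:
    """Mechanical realization: fuel drawn in by the air stream."""
    def deliver_fuel(self, air_flow):
        # a venturi jet draws in fuel in proportion to air flow
        return air_flow * 7  # ml of fuel

class FuelInjector:
    """Electronic realization: fuel metered pulse by pulse."""
    def deliver_fuel(self, air_flow):
        # one injector pulse of 7 ml per unit of air flow
        return sum(7 for _ in range(air_flow))  # ml of fuel

def run_engine(fuel_system, air_flow):
    # The engine cares only that the function is performed,
    # not which piece of hardware performs it.
    return fuel_system.deliver_fuel(air_flow)

print(run_engine(Carburettor(), 10))   # 70
print(run_engine(FuelInjector(), 10))  # 70
```

Different materials, different mechanisms, same function - which is all `run_engine` asks for.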

 

PROMPT

Can you think of examples of where the same function is realized in different ways?

Suggestion

 

It has long been suggested that we should think of the brain functionally in the sense I am now explaining - as a machine which it is sensible to analyse functionally. This bit does this, that bit does that.

That is the sense of 'functional' that is in play when we talk of teleological functionalism as a theory of brain/mind.

 


Reprise: machine and teleological functionalism contrasted

How does it differ from the sense of 'functionalism' in 'machine functionalism'?

In machine functionalism you are to think of the program running on the brain. Suppose in the ordinary course of continuously scanning my environment an image of a face registers on my retina.

One subroutine of the brain program, let us imagine, asks for the size of the nose in relation to the rest of the face to be calculated - as part of the project of checking if I know the face.

A cluster of neurons somewhere in the brain takes on this task.

In one sense of 'function', it is carrying out the evaluation of a function. The relative size of the nose will be given by a mathematical expression, and the cluster of neurons will be evaluating this.

The function for calculating the relative size of the nose will be complex. But let's take a simple example. Take x = 2y. x is here a 'function' of y. This means that you can calculate x by applying a rule to y. The rule is: multiply y by 2.

So the cluster of neurons calculating the relative size of the nose is calculating a function. It is applying a rule to the data on the retina and coming up with a result.
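The toy function, and a far simpler cousin of the nose calculation, can be written out directly (a sketch of mine; `relative_nose_size` is an invented stand-in, not the function the brain actually computes):

```python
def x_of(y):
    # The text's toy example x = 2y: apply the rule "multiply by 2".
    return 2 * y

def relative_nose_size(nose_width, face_width):
    # A hypothetical stand-in for the far more complex rule the
    # face-checking subroutine would apply to retinal data.
    return nose_width / face_width

print(x_of(3))                    # 6
print(relative_nose_size(2, 10))  # 0.2
```

In each case the 'calculation' is nothing more than the application of a rule to an input.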

According to the machine functionalist, if I am conscious of scrutinizing the nose at that point, it is this calculation which is this awareness of mine.

When machine functionalism claims that a bit of conscious experience is to be identified with the function being carried out by the neurons, this is the sense of function it is invoking. The program is calculating a function, and this is what is to be identified with our experience of scrutinizing the nose.

Machine functionalism is saying that there are three levels of description appropriate to this phenomenon:

Cluster of neurons firing away

Calculation they are doing (or 'function' they are calculating)

Phenomenal experience of scrutinizing the nose.

 

How does this contrast with the teleological functionalist?

When the face-recognition subroutine is running, suppose that phenomenologically we experience looking at the face. This experience is now identified with the function of face recognition.

The three levels of description are now:

neurons firing

function being performed (face recognition)

phenomenal experience of looking at the face


Calculation and consciousness: a problem with machine functionalism avoided by teleological functionalism

Summary: Would any system performing the calculations performed by the brain be capable of consciousness? Machine functionalism implies Yes.

Machine functionalism identifies the feeling of pain (to return to the simplest example) with the running of a particular sequence of the brain's software. The feeling is the calculation which that sequence of code specifies and which is being carried out.

It is a great strength of this account that the calculation specified in the software is such that it could be carried out on a variety of different hardware platforms. You could have the same software running not on the c-fibres and other cells making up the human brain but on, say, a silicon-based brain - that of the Martian badger. Any system of a certain sophistication could run the software, and carry out this or that particular sequence of it.

That is the strength of the machine functionalist view: it supports the idea that pain might afflict animals which don't have our type of brain. It supports you in the thought that maybe a dolphin, or even a Martian badger, feels pain.

The trouble is, the theory implies that any calculating device that can do what the brain can do is capable of feeling pain, and thus of being conscious.

This is a problem because if we are thinking of the brain as a computer, we can imagine all sorts of systems as capable of being a computer of equivalent power.

For example: we made an adding machine the other day (class demonstration) out of compliant human beings sitting in this room. There was absolutely no problem in setting out the rules which people could follow without any difficulty, and without any understanding of addition.

To make a really powerful computer with human beings you would need an awful lot of them - lots of memory pigeonholes to store a very big program, but in principle it could be done. We made [in the class demonstration] a simple adding machine, and that is the kind of simplicity that suffices for a computer's repertoire of basic operations.
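The adding machine made of rule-followers can be sketched like this (my reconstruction, not a record of the actual class demonstration). Each rule is something a person could follow just by watching raised hands, with no understanding of addition; chaining the rule-followers bit by bit yields an adder:

```python
def sum_rule(a, b, c):
    # "Raise your hand iff an odd number of the three hands you watch are raised."
    return a ^ b ^ c

def carry_rule(a, b, c):
    # "Raise your hand iff at least two of the three hands you watch are raised."
    return (a & b) | (b & c) | (a & c)

def add(x, y, bits=8):
    # Chain the rule-followers bit by bit: a ripple-carry adder.
    carry, result = 0, 0
    for i in range(bits):
        a, b = (x >> i) & 1, (y >> i) & 1
        result |= sum_rule(a, b, carry) << i
        carry = carry_rule(a, b, carry)
    return result

print(add(19, 23))  # 42 - and no individual rule-follower understood addition
```

Nothing in either rule mentions numbers or sums; the arithmetic is done by the organization of the whole.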

Suppose we made one. Suppose we made a computer out of people just following a long - horridly long - sequence of simple unambiguous instructions. And suppose we used this computer to run the program that is currently running on my brain. When my brain is running the program and I pick up a very hot plate it quickly performs the calculation to work out how severe the burn is. I yell, and also have a horrible phenomenal experience.

The machine functionalist says that this horrible sensation I have is the calculation being performed to work out how serious the damage to my tissue is.

But I am asking us to imagine the same calculation, same program, running not on my brain but on the computer I am imagining us having assembled out of people.

It would follow then, would it not, that the horrible burning feeling will exist somehow in the people-computer.

Some argue that this is a reductio ad absurdum of the machine functionalist position.

It is absurd to think that a computer made up of millions of people could have phenomenal experience.

If you don't want to think of a computer made of people, think of one made of cogs and levers - Babbage's Analytical Engine, for example. If we made one of these of the necessary size and speed, and ran the program on it which is now running on my brain, would it have the same phenomenal experiences I have?

Or think of a computer built in the modern way, with silicon chips. What is your answer there?

Teleological functionalism evades this problem

Critics of machine functionalism (eg Elliott Sober) feel that this line of thought is decisive. And they - some of them anyway - expound Teleological Functionalism as not encountering the same difficulty.

For with teleological functionalism, phenomenal experience is identified with the biological function being performed by the relevant subroutine of the program, so that unless the alternative to my brain were a structure organized very similarly to mine, there would be no grounds for thinking it would feel as I do.

So the criticism of machine functionalism is this. For any phenomenal experience, it is committed to maintaining that there is a bit of the program running on the brain which, when what it specifies is actually being carried out by the brain, is that phenomenal experience. This implies that if we can devise another way of getting that particular bit of program run on another platform, that other platform would have the phenomenal experience too.

 

Prompt: Do you see this as a problem for machine functionalism? Or just an implication which may turn out to be true?

 

So (to repeat) machine functionalism implies that if we can devise another way of getting a particular bit of the program which normally runs on the brain to run on another platform, that other platform would have the phenomenal experience too.

Teleological functionalism has no such implication. What it says is that phenomenal experiences are identical with the performance of biological functions. So you wouldn't expect another hardware platform to have phenomenal experience unless it was a system which could be analysed functionally on similar lines to the brain. Only then could you say, eg, the function of this part of the program is face recognition, whether it is running on the brain or on some super-computer. If you can say this, then it follows that when this function is being carried out, if it is identified with the phenomenal experience of 'seeing a face' in the one case, then it must be so identified in the other.

So teleological functionalism is prepared to say that the face-recognition subroutine when it runs on the brain is identical with the phenomenal experience of 'seeing a face', but if you just take that one subroutine and run it on another platform there will be no phenomenal experience. In this case the calculation is being performed on the non-brain platform, but not as part of a whole system that can be analysed functionally.

But if you had a non-brain complete working system, which was so sophisticated you could analyse it functionally such that in fact it turned out to be organized functionally along the same lines as the brain, then yes, you would have to ascribe phenomenal experiences to this system as it ran the program that normally runs on the brain.

[Image: The Matrix (Larry and Andy Wachowski, USA, 1999), pic courtesy Film et Culture]

Consciousness in the Matrix

Think of the Matrix case, where a person's brain is connected up to a computer which inputs data with the object of giving the owner of the brain the belief that they are going about a world in an everyday way.

The brain and its software now have few if any biological functions. Before a person gets matrified, we can say that their damage-assessment subroutine has a function: without it the continuing operation of the system as a whole would be threatened - the survival of the person would be in jeopardy. But once matrified, can we say that the damage-assessment subroutine has a function at all? The matrified system - a person with their brain connected up to the Matrix computer - doesn't need it, and would carry on perfectly well without it.

But for the teleological functionalist, consciousness is tied to biological function. No biological function, no consciousness. They would have to cast doubt then on the idea that a matrified person might be conscious.

The old way of thinking about the Matrix idea was in terms of a 'brain in a vat'. Teleological functionalism seems to be implying that a brain in a vat could not have mental states.

 

Prompt: Do you think there is a difficulty here for the 'brain in a vat' idea, or that the problem is actually one for teleological functionalism?

 

Pointer forward: outstanding issues for teleological functionalism

Intentionality & qualia

 

END


Revised 27:11:02
