Introduction to Philosophy: Week 9: The Mind-Computer Analogy

I. Functionalism (the mind as a machine)

You are not required to do the reading on functionalism by Jerry Fodor in IP, pp. 192-196, but have a look at it if you want to know more…

A. What is functionalism?

Functionalism is the theory that mental states are functional states. They are called functional states because they are to be understood in terms of their functions, or their causal roles. There are different versions of functionalism; two currently popular versions are (1) Black Box Functionalism and (2) Computationalism. Both favour an analogy between the mind and an information-processing machine. (Fodor holds an even weaker version of functionalism than these two.)

B. Black Box functionalism (e.g. David Lewis)

The mind is a black box, to be explained solely in terms of inputs and outputs. The internal workings of the black box that transform the input into the output are hidden from our view, and they are of no concern to the theory. Mental phenomena, such as pain, are reducible to the abstract information-processing functions of a black box. (Input-output plus internal information processing.)

EX: coke machine:

Input: 50p ---- Output: a can of coke
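
The black-box idea can be put in a few lines of code. A toy sketch in Python (the price, the product, and the hidden ‘credit’ counter are illustrative stand-ins, not anything from the reading):

    class CokeMachine:
        """Black-box view: what makes this a coke machine is its
        input-output profile (coins in, cans out), not the hidden
        mechanism inside the box."""

        def __init__(self):
            self.credit = 0  # hidden internal state

        def insert(self, pence):
            self.credit += pence        # input: coins
            if self.credit >= 50:       # enough input received
                self.credit -= 50
                return "a can of coke"  # output
            return None                 # not enough input yet

    machine = CokeMachine()
    print(machine.insert(50))  # a can of coke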

By analogy, for human minds: EX: Belief that my cat is hungry

Input = sensory input through the five senses: cat rubbing against my leg, meowing; me checking the clock.

This leads to Output: other mental states (wondering whether it is time to feed him; the belief that he must be hungry) and behaviour (finding the food, opening the tin, etc.).

C. Computationalism (AI; Turing Machine Functionalism; Hilary Putnam and others)

Emphasises the mind-computer analogy: the mind is a living computer. Mind and brain use information-processing algorithms that take sensory and other information as input and compute over it, producing behaviour as output.
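
Read computationally, the cat example from B becomes a small algorithm. A toy sketch (the input labels and rules are invented for illustration):

    def process(sensory_input):
        """From sensory input, via an intermediate 'mental' state
        (a belief), to behavioural output."""
        behaviour = []
        # Intermediate state: a belief computed from the evidence.
        believes_cat_hungry = ("cat rubbing leg" in sensory_input
                               and "meowing" in sensory_input)
        if believes_cat_hungry:
            behaviour += ["find the food", "open the tin"]
        return behaviour

    print(process({"cat rubbing leg", "meowing"}))
    # prints ['find the food', 'open the tin']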

II. Artificial Intelligence (AI) (The computer as a mind)

Functionalism provides a basis for a research programme in artificial intelligence. Here researchers are interested in the reverse analogy: the computer as a mind.

‘Weak AI’ says that the analogy is methodologically helpful: it helps us to better understand how the mind/brain functions. ‘Strong AI’ goes so far as to say that ‘the mind is a computer’ is not an analogy at all. On this view, mechanical models of the mind show us two things: that the mind is really just a living computer; and that computers and other information-processing machines have minds, i.e. they can think.

A. Strong AI: Computers can think

In the 20th century, this idea was given substance by the British mathematician Alan Turing, a pioneer of computer theory. In a famous paper, ‘Computing Machinery and Intelligence’ (1950), he argued that machines could in principle display intelligent linguistic behaviour. Turing supported this thesis by devising a test, now known as the Turing Test: an interrogator holds typed conversations with a hidden computer and a hidden human; if the interrogator cannot reliably tell which is which, the computer passes and should be credited with intelligence.
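
The test can itself be written down as a procedure. A minimal sketch (the stand-in ‘players’ below are placeholders, not real conversational programs):

    import random

    def imitation_game(questions, human_reply, machine_reply, judge):
        """Turing's imitation game: the judge sees only the replies of
        two hidden parties, labelled A and B in random order, and must
        guess which label hides the machine."""
        if random.random() < 0.5:
            a, b, machine_label = human_reply, machine_reply, "B"
        else:
            a, b, machine_label = machine_reply, human_reply, "A"
        transcript = [(q, a(q), b(q)) for q in questions]
        guess = judge(transcript)      # judge returns "A" or "B"
        return guess == machine_label  # True: the machine was caught

    # Illustrative stand-ins:
    questions = ["What is your favourite poem?", "What is 12 x 9?"]
    human = lambda q: "Let me think about that..."
    machine = lambda q: "Let me think about that..."
    judge = lambda t: random.choice(["A", "B"])  # cannot tell them apart

    print(imitation_game(questions, human, machine, judge))

If the judge can do no better than chance over many runs, the machine passes.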

 


B. Searle’s Chinese Room Argument: Computers cannot think

Searle’s argument can be seen as a refutation of computationalism, the type of functionalism that gives rise to Strong AI, and also of the claim, taken on board and strengthened by recent work in AI, that passing the Turing Test demonstrates genuine thought.

His argument begins with a definition of how digital computers work.

The definition consists in giving a formal description of a digital computer’s operations. The computer’s operation can be specified as the steps in a process which manipulates abstract symbols: sequences of 0s and 1s printed on a tape. A rule or command determines that when the computer is in a particular state and has a particular symbol on its tape, it will perform a particular operation. The symbols themselves have no meaning; they are not about anything. They have only formal or syntactic significance in the operation of the computer.
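
What Searle describes is essentially a Turing machine. A toy version in Python (the rule table is invented for illustration; this one merely flips 0s and 1s):

    # (state, symbol) -> (symbol to write, head movement, next state).
    # The machine blindly rewrites symbols according to this table;
    # nothing in it "knows" what the 0s and 1s are about.
    RULES = {
        ("start", "0"): ("1", +1, "start"),  # flip 0 to 1, move right
        ("start", "1"): ("0", +1, "start"),  # flip 1 to 0, move right
        ("start", "_"): ("_", 0, "halt"),    # blank square: halt
    }

    def run(tape):
        tape, state, head = list(tape) + ["_"], "start", 0
        while state != "halt":
            symbol, move, state = RULES[(state, tape[head])]
            tape[head] = symbol
            head += move
        return "".join(tape).rstrip("_")

    print(run("0110"))  # prints 1001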

This fact about the way computers work is what Searle uses to refute the claim that they can think. He simply argues that there is more to having a mind, to having thoughts, consciousness, and mental states, than running formal or syntactic processes.

Minds, mental states, and consciousness are about something. Our internal mental states by definition have content; they have intentionality. There is much more to thought than a string of abstract symbols: the ‘mind has more than a syntax, it has a semantics’ (IP, p. 199). Computer programs have only syntax, a set of formal symbols which lack content. To support this claim Searle uses an example now referred to as ‘The Chinese Room’, a counterexample intended to show that minds are really not like computers. (See IP, pp. 199-200.)
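
The point can be made vivid with a toy program. The rule book below is pure pattern-matching: it pairs input shapes with output shapes and understands nothing (all entries invented for illustration):

    # The rule book pairs incoming symbol strings with outgoing ones.
    # The program matches shapes only; it attaches no meaning to them.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",    # 'How are you?' -> 'Fine, thanks.'
        "你会说中文吗？": "当然会。",    # 'Do you speak Chinese?' -> 'Of course.'
    }

    def chinese_room(squiggles):
        # Look up the input shape; hand back the paired output shape.
        return RULE_BOOK.get(squiggles, "对不起。")  # default: 'Sorry.'

    print(chinese_room("你好吗？"))  # prints 我很好，谢谢。

Like Searle in the room, the program produces appropriate Chinese answers without understanding a word of Chinese.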

III. Computers and human freedom (if time permits)

*******************************************************************

Star Trek: ‘The Measure of a Man’

This episode of Star Trek (The Next Generation series), ‘The Measure of a Man’, asks whether or not an android (a kind of ‘machine’) could possess consciousness, self-awareness, and even freedom.

The following questions will help you to think about the points of the episode that are relevant to the mind-body problem:

1. What does the poker game show?

2. Do you think that Data, the android, has sentience (consciousness, self-awareness)? Why or why not?

3. Do you think that Data is the Federation’s property, just like the Enterprise’s computer? Why or why not?

4. If it can be proved that Data has sentience, does this make us any less human (i.e. more like machines)?

5. What do you think it means to be human? Is it possible to define humans in terms of an essence (a definition specifying necessary characteristics)?