The object was to give you a graphic idea of what one was.
1. It isn't that a path gets burnt from the input layer to the output layer. The whole net is involved each time you put in an input. When tuned/trained, the configuration of the net as a whole is such that when you make an input, the 'right' pattern on the output layer is produced.
1. If you have to teach the net the answer, you are in effect programming it.
Compare:
'Here is the solution I want you to produce. Don't come back until you've produced it.'
Whenever you see SH say 'Sh'.
Are they different?
If you were an engineer, you would surely have to make different systems, depending on which of these two you were going to have to do.
___________________________
Supposing you wanted to keep a data bank of faces. (You are the Police, say). The von Neumann machine would need to do this by generating lots of digital data and storing it in the computer's memory.
We can think of the 2D problem, and then imagine doing it in 3D.
You could scan in a picture. This divides the picture up into an array of pixels, and generates a piece of data for each pixel.
So impose a grid and note down for each square its colour.
Lots of pixels, a big array of data.
Then, when a picture comes along and you want to know if it's 'the same face', you can scan the new one in in the same way and check it against what you have in memory. You will have to invoke pattern-matching, of course, since the data you have for one face will never be identical with the data you have for another, and yet it may still be the same person.
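The von Neumann approach just described can be sketched in a few lines of code: each face stored as an array of pixel values, and a new scan compared against the stored ones with a tolerance, since two scans of the same face will never be pixel-for-pixel identical. The names, grids, and the similarity threshold below are invented for illustration; they come from nowhere in the text.

```python
def similarity(face_a, face_b):
    """Fraction of pixels that match within a small tolerance."""
    matches = sum(1 for a, b in zip(face_a, face_b) if abs(a - b) <= 1)
    return matches / len(face_a)

def find_match(new_scan, data_bank, threshold=0.9):
    """Return the name of the best stored face above threshold, else None."""
    best_name, best_score = None, threshold
    for name, stored in data_bank.items():
        score = similarity(new_scan, stored)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

# A tiny 'data bank': each face is a flattened 3x3 grid of grey levels.
data_bank = {
    "suspect_a": [0, 5, 0, 5, 9, 5, 0, 5, 0],
    "suspect_b": [9, 9, 9, 0, 0, 0, 9, 9, 9],
}

# A fresh scan of suspect_a, slightly noisy (one pixel is off by 1):
# the pattern-matching tolerance still finds the right record.
print(find_match([0, 5, 0, 5, 8, 5, 0, 5, 0], data_bank))  # → suspect_a
```

The point of the tolerance is exactly the one made above: exact equality of the stored data would almost never occur, so the matching has to be approximate.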
What would be the connectionist approach?
You scan the face in and put the result onto the input layer. The pattern generated on the output layer is the memory.
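The connectionist alternative can be sketched as a toy feed-forward net: the 'memory' is not a stored array but the configuration of weights of the whole net, and putting a scanned face on the input layer produces a pattern on the output layer. The weights and patterns below are made up for illustration, and the net is 'tuned' by hand rather than trained.

```python
def step(x):
    """Simple threshold activation: a unit fires (1) or not (0)."""
    return 1 if x >= 1.0 else 0

def run_net(inputs, weights):
    """One feed-forward pass. Every input unit feeds every output unit,
    so the whole configuration of weights shapes the output pattern."""
    return [
        step(sum(w * i for w, i in zip(row, inputs)))
        for row in weights
    ]

# Weights 'tuned' so that one input pattern yields output [1, 0]
# and the other yields [0, 1]. In a real net, training would set these.
weights = [
    [0.5, 0.5, 0.0, 0.0],   # output unit 0
    [0.0, 0.0, 0.5, 0.5],   # output unit 1
]

face_a = [1, 1, 0, 0]
face_b = [0, 0, 1, 1]
print(run_net(face_a, weights))  # → [1, 0]
print(run_net(face_b, weights))  # → [0, 1]
```

Notice that no path is 'burnt' from input to output: every weight participates in producing every output pattern, which is the point made earlier about the whole net being involved each time.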
______________________________
'In some manner, devolving from evolution's blind trials and blunders, densely crowded packets of excitable cells inevitably come to represent the world.'
This is how the Churchlands begin their paper 'Stalking the Wild Epistemic Engine'.
They go on: 'But how can a brain be a world-representer? How can brains change so that some of their changes consist in learning about the world?'
1. The 'rationalist' tradition. Emphasises the rule-following, language-like aspect of cognition. Pursued by cognitive/computational psychology.
2. The 'naturalistic' approach.
The unit of analysis is the 'propositional attitude' - such as belief. (A belief is an attitude towards a proposition, or sentence.) The propositional attitude is the paradigm representational state.
In cognitive activity, the transitions between representational states are a function of the logical relations between the contents of those states.
E.g. I might derive the representational state that danger is approaching by putting together 'a lion is approaching' and 'lions are dangerous'.
Such representations and such transitions can be modelled on the von Neumann computer.
(P.301.)
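The rationalist picture of cognition sketched above - sentence-like representations, with transitions driven by the logical relations between their contents - is straightforward to model on a conventional computer. Here is a minimal sketch; the rule format and the facts are invented for illustration, echoing the lion example.

```python
def infer(facts, rules):
    """Repeatedly apply rules of the form (premises, conclusion)
    until no new sentence can be derived. Each transition between
    representational states follows a logical relation between contents."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if all(p in derived for p in premises) and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

facts = {"a lion is approaching"}
rules = [
    ({"a lion is approaching"}, "something dangerous is approaching"),
    ({"something dangerous is approaching"}, "move to safety"),
]
print("move to safety" in infer(facts, rules))  # → True
```

On this view, the psychologist's task would be to discover the actual rules - the program - governing such transitions in us.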
The task of psychology is to work out the program which governs these transitions. And so governs the behaviour of the organism.
The foundations for working out this program are given in folk psychology.
This takes as its starting point animals in general, some of which are enormously simpler than the human being. Language is a latecomer on the biological scene - an extremely recent development - and it seems unlikely that language-like structures will have been involved in cognition before language itself arose.
Pursued by neuroscientists and physiological psychologists.
The naturalist approach is inspired by two thoughts.
One is the conviction that human beings have evolved from very much simpler systems.
The other is that it is normal for first thoughts to be replaced by second thoughts. Folk psychology is a first thought. We should not be in the least surprised if it has to be replaced.
Early theories about light and heat and movement and the heavens etc. etc. were completely wrong.
'The brain is unlikely to have been adequately groped by folk theory in the misty dawn of emerging verbalisation.' (p.302.)
The naturalist suggests we view ourselves as 'epistemic engines'.
A knowledge engine.
We have to build knowledge out of what we sense in the environment and what we know already to inform our behaviour, to keep it well-adjusted to our situation.
'The planet abounds with a wondrous profusion of epistemic engines; building nests and bowers; peeling bark; dipping for termites; hunting wildebeests; and boosting themselves off the planet altogether.... The problem consists in figuring out how epistemic engines work.' (P.302).
In doing so, naturalists suggest de-emphasising language. 'Representations - information-bearing structures - did not emerge of a sudden with the evolution of verbally competent animals.' (p.302)
The Churchlands make much of the metaphor of 'hooking up' to the world. It is a good metaphor.
Think of one of those electric model cars which you direct by remote control. They just do what you tell them.
But if you 'hooked them up' to the actual environment, they would control their own behaviour in the light of what their senses were telling them about their world...
The causal account of how physical brain states could acquire intentionality - could get to stand for things in the environment - is tempting. Our senses pick up that a lion is approaching, and this causes a brain state, which then goes on to cause our muscles to move us to safety. The thought is that this state thus comes to represent the proposition 'the lion is approaching' as a result of being caused by the lion's approach.
Brain states may thus be thought to be 'calibrated'. It's like calibrating a thermometer.
'The backbone of what we are calling calibrational content is the observation that there are reliable, regular, standardised relations obtaining between specific neural responses on the one hand, and types of states of the world.' P.308.
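The thermometer comparison can be made concrete: calibration establishes a reliable, regular mapping from an instrument's raw response to a state of the world, after which the response can be said to represent that state. The numbers and function names below are an invented example, not anything from the paper.

```python
def calibrate(raw_readings, true_temps):
    """Fit the linear relation response = a * temp + b from two known
    reference points, then return a function mapping a raw response
    back to the worldly state (temperature) it is calibrated to track."""
    (r1, r2) = raw_readings
    (t1, t2) = true_temps
    a = (r2 - r1) / (t2 - t1)
    b = r1 - a * t1
    return lambda raw: (raw - b) / a

# Reference points: raw response 10 at 0 C, raw response 110 at 100 C.
to_temp = calibrate((10, 110), (0, 100))
print(to_temp(60))   # → 50.0
```

The analogy the Churchlands invite is that specific neural responses, like the thermometer's readings, stand in standardised relations to types of states of the world - and that is what gives them 'calibrational content'.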
The Churchlands say the causal approach is promising, but stops being so if the assumption is that it is a proposition that has to be represented...
Their own example is of a snake, which has what is called a pit organ. You can imagine the pit organ, and the rest of the nervous system, as tuned - like a net? - to certain features of the environment. It is tuned to go off if a warm moving thing occurs within half a metre.
If you have a set of neural cells which go off in this type of circumstance, you can surely say that these cells represent the presence of a warm moving thing in the environment.
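The pit-organ idea can be sketched as a detector 'tuned' to fire only when a warm, moving thing is within half a metre. Its firing can then be said to represent that feature of the environment, with nothing sentence-like involved. The thresholds and names here are illustrative assumptions.

```python
def pit_organ(temperature_c, speed, distance_m):
    """Fire (return True) iff the stimulus is warm, moving, and close.
    The detector represents 'warm moving thing nearby' simply by being
    reliably caused to fire in exactly that circumstance."""
    warm = temperature_c > 25.0      # warmer than ambient
    moving = speed > 0.0
    close = distance_m <= 0.5        # within half a metre
    return warm and moving and close

print(pit_organ(temperature_c=37.0, speed=0.3, distance_m=0.4))  # → True
print(pit_organ(temperature_c=37.0, speed=0.3, distance_m=2.0))  # → False
```

Note that nothing in this detector has the structure of a sentence: it is just a tuned response, which is the Churchlands' point.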
But, say the Churchlands, that doesn't mean there is anything with the structure of language in the neural system.
What do they say about more sophisticated organisms?
Organisms which have the capacity to learn, like us. They say this:
'Are we not forced to postulate an entire system of representations, manipulable by the creature? It seems that we are.' (P.309)
But there is no need to suppose that all representations in the different subsystems of the brain must be the same. Representation in the visual system, the auditory system, and the motor system may be different.
They then pose the question: Will some subsystems display the familiar structures of human language? They say Yes, for humans no doubt. But it may play a small role only.