Sunday 6 January 2008

If the Brain is a computer...

If the brain is a computer and the mind its workings, is this a fitting analogy for the computer and its software? What would happen if we had dedicated computers with huge numbers of neuron circuits? Would intelligence develop? Would we be able to understand it?

"If the brain is a computer and the mind its workings" is not a particularly fitting analogy for a computer and its
software. Certainly you could see how the latter represents the former, but a computer just does as it is told,
rather than being able to figure things out for itself.

The question "would intelligence develop?" has two answers, I think. Studies have shown that intelligence (and we
must bear in mind that there is more than one definition of this) can develop inside closed environments, inside
'worlds' that are subject only to a pre-defined set of stimuli. In his dissertation The Evolutionary Emergence
Route to Artificial Intelligence, Alastair Channon sets out to "develop AIs that can grasp profoundly new
situations on their own". He concludes that "some of the observed behaviours could indeed be considered
intelligent, if only at a very low level" (1996). Within the system boundaries (a computer program) intelligence
existed, but it is unlikely that this intelligence would pose a threat to the modern world: it has no way, for
example, to keep itself alive if the power to the computer it is running on is cut.

The problem is that much of our experience of Artificial Intelligence comes from 'Expert Systems' rather than from
the world that we all live in. A neural network that decides whether my mortgage application can be approved is
intelligent, but only within the boundaries of learning how to process data related to a mortgage application. If
it were fed data about a blood test, or the communication between two computers, it would still try to decide
whether to approve a mortgage or not.
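To make that point concrete, here is a toy sketch of such a system: a single perceptron trained on two made-up mortgage features (income and existing debt). The feature names, data and thresholds are all invented for illustration, not taken from any real lender's system. The point is in the last line: hand it numbers from a completely different domain and it still answers the only question it knows how to ask.

```python
# Toy perceptron "mortgage approver". All data here is illustrative.

def train(samples, labels, epochs=20, lr=0.1):
    """Learn weights and bias with the classic perceptron update rule."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # 0 when correct; +/-1 when wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def approve(w, b, x):
    """True = approve the 'mortgage', False = decline."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b > 0

# Features: (income, existing debt), both in invented units.
samples = [(5, 1), (6, 0), (2, 4), (1, 5)]
labels  = [1, 1, 0, 0]   # 1 = approve, 0 = decline
w, b = train(samples, labels)

print(approve(w, b, (7, 0)))     # high income, no debt -> True
# Fed blood-test numbers instead (haemoglobin, white cell count, say),
# it still "approves" or "declines" a mortgage that does not exist:
print(approve(w, b, (4.2, 13.6)))
```

The network has no idea what a mortgage is; it has only learned a boundary through one narrow slice of data, which is exactly the limitation described above.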

"When we examine very simple level intelligence we find that explicit representations and models of the world
simply get in the way. It turns out to be better to use the world as its own model" (Brooks, 1991). Brooks argues
that we need to build intelligence in components - starting with a very simple autonomous system that is
intelligent in the real world, and then building on this. This approach gives agents basic survival skills -
knowing how to avoid danger, knowing how to find the equivalent of food and water and where to find it. Then
higher levels of intelligence are built on these basic layers, the process mirroring human biological evolution
from single cells to beings comprised of billions of interconnected neurons. I think this biological approach is
how machines will become intelligent, rather than simply stringing together thousands of processors into a neural
network.
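Brooks's layered idea can be sketched in a few lines. This is only a caricature of his subsumption architecture, with invented sensor names and behaviours: each layer is a simple rule, the most vital (survival) layers are consulted first, and a higher layer gets to act only when everything below it has nothing to do.

```python
# A minimal sketch of layered, Brooks-style control.
# Sensor keys and layer names are invented for illustration.

def avoid_danger(senses):
    """Lowest layer: survival comes first."""
    if senses.get("obstacle_ahead"):
        return "turn away"
    return None  # nothing to do; defer to the next layer up

def seek_energy(senses):
    """Next layer: the equivalent of finding food and water."""
    if senses.get("battery_low"):
        return "head to charger"
    return None

def explore(senses):
    """Highest layer: free to act only when survival needs are met."""
    return "wander"

LAYERS = [avoid_danger, seek_energy, explore]  # most vital first

def act(senses):
    for layer in LAYERS:
        action = layer(senses)
        if action is not None:
            return action

print(act({"obstacle_ahead": True, "battery_low": True}))  # turn away
print(act({"battery_low": True}))                          # head to charger
print(act({}))                                             # wander
```

Note there is no model of the world in there at all: the agent just reacts to what its senses report, and "intelligence" emerges from the stacking of simple behaviours - which is the layered, bottom-up route the paragraph above describes.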

References:

Channon, A. (1996) The Evolutionary Emergence Route to Artificial Intelligence [Online] University of Sussex.
Available from: http://www.channon.net/alastair/msc/adc_msc.pdf (Accessed 6 January 2008)

Brooks, R. A. (1991) Intelligence without representation [Online] Cambridge: MIT.
Available from: http://pigeonrat.psych.ucla.edu/200C/Brooks%201991%20Intel%20without%20rep.pdf (Accessed 6 January 2008)
