Hey! More arguments for people to pick holes in! Please do, I need to know before the lecturer reads it; he's the most cynical, pick-holes-in-anything guy I know!
*******
I spent some time this afternoon thinking about how intelligence could possibly have evolved, using the principles of natural selection and gene 'fitness'. The basic premise of this theory is that traits which allow you to survive, and therefore breed, will be favoured and inherited. I am going to suggest how our intelligence could have evolved in this way, and what questions that raises for the field of AI.
- Human evolution is generally considered to have occurred at a time of unpredictable change. The world as a whole was less stable, the weather more erratic, situations more variable. Also, the remains of early species have been found mostly in the open bush savannah of Africa, which would have contained a lot of large predators and been a fairly challenging environment in which to live.
- Therefore, the more knowledge individuals have about the world around them, the more likely they are to cope (the more 'adaptive' they are), and the more likely they are to breed. By knowledge, I include the base instincts individuals are born with, as well as the facts and information learnt during their lives. This does not mean intelligence; it just means learning and a good memory. Rats, dogs, and all kinds of animals show this behaviour, yet we do not think this means they are intelligent in the way we are.
- If knowledge is a benefit, selection pressures favour having more knowledge. The first species of human, Homo habilis, had a brain 30% larger than the bipedal ape from which it was originally descended, which could indicate a larger store for knowledge than previous species. What also made humans different was their predisposition to form a complex system of communication, language, which meant that more knowledge could be obtained, and faster. Consider chimps and dolphins: we regard them as more intelligent than cats or dogs, or perhaps gibbons and sharks, and one of the major differences is that dolphins and chimps appear to have the rudiments of language.
- However, if individuals have too much knowledge, with no way to access the bits they need quickly enough, then selection will not favour them. If it takes you too long to recall that the animal over there is dangerous, you end up dead.
- So, successful individuals either need a limited amount of knowledge, or a way of managing and sorting the knowledge they do have, which could lead to the emergence of intelligence as an adaptive mechanism. Interestingly enough, the more recent human species Homo sapiens neanderthalensis had a larger brain capacity than the modern human (Homo sapiens sapiens), but with smaller speech centres and forebrain, the brain regions associated with language and intelligent control. The two species are thought to have coexisted, but could it be that Neanderthals evolved in the direction of greater (but limited) knowledge capacity with less developed intelligence? Certainly the tools they used changed very little over the lifetime of the species, in comparison to the rapid advances in technology made by early modern humans.
So, intelligence may have arisen through evolution. It certainly arose somehow, and evolution seems a plausible mechanism, especially since there are differences in brain, mind and behaviour between closely related human species, as well as between other species. But what does all this mean for AI? Anything at all?
It could be that complex communication and interaction between individuals is an important factor in being intelligent. If I had been raised in a box, completely cut off from other humans, would I have any intelligence? It could be that intelligence is indeed an adaptive mechanism acquired by evolution, in which case it is likely to be very complex, messy, and hard to unravel. In a way, it would be reminiscent of legacy computer systems: it may look as though there is a lot of redundant code in these systems, yet no one is sure exactly how the core works, or what would happen if the seemingly unused bits were removed. If the 'code' for intelligence is in any way equivalent to this, will we ever be able to unravel it all? Conversely, if we do manage to 'evolve' code into a mind, or an intelligence, will we understand what we have done?
Strong AI enthusiasts may argue that everything mentioned in my evolutionary theory could conceivably be modelled on a computer: data acquisition, storage, memory, operating systems. Yes, it is quite easy to describe our intelligence in this way. However, there seems to be a big difference between talking about it and actually managing to build it.
Neural network systems and the principles of connectionism point towards systems that can learn, and could therefore perhaps be built into something similar to our minds. The big problem with these set-ups, though, is that once the system has been trained we know almost nothing about what is going on between the inputs and the outputs, which also means that changing the knowledge it contains requires repeating the whole learning process (a toy illustration of this follows).
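To make that point concrete, here is a minimal sketch of the kind of thing I mean (just a toy I made up for illustration, not any particular system): a tiny network learns the XOR function by gradient descent, and it does get the right answers, but the 'knowledge' it has acquired is nothing more than two matrices of numbers. There is no line you can point at and say "that is where it knows XOR", and to teach it a different function you would have to run the whole training loop again.

```python
# Toy sketch: a tiny 2-4-1 feed-forward network learning XOR.
# Purely illustrative; hidden size, learning rate and epoch count are
# arbitrary choices, and a different random seed may need more epochs.
import numpy as np

rng = np.random.default_rng(0)

# Training data: the XOR function on two binary inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights: the network's entire 'knowledge'.
W1 = rng.normal(0, 1, (2, 4))   # input -> hidden
W2 = rng.normal(0, 1, (4, 1))   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain gradient descent on squared error, with backpropagation.
for _ in range(20000):
    h = sigmoid(X @ W1)              # hidden layer activations
    out = sigmoid(h @ W2)            # network output
    err = out - y
    grad_out = err * out * (1 - out)             # error at the output
    grad_h = (grad_out @ W2.T) * h * (1 - h)     # error pushed back to hidden layer
    W2 -= 0.5 * (h.T @ grad_out)
    W1 -= 0.5 * (X.T @ grad_h)

# Outputs should end up close to [0, 1, 1, 0]...
print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))
# ...but the learned 'knowledge' is just numbers, with nothing to inspect.
print(W1)
print(W2)
```

The inputs and outputs are perfectly legible, but everything in between is opaque, which is exactly the worry about building a mind this way.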
Problems may look as though they are easily converted into computer programs, but this is often because we are working with oversimplifications. Again, one thing a computer program needs is a detailed description of what it is meant to do, and the only thing we have, and know for certain, is that our mind has intelligence.
Trying to build strong AI without first studying and observing our own intelligence is like trying to build a plane without ever having looked at a bird.