With the help of Alex Churchill I have realised that the Darwinian Cognitive Architecture proposed here is actually an advance on an older piece of work called VEGA (the Vector Evaluated Genetic Algorithm), which does multi-objective optimisation. This is a way of evolving solutions that satisfy MANY goals/games, not just one. That is precisely what I am doing, with the additional factor that over time I am also evolving the games themselves, modifying them so that they optimise a property of the population of solutions. But the short-term dynamics for a fixed population of games are best understood within the MOOP framework, so we should not reinvent VEGA, but learn from what other great minds have achieved before our own lovely minds tried :)
Let's read some stuff on MOO, VEGA, etc...
http://politespider.com/papers/general/The%20Good%20of%20the%20Many%20Outweighs%20the%20Good%20of%20the%20One%20Evolutionary%20Multi-Objective%20optimisation.pdf
VERY EXCITING REFERENCE BELOW!!!!
Gosh, Lawrence Fogel in 1966 basically already had the idea that intelligence arises through real artificial evolution in the brain... [None of this Edelmanism nonsense without replicators!]
http://web.cecs.pdx.edu/~mperkows/CLASS_479/LECTURES479/EVO02.PDF
http://books.google.co.uk/books/about/Artificial_Intelligence_Through_Simulate.html?id=75RQAAAAMAAJ&redir_esc=y
This really is a lovely set of experiments by Lawrence Fogel from 1966. Very elegant work, I think. He and his colleagues used MOO to weight prediction accuracy against machine complexity in the task of predicting the next bit in a binary sequence: finite state machines were evolved to make these predictions.
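To make this concrete, here is a minimal sketch of that kind of setup. It assumes a simple FSM encoding where fsm[state][input_bit] gives a (next_state, predicted_bit) pair; the penalty constant LAMBDA and all the helper names are my own illustrative choices, not Fogel's exact formulation.

import random

LAMBDA = 0.01  # complexity penalty per state, trading accuracy against size

def prediction_accuracy(fsm, bits, start_state=0):
    """Fraction of next-bit predictions the machine gets right."""
    state, correct = start_state, 0
    for i in range(len(bits) - 1):
        state, prediction = fsm[state][bits[i]]
        if prediction == bits[i + 1]:
            correct += 1
    return correct / (len(bits) - 1)

def fitness(fsm, bits):
    """Weight prediction accuracy against machine complexity (state count)."""
    return prediction_accuracy(fsm, bits) - LAMBDA * len(fsm)

def random_fsm(n_states):
    """A random machine: two (next_state, prediction) entries per state."""
    return [[(random.randrange(n_states), random.randrange(2)) for _ in range(2)]
            for _ in range(n_states)]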
I believe a natural extension to multiple games/objectives is to consider harder prediction problems, e.g. predicting n steps into the future. In that case one would have to choose which subpopulation of prediction problems to concentrate one's effort on, and this is the kind of goal/game selection that I am referring to.
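As a hedged sketch of what I mean, one could define a family of prediction games indexed by horizon, reusing the FSM encoding from the sketch above. The name make_game and the free-running scheme (the machine is fed its own predictions for the intermediate steps) are my own assumptions about what an n-step game could look like.

import random

def make_game(horizon, bits):
    """Return an objective: accuracy at predicting `horizon` steps ahead."""
    def game(fsm, start_state=0):
        state, correct = start_state, 0
        for i in range(len(bits) - horizon):
            s, b = state, bits[i]
            for _ in range(horizon):
                s, b = fsm[s][b]  # feed the machine its own prediction
            if b == bits[i + horizon]:
                correct += 1
            state, _ = fsm[state][bits[i]]  # advance on the true bit
        return correct / (len(bits) - horizon)
    return game

training_bits = [random.randrange(2) for _ in range(200)]  # stand-in data
games = [make_game(n, training_bits) for n in (1, 2, 4, 8)]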
Then comes David Schaffer (1984), whose thesis introduced VEGA itself. It is a MOO method which simultaneously evolves solutions that optimise each of the objectives separately and uniquely. This speciation is considered a weakness for MOO, but it might well be a good thing for game evolution, because we want solutions to specialise on games, not to mix them: the games may be quite contradictory, and we may not want them played at the same time. It makes NO sense to mix the game of standing up with the game of sitting down, for example. They are mutually exclusive games.
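For reference, here is a minimal sketch of VEGA-style selection as I read Schaffer's scheme: the mating pool is filled in k equal slices, slice i chosen by fitness-proportional selection on objective i alone, then shuffled before crossover. Objectives are assumed to return non-negative scores, and all names are my own.

import random

def vega_select(population, objectives, pool_size):
    pool = []
    slice_size = pool_size // len(objectives)
    for obj in objectives:
        weights = [obj(ind) for ind in population]
        # Proportional selection within this objective's slice only.
        pool.extend(random.choices(population, weights=weights, k=slice_size))
    random.shuffle(pool)  # mix the slices before pairing for crossover
    return pool

The point for my purposes is that no individual is ever scored on a blend of games; each slice sees one game only, which is exactly why VEGA speciates.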
The notes below are from Deb's chapter of the title linked above.
See also: Laumanns et al. (1998), the predator (game) - prey (solution) evolution strategy.
See also: Fourman (1985), binary tournament selection with a random choice of game per tournament.
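That one is even simpler to sketch; assuming binary tournaments, each decided by a single randomly chosen game (names again my own):

import random

def fourman_select(population, objectives, pool_size):
    pool = []
    for _ in range(pool_size):
        game = random.choice(objectives)   # random game for this tournament
        a, b = random.sample(population, 2)
        pool.append(a if game(a) >= game(b) else b)
    return pool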
Island Model as an alternative to VEGA
On the suggestion of Alex Churchill, it seems an Island Model might be expected to do just as well as VEGA, if all we want is distinct species that specialise on tasks. I suggested that it might be interesting to bias migration between islands on the basis of the fitness of individuals: if an individual on island A scores well on island B's fitness function, then it has a higher probability of migrating to island B than to island C, where it knows it can't play that game.
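Here is a hypothetical sketch of that biased-migration idea. The names, the migration rate, and the assumption that games return non-negative scores are all my own choices; this is not a standard island-model recipe.

import random

def biased_migrate(islands, games, migration_rate=0.05):
    """islands[j] is a population; games[j] scores an individual on
    island j's fitness function."""
    snapshots = [list(pop) for pop in islands]  # freeze before moving anyone
    for i, pop in enumerate(snapshots):
        for ind in pop:
            if random.random() >= migration_rate:
                continue
            # Score the individual on every *other* island's game.
            scores = [games[j](ind) if j != i else 0.0
                      for j in range(len(islands))]
            if sum(scores) <= 0:
                continue  # it can't play any other game, so it stays home
            # Destination chosen with probability proportional to score.
            dest = random.choices(range(len(islands)), weights=scores)[0]
            islands[i].remove(ind)
            islands[dest].append(ind)
    return islands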
Island model references.
http://neo.lcc.uma.es/Articles/WRH98.pdf
Will try to program Brian neuronal network atoms tomorrow, inspired by the Otters in London Zoo today!