Evolutionary Pathology 1
The fitness graph above shows an evolutionary pathology: the x-axis shows generations and the y-axis shows fitness, for a population size of 10. There are sudden drops in fitness.
Why is this?
It is because the fitness function was that Nao should maximise the sum of its Z-accelerometer readings. However, occasionally this results in Nao falling over onto its front! When this happens, the actor molecules that were fit in the supine posture are no longer fit in the prone posture. This seems to have happened three times over the above run, and each time it happens the actors are even worse in the prone posture. One reason is that diversity in the population is small to start with, and it seems to be lost over the run.
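A minimal sketch of the fitness function described above, assuming fitness is simply the sum of Z-accelerometer samples over a trial (the sample values below are illustrative, not real Nao data):

```python
def evaluate_fitness(z_readings):
    """Fitness = sum of Z-accelerometer samples over the evaluation trial."""
    return sum(z_readings)

# Lying supine (chest up), the Z axis reads strongly positive...
supine_trial = [9.5, 9.6, 9.4, 9.7]

# ...but after falling prone, the same controller's readings invert,
# which is what produces the sudden fitness drops seen in the graph.
prone_trial = [-9.3, -9.5, -9.4, -9.2]

supine_fitness = evaluate_fitness(supine_trial)  # high
prone_fitness = evaluate_fitness(prone_trial)    # low (negative)
```

The pathology follows directly: a single fitness function is being used to score actors across two posture contexts in which the same behaviour earns opposite rewards.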
What are the ways around this problem? Sometimes the robot gets itself into a position from which the reset movement back to the supine position no longer works, leaving the robot in a different posture context. That context really deserves to be a new niche, in which actor molecules are assessed for that niche rather than competing with the original posture niche (context). How can new posture niches be established?
One solution is from the domain of learning classifier systems (XCS). See the beautiful algorithm of Butz and Wilson in the link above. John Holland, as well as inventing GAs, invented learning classifier systems, which are intended to model the brain. We've shown that they can be used for simple language learning, see here.
1. Each sensory atom encodes an explicit region of posture space in which it is active and in which it is subject to being assessed and replaced in a microbial tournament. If this sensory condition is not met, it is not subject to selection in this round. If no atom exists that is valid in a given sensory condition, then a 'covering mechanism' generates a new set of atoms for that condition. This would result in the emergence of species in the same geographical location, each tailored to a specific sensory condition, e.g. a posture niche. The sensory conditions can be defined over subsets of sensory states, NOT the whole sensory space, i.e. this is equivalent to including a HASH (don't-care) character as in standard XCS algorithms. By this means, self-specified selection islands are created.
2. Some additional selective pressure must then be exerted to remove molecules that are never activated; otherwise, evolving unusual or very tight sensory conditions would be a recipe for neural immortality.
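The condition-matching and covering mechanism in point 1 can be sketched as follows. This is an illustrative reduction of the XCS idea to bit-string conditions, not the Butz and Wilson implementation; the `actor` field is a hypothetical stand-in for an actor molecule's payload:

```python
import random

HASH = '#'  # wildcard: the condition ignores this sensory bit, as in XCS

def matches(condition, sensory_state):
    """A condition matches when every non-# position equals the state bit."""
    return all(c == HASH or c == s for c, s in zip(condition, sensory_state))

def match_set(population, sensory_state):
    """Only molecules whose condition matches are eligible for selection,
    creating self-specified selection islands (posture niches)."""
    return [m for m in population if matches(m['condition'], sensory_state)]

def cover(sensory_state, p_hash=0.33):
    """Covering: if no molecule matches the current state, create one for
    it, generalising each bit to # with probability p_hash."""
    condition = ''.join(HASH if random.random() < p_hash else s
                        for s in sensory_state)
    return {'condition': condition, 'actor': None}  # actor filled in later

population = [{'condition': '1#0', 'actor': 'kick'},
              {'condition': '0##', 'actor': 'right'}]

state = '110'                        # e.g. a supine-posture sensor reading
eligible = match_set(population, state)
if not eligible:                     # no molecule valid here: new niche
    population.append(cover(state))
```

A covered molecule always matches the state that triggered it, so a novel posture (such as prone-after-a-fall) immediately gets its own atoms rather than corrupting the supine specialists.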
A comment from Chris Jack...
Reflexive attention diversion?
Please see my most recent blog post; the bit about robot limitations is at the end. http://itsmrjack.com/2013/04/15/week-2-report-2/
In this case I think: why shouldn't the sudden flip of position shift attention to the actor domain dealing with that position? Or is this a bit simplistic?
I know that when I used to fall off my BMX (I used to ride quite seriously) it would be at speed and with force. Should something have gone wrong in whatever stunt I was doing and my position end up in an unpredicted, unstable state, I could not have helped my attention being turned immediately to the act of landing (preferably on all fours).
Chris Jack
art/research
itsmrjack.com
disc jockeying/sound art
soundcloud.com/itsmrjack
I think the attentional spotlight is a good idea, to focus limited processing resources to sensory regions of predicted salience.
At the moment, there are no conditional actions being activated. Each molecule in the population is tested randomly and there are no sensory conditions that limit when a molecule is active.
The sensory conditions I describe would act as attentional spotlights in a sense, because each time a new molecule is going to take control, ONLY those molecules whose sensory conditions are currently satisfied will be available to choose from.
Those molecules pay attention to specific parts of the sensory space, and only get active if those conditions are met.
Of course, those attentional fields/conditions are evolvable properties of the molecule. So you evolve what you pay attention to.
But you're talking about INSTRUCTED attentional shifts. Yes, that's exactly what the COVERING MECHANISM in XCS does. Read that XCS paper in the link below…
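A sketch of how a niche-restricted microbial tournament with evolvable attention fields might look, under the assumptions above: two molecules compete only if both are active in the current sensory context, and the loser copies the winner's genome with mutation, including the condition itself, so what a molecule "pays attention to" is itself evolved. Field names and the mutation scheme are illustrative, not taken from an actual implementation:

```python
import random

def matches(condition, state):
    """# is a wildcard; every other position must equal the state bit."""
    return all(c == '#' or c == s for c, s in zip(condition, state))

def mutate_condition(condition, state, rate=0.1):
    """Each bit may toggle between specific and wildcard; specialising a
    bit copies the current sensory value so the offspring still matches."""
    return ''.join((s if c == '#' else '#') if random.random() < rate else c
                   for c, s in zip(condition, state))

def microbial_tournament(population, state):
    """Compete only within the niche of molecules active in this state."""
    niche = [m for m in population if matches(m['condition'], state)]
    if len(niche) < 2:
        return  # too few molecules in this posture niche to compete
    a, b = random.sample(niche, 2)
    winner, loser = (a, b) if a['fitness'] >= b['fitness'] else (b, a)
    loser['genome'] = winner['genome']  # microbial infection of the loser
    loser['condition'] = mutate_condition(winner['condition'], state)

population = [
    {'condition': '##', 'genome': 'A', 'fitness': 5.0},
    {'condition': '##', 'genome': 'B', 'fitness': 1.0},
]
microbial_tournament(population, '10')
# The lower-fitness molecule now carries the winner's genome, but
# molecules outside the current niche would have been left untouched.
```

Because selection never crosses niche boundaries, actors fit in the supine context cannot be displaced by tournaments held in the prone context, which is exactly the separation the post argues for.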