10th January 2013
1. motorBabbling5.py available here implements CMA-ES (the black-box optimisation algorithm provided by cma.fmin), which optimises the weights/parameters of a 3-layer feed-forward neural network (coded in PyBrain). The fitness function is the sum of the F-statistics of the Granger causality tests between motor states and sensor states (counted only when the test is significant at the 0.05 level), where the sensor states are also the inputs to the actor. This was intended to select for networks that maximise the causal influence of the motors on the input sensors.
Currently it is not working, because the Granger causality test from the Python library statsmodels does not seem to give sensible results on the very short time series (20 samples) available from an episode in which the arm is moved by the network for 20 time steps from a supine position resting on the chest of the robot (simulated in Webots for Nao, or using the real Nao connected over a USB cable). A sketch of the intended loop follows.
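Below is a minimal sketch of that optimisation loop, assuming the cma, statsmodels and PyBrain packages; run_episode() is a hypothetical stand-in for the Webots/NAOqi episode, and the network sizes are illustrative rather than the ones used in motorBabbling5.py. Note also that with only 20 samples the F-test inside grangercausalitytests has very few degrees of freedom, which is consistent with the unreliable results described above.

import cma
import numpy as np
from pybrain.tools.shortcuts import buildNetwork
from statsmodels.tsa.stattools import grangercausalitytests

N_SENSORS, N_HIDDEN, N_MOTORS = 5, 10, 5          # illustrative sizes
net = buildNetwork(N_SENSORS, N_HIDDEN, N_MOTORS)

def run_episode(net, steps=20):
    # Hypothetical stand-in for the Webots/NAOqi episode: feed the current
    # sensor vector to the network, send its outputs to the joints, and
    # record both time series. Random values are used here as placeholders.
    sensors, motors = [], []
    s = np.zeros(N_SENSORS)
    for t in range(steps):
        m = net.activate(s)
        s = s + 0.1 * np.random.randn(N_SENSORS)
        sensors.append(s.copy())
        motors.append(np.array(m))
    return np.array(sensors), np.array(motors)

def fitness(params):
    net._setParameters(np.array(params))
    sensors, motors = run_episode(net)
    total_f = 0.0
    for i in range(N_MOTORS):
        for j in range(N_SENSORS):
            # Column order matters: the test asks whether column 2 (motor)
            # Granger-causes column 1 (sensor).
            data = np.column_stack([sensors[:, j], motors[:, i]])
            res = grangercausalitytests(data, maxlag=1, verbose=False)
            fstat, pval = res[1][0]['ssr_ftest'][:2]
            if pval < 0.05:
                total_f += fstat
    return -total_f                                # cma.fmin minimises

result = cma.fmin(fitness, net.params.copy(), 0.5, options={'maxiter': 50})
best_weights = result[0]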
2. motorBabbling6.py available here adds extra sensory inputs as follows...
# Read ultrasound, hand-touch, joint electric-current and inertial sensor
# values from ALMemory in a single call.
sen = self.memoryProxy.getListData(["Device/SubDeviceList/US/Left/Sensor/Value",
"Device/SubDeviceList/US/Right/Sensor/Value",
"Device/SubDeviceList/LHand/Touch/Back/Sensor/Value",
"Device/SubDeviceList/LHand/Touch/Left/Sensor/Value",
"Device/SubDeviceList/LHand/Touch/Right/Sensor/Value",
"Device/SubDeviceList/LShoulderPitch/ElectricCurrent/Sensor/Value",
"Device/SubDeviceList/LShoulderRoll/ElectricCurrent/Sensor/Value",
"Device/SubDeviceList/LElbowYaw/ElectricCurrent/Sensor/Value",
"Device/SubDeviceList/LElbowRoll/ElectricCurrent/Sensor/Value",
"Device/SubDeviceList/LWristYaw/ElectricCurrent/Sensor/Value",
"Device/SubDeviceList/InertialSensor/GyroscopeX/Sensor/Value",
"Device/SubDeviceList/InertialSensor/GyroscopeY/Sensor/Value",
"Device/SubDeviceList/InertialSensor/GyroscopeZ/Sensor/Value",
"Device/SubDeviceList/InertialSensor/AccelerometerX/Sensor/Value",
"Device/SubDeviceList/InertialSensor/AccelerometerY/Sensor/Value",
"Device/SubDeviceList/InertialSensor/AccelerometerZ/Sensor/Value",
"Device/SubDeviceList/InertialSensor/AngleX/Sensor/Value",
"Device/SubDeviceList/InertialSensor/AngleY/Sensor/Value"
])
However, the problem here is that the actor FFN becomes unwieldy, with 295 parameters for a 10-neuron hidden layer (presumably 23 inputs and 5 outputs: 23×10 + 10 + 10×5 + 5 = 295, counting biases). A simpler actor function is required which is sparse, i.e. is a function of only some subset of the sensory states. Such subsets could also be specified in the genome of the actor, but these would be discrete-valued genes, rather than the continuous-valued genotype that CMA-ES operates on. Perhaps CMA-ES could be used in a hybrid manner, optimising the actor's weights once the inputs/outputs had been chosen by another optimisation algorithm (e.g. a genetic algorithm). A rough sketch of this hybrid scheme follows.
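As a hedged sketch of that hybrid scheme: a discrete genome picks boolean masks over sensors and motors, and CMA-ES then only optimises the weights of the resulting small network. evaluate_weights() is a hypothetical placeholder for an episode-based fitness such as the Granger-causality score sketched earlier.

import cma
import numpy as np
from pybrain.tools.shortcuts import buildNetwork

N_SENSORS, N_MOTORS, N_HIDDEN = 23, 5, 10

def random_subset_genome():
    # Discrete part of the genome: boolean masks over sensors and motors.
    return (np.random.rand(N_SENSORS) < 0.3,
            np.random.rand(N_MOTORS) < 0.5)

def evaluate_weights(params, net, sensor_mask, motor_mask):
    # Hypothetical placeholder for an episode-based fitness (e.g. the negated
    # Granger-causality score above), so that this sketch runs as written.
    net._setParameters(np.array(params))
    return float(np.sum(np.array(params) ** 2))

def inner_cmaes(sensor_mask, motor_mask):
    n_in, n_out = int(sensor_mask.sum()), int(motor_mask.sum())
    if n_in == 0 or n_out == 0:
        return float('inf')                        # degenerate genome
    net = buildNetwork(n_in, N_HIDDEN, n_out)      # smaller than the full 295-parameter network
    res = cma.fmin(lambda p: evaluate_weights(p, net, sensor_mask, motor_mask),
                   net.params.copy(), 0.5, options={'maxiter': 20})
    return res[1]                                  # best (minimised) fitness found

# Outer loop: a GA would mutate/recombine the masks; here just a few random genomes.
genomes = [random_subset_genome() for _ in range(5)]
scores = [inner_cmaes(s, m) for s, m in genomes]
best_genome = genomes[int(np.argmin(scores))]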
To investigate this properly, algorithms are needed that can...
1. choose subsets of sensory inputs and motor outputs for each actor.
2. choose subsets of sensory states for each predictor/compressor.
3. choose subsets of goals/tasks to aim to achieve (i.e. goal-specific fitness functions).
motorBabbling6.py is also modified to take in all the sensed joint angles, and by default its motor outputs specify the desired actuation angles of all the joints. [It may also be possible to define other low-level motor command formats, e.g. Cartesian coordinate commands, which are another way of representing motor output at the lowest level. Which type of motor encoding is best would be left to the actors to evolve, based on whatever fitness function is chosen to select for actors.] The two encodings are illustrated below.
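For concreteness, here is how the two encodings might be commanded through the NAOqi ALMotion proxy. The joint-space call (setAngles) is the one used so far; the Cartesian call shown (positionInterpolation with the torso frame and a position-only axis mask) is my assumption about the appropriate API and should be checked against the installed NAOqi version. The target values are arbitrary examples.

from naoqi import ALProxy

motion = ALProxy("ALMotion", "127.0.0.1", 9559)

# Joint-space encoding: desired angles (radians) for a subset of left-arm joints.
names = ["LShoulderPitch", "LShoulderRoll", "LElbowYaw", "LElbowRoll", "LWristYaw"]
angles = [0.5, 0.2, -0.5, -0.3, 0.0]
motion.setAngles(names, angles, 0.2)              # 0.2 = fraction of maximum speed

# Cartesian encoding (assumed API): move the left-arm end effector towards a
# 6D target (x, y, z, wx, wy, wz) expressed in the torso frame.
effector = "LArm"
FRAME_TORSO = 0
AXIS_MASK_POSITION = 7                            # control x, y, z only
target = [0.15, 0.10, 0.25, 0.0, 0.0, 0.0]
motion.positionInterpolation(effector, FRAME_TORSO, [target],
                             AXIS_MASK_POSITION, [2.0], True)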
When the entire sm(t) space of the robot is included, a single FFN actor that takes in all sensory states and tries to control all motors is unworkable: I think it would simply be impossible to optimise a whole-body FFN controller in one go, because the dimensionality is far too high. The solution is clear: select for an actor that uses only a subset of the sensory inputs and controls, via its FFN, only a subset of the motors. An actor should also specify the sensory states that it is trying to influence (these should be different from the sensory states that are inputs to the actor itself); the actor obtains fitness based on some function of those observed sensory states.
To Do on 11th Jan:
1. Add all body sensors as inputs
2. Add all body motor actuation commands as outputs
3. Create a population of actors, each defined discretely by a sensory input stream, a motor output stream, and an observed sensory stream. The fitness of an actor will be some function of its observed sensory stream while that actor is acting, e.g. the Granger causality between its motor outputs and the observed sensory stream.
4. Use a genetic algorithm to evolve actors on the basis of the above fitness function. What kinds of actors evolve? For now it might be sufficient to keep each actor's weights random and unchanged, so that each actor is effectively a random motion; selection then favours the subset of random motions that are causally effective, i.e. actors with high Granger causality on their observed sensory dimensions. This seems like a good first step in chunking the control function into meaningful components (a rough sketch follows below).
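A minimal sketch of such an actor definition and its Granger-causality fitness, assuming statsmodels and PyBrain. run_episode() is again a hypothetical placeholder for the Webots/NAOqi episode, and the subset sizes and population size are arbitrary illustrative choices; a GA would then select the fittest actors and mutate their index subsets.

import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests
from pybrain.tools.shortcuts import buildNetwork

N_SENSORS, N_MOTORS, N_HIDDEN = 23, 5, 10

class Actor(object):
    def __init__(self):
        # Discrete genome: which sensors feed the network, which motors it
        # drives, and which sensory dimensions its fitness is observed on.
        self.inputs = sorted(np.random.choice(N_SENSORS, 4, replace=False))
        self.outputs = sorted(np.random.choice(N_MOTORS, 2, replace=False))
        self.observed = sorted(np.random.choice(N_SENSORS, 3, replace=False))
        # Fixed random weights: each actor is effectively a random motion.
        self.net = buildNetwork(len(self.inputs), N_HIDDEN, len(self.outputs))

    def act(self, full_sensor_vector):
        return self.net.activate(full_sensor_vector[self.inputs])

def run_episode(actor, steps=20):
    # Hypothetical episode loop returning the recorded time series;
    # random values stand in for the real sensor readings.
    sensors, motors = [], []
    s = np.zeros(N_SENSORS)
    for t in range(steps):
        m = actor.act(s)
        s = s + 0.1 * np.random.randn(N_SENSORS)
        sensors.append(s.copy())
        motors.append(np.array(m))
    return np.array(sensors), np.array(motors)

def gc_fitness(actor):
    # Sum of significant F-statistics of motor -> observed-sensor causality.
    sensors, motors = run_episode(actor)
    total = 0.0
    for i in range(motors.shape[1]):
        for j in actor.observed:
            data = np.column_stack([sensors[:, j], motors[:, i]])
            tests = grangercausalitytests(data, maxlag=1, verbose=False)
            f, p = tests[1][0]['ssr_ftest'][:2]
            if p < 0.05:
                total += f
    return total

population = [Actor() for _ in range(20)]
fitnesses = [gc_fitness(a) for a in population]
# Selection/mutation of the index subsets would go here.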