Friday, 15 February 2013

Some useful AL Modules for NAO in Python

1. Behaviours can be installed on the robot with Choregraphe, and can be executed from a Python script or module using ALBehaviorManager.

from naoqi import ALProxy

# Replace the address/port with your robot's (9559 is the default NAOqi port).
p = ALProxy("ALBehaviorManager", "nao.local", 9559)
p.runBehavior("behaviourManagerExampleProgram")

So primitive behaviours for actors to execute, such as reflexes, can be installed on the robot.

2. NAOMarks can tag the environment, and help the visual system recognise it. ALLandMarkDetection reads off the mark number. The ALMemory key is LandmarkDetected. It can be subscribed to in the same way as FaceDetected etc... It is basically a simple way of doing object recognition.
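A minimal sketch of pulling the mark numbers out of the LandmarkDetected value. The nested-list layout (timestamp, then one [shape_info, [mark_id]] entry per detected mark) follows the Aldebaran documentation; the canned sample value below, and the robot address in the comment, are stand-ins, since reading the real key needs a live ALProxy("ALMemory"):

```python
def extract_mark_ids(landmark_value):
    """Pull NAOMark IDs out of a LandmarkDetected memory value.

    The value is [] when no mark is seen; otherwise its second element
    holds one entry per detected mark, each of the form
    [shape_info, [mark_id]].
    """
    if not landmark_value:
        return []
    mark_infos = landmark_value[1]
    return [info[1][0] for info in mark_infos]

# On the robot you would poll the real memory instead:
#   from naoqi import ALProxy
#   memory = ALProxy("ALMemory", "nao.local", 9559)
#   ids = extract_mark_ids(memory.getData("LandmarkDetected"))
# Here a canned value stands in for a detection of marks 68 and 80:
sample = [
    [1360900000, 123456],                      # timestamp
    [[[0, 0.1, -0.2, 0.05, 0.05, 0.0], [68]],  # shape info + mark ID
     [[0, 0.3, 0.1, 0.04, 0.04, 0.0], [80]]],
    "CameraTop",                               # (remaining fields elided)
]
print(extract_mark_ids(sample))  # -> [68, 80]
print(extract_mark_ids([]))      # -> []
```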

3. Movement detection API (the mean position and velocity of the moving pixels are determined). These may be interesting features to use. MovementDetected is the ALMemory key.

Basically it is critical that the action atoms get access to ALMemory keys, and to receive event callbacks they have to be Python MODULES.
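The subscribe-and-callback pattern looks roughly like the sketch below. Since naoqi is only importable with the SDK on the path, a tiny stand-in memory class is used here to show the mechanics; on the robot the watcher would subclass naoqi.ALModule, be registered with an ALBroker, and call ALProxy("ALMemory").subscribeToEvent(...) passing its own module name as a string:

```python
class FakeMemory(object):
    """Stand-in for ALMemory: records subscribers and raises events."""
    def __init__(self):
        self.subscribers = {}

    def subscribeToEvent(self, event, module, callback_name):
        # The real API takes the module's registered NAME (a string);
        # the object itself is stored here only to keep the sketch small.
        self.subscribers.setdefault(event, []).append((module, callback_name))

    def raiseEvent(self, event, value):
        for module, callback_name in self.subscribers.get(event, []):
            getattr(module, callback_name)(event, value, "")

class FaceWatcher(object):
    """On the robot: a naoqi.ALModule subclass registered with an ALBroker."""
    def __init__(self, memory):
        self.seen = []
        memory.subscribeToEvent("FaceDetected", self, "onFaceDetected")

    def onFaceDetected(self, key, value, message):
        # The callback fires whenever the event is raised; no polling needed.
        self.seen.append(value)

memory = FakeMemory()
watcher = FaceWatcher(memory)
memory.raiseEvent("FaceDetected", ["face-data"])
print(len(watcher.seen))  # -> 1
```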

The only other method is a polling mechanism using getData on the memory, but then you don't get events, and must just poll the values. Perhaps ALL the memory keys should simply be obtained.

How to get all the memory keys! It takes ages, but this might be another way to access ALMemory from a Python script.

from naoqi import ALProxy

p = ALProxy("ALMemory", "nao.local", 9559)
keys = p.getDataListName()
vars = {}
print(len(keys))
for key in keys:
    vars[key] = p.getData(key)
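Once you have a full snapshot, a cheap (if crude) substitute for events is to diff two snapshots and see which keys changed. A sketch, assuming only the getDataListName()/getData() calls used above; a small stub stands in for the live proxy so it runs off-robot, and the sample key values are made up:

```python
def snapshot(memory):
    """Grab every ALMemory key/value pair in one go (slow, as noted above)."""
    return dict((k, memory.getData(k)) for k in memory.getDataListName())

def changed_keys(before, after):
    """Keys whose value differs between two snapshots."""
    return sorted(k for k in after if before.get(k) != after[k])

class StubMemory(object):
    """Stand-in for ALProxy("ALMemory") so the sketch runs off-robot."""
    def __init__(self, store):
        self.store = store

    def getDataListName(self):
        return list(self.store)

    def getData(self, key):
        return self.store[key]

mem = StubMemory({"MovementDetected": [], "LandmarkDetected": []})
before = snapshot(mem)
mem.store["MovementDetected"] = [[0.5, 0.5], [0.1, 0.0]]  # made-up value
after = snapshot(mem)
print(changed_keys(before, after))  # -> ['MovementDetected']
```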

4. Visual Recognition: objects can be stored on the NAO. You teach them by hand, and this database must be uploaded to the robot. AGAIN, you can know when NAO sees an object by accessing the ALMemory key it was saved under, or subscribe to PictureDetected. Feature-based detection is used.
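Without going into the PictureDetected value layout, a simple poll-until-seen loop covers the common case of waiting for a learned object to appear. A sketch, assuming only getData() on the memory proxy; the stub below (which returns two empty reads and then a hit) stands in for the live proxy:

```python
import time

def wait_for_picture(memory, timeout=5.0, period=0.2):
    """Poll the PictureDetected key until it is non-empty or we time out.

    Returns the raw memory value on detection, or None on timeout.
    The value layout is left opaque here; a non-empty value means a
    learned object was recognised.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        value = memory.getData("PictureDetected")
        if value:
            return value
        time.sleep(period)
    return None

class PollStub(object):
    """Stand-in for ALProxy("ALMemory"): empty twice, then a hit."""
    def __init__(self):
        self.values = [[], [], ["object-data"]]

    def getData(self, key):
        return self.values.pop(0) if self.values else []

print(wait_for_picture(PollStub(), timeout=2.0, period=0.01))  # -> ['object-data']
```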

5. ALVisualCompass approximates the robot's orientation from a visual image.

Receiving events is critical [and MANY MANY high-level things can be done with them].


