
Action descriptions and action chaining

Computational Neuroscience

The action of, for example, fixing a wooden board to a wall with some screws requires complex sequencing of actions, from the level of high-level abstract planning (screwdriver, screws, board, wall) down to that of fine-tuning your movements (pick up screwdriver, pick up screw, point it against the board, turn the screwdriver's tip towards the screw, insert it into the slit of the screw, press and turn, etc.). Those of you who have ever read the assembly instructions of do-it-yourself furniture know that verbal descriptions of such action sequences are next to hopeless (hence the nice pictures in the IKEA instruction leaflets…). The puzzle, however, is that our brain's neurons do not have this problem. In assembling furniture or playing the piano, very complex action sequences are learned, remembered and replayed quite 'non expressis verbis'. Moreover, there are no awkward kinks (non-differentiable points) between the elements of such a sequence. Everything is nice and smooth.

Quite frustratingly, for artificial agents such as the ARMAR robot there is currently no good way to create such action chains. Imagine two robot arms trying to perform the board-fixing task: the current state of the art simply does not allow it.

Figure: the ARMAR robot.

Currently the only smooth, real-time-compatible and efficient way of implementing longer movement trajectories uses the principles of imitation learning, implemented with a locally weighted regression algorithm (see the works of S. Schaal). The drawback is that these methods can only be used efficiently for ballistic-like or repetitive movements, consisting mostly of a single trajectory or an oscillation. They do not capture true sequences, in which action components must be linked and can sometimes even be interchanged, which we would call "re-chaining" (first pick up the screw versus first pick up the screwdriver…).
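
To make the above more concrete, here is a minimal sketch of this class of methods as we understand it: a one-dimensional dynamic movement primitive, in which a damped spring system pulls the state towards a goal while a forcing term, fitted by locally weighted regression, reproduces the demonstrated shape. The code is purely illustrative; all function names and parameter values are our own assumptions, not part of the project.

    import numpy as np

    def fit_dmp(y_demo, dt, n_basis=20, alpha=25.0, beta=6.25, alpha_x=4.0):
        # Fit a 1-D dynamic movement primitive to a demonstrated
        # trajectory y_demo sampled at interval dt (illustrative sketch).
        y_demo = np.asarray(y_demo, dtype=float)
        T = len(y_demo)
        y0, g = y_demo[0], y_demo[-1]
        yd = np.gradient(y_demo, dt)
        ydd = np.gradient(yd, dt)
        x = np.exp(-alpha_x * dt * np.arange(T))             # canonical phase, decays 1 -> 0
        f_target = ydd - alpha * (beta * (g - y_demo) - yd)  # what the plain spring misses
        c = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))  # Gaussian basis centres
        h = n_basis ** 1.5 / c                                # basis widths (common heuristic)
        psi = np.exp(-h * (x[:, None] - c) ** 2)              # T x n_basis activations
        # Locally weighted regression: one scalar weight per basis function.
        w = (psi * x[:, None] * f_target[:, None]).sum(axis=0) \
            / ((psi * x[:, None] ** 2).sum(axis=0) + 1e-10)
        return dict(w=w, c=c, h=h, y0=y0, g=g,
                    alpha=alpha, beta=beta, alpha_x=alpha_x)

    def run_dmp(p, n_steps, dt):
        # Replay the primitive by Euler integration; the output is
        # smooth and converges to the goal p["g"].
        y, yd, x = p["y0"], 0.0, 1.0
        traj = []
        for _ in range(n_steps):
            psi = np.exp(-p["h"] * (x - p["c"]) ** 2)
            f = (psi @ p["w"]) / (psi.sum() + 1e-10) * x
            ydd = p["alpha"] * (p["beta"] * (p["g"] - y) - yd) + f
            yd += ydd * dt
            y += yd * dt
            x -= p["alpha_x"] * x * dt
            traj.append(y)
        return np.array(traj)

Note how the sketch makes the limitation visible: one fitted primitive encodes exactly one point-to-point trajectory, with no notion of linking or reordering components.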

The goal of this project is, thus, two-fold:

  1. Measure the structure of human action sequences at multiple joints in a set of complex tasks by means of a 'manipulandum' and a data glove.

  2. Find a mathematical description of these action sequences which is suitable for re-chaining and for implementation on a robot (a toy sketch of what such chaining could look like follows below).
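
How such re-chaining might look formally is open research. Purely as an illustration, and assuming action components are represented by movement primitives like the fit_dmp/run_dmp toy code above, chaining can be sketched as handing the end state of one primitive over as the start state of the next, and re-chaining as permuting the component order before concatenation:

    def chain(primitives, n_steps_each, dt):
        # Run fitted primitives in sequence, handing the end state of
        # each segment over as the start state of the next, so the
        # concatenated trajectory stays continuous in position.
        segments, y_current = [], primitives[0]["y0"]
        for p in primitives:
            p = dict(p, y0=y_current)   # start where the last segment ended
            seg = run_dmp(p, n_steps_each, dt)
            segments.append(seg)
            y_current = seg[-1]
        return np.concatenate(segments)

    # Re-chaining = permuting the component order before concatenating, e.g.
    # chain([p_screw, p_screwdriver], ...) vs chain([p_screwdriver, p_screw], ...)

Because each primitive converges to its goal with near-zero velocity, the concatenated trajectory avoids the awkward kinks mentioned above; handling non-zero hand-over velocities between components is precisely where the interesting open questions begin.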

Figure: manipulandum experiment (haptic device and 3D virtual environment).

At this point, we are designing 3D virtual environments in which a test person can manipulate objects via a haptic device (see figure, SensAble technology). Several toy experiments require the test person to perform a task consisting of several subtasks. The pictogram displays a simple example: the red object has to be moved from the left box to the right box via the platform in the top right corner, along either of two paths, A1 or A2. The question is whether subsequent actions depend on previous decisions, and how these cognitive choices influence the chaining of actions. The main goal is to extract the influence of cognitive processes and attention on the details of action chaining and the transitions between movements.
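
One simple way to quantify whether subsequent actions depend on previous decisions is to tabulate transitions between labelled action components across trials and test the resulting contingency table for independence. The following sketch is hypothetical (the labels and trial data are invented for illustration, and the project itself does not prescribe this particular analysis):

    import numpy as np
    from scipy.stats import chi2_contingency

    def transition_counts(sequences, labels):
        # Count how often each action component is followed by each
        # other component across all recorded trials.
        idx = {a: i for i, a in enumerate(labels)}
        counts = np.zeros((len(labels), len(labels)), dtype=int)
        for seq in sequences:
            for prev, nxt in zip(seq, seq[1:]):
                counts[idx[prev], idx[nxt]] += 1
        return counts

    # Hypothetical trials: which grasp came first, and which path was chosen.
    trials = [("grasp_A", "A1"), ("grasp_A", "A1"), ("grasp_A", "A2"),
              ("grasp_B", "A2"), ("grasp_B", "A2"), ("grasp_B", "A1")]
    table = np.array([[sum(1 for g, p in trials if g == gr and p == pa)
                       for pa in ("A1", "A2")]
                      for gr in ("grasp_A", "grasp_B")])
    chi2, pval, dof, expected = chi2_contingency(table)
    # A small p-value would indicate that the path choice (A1 vs A2) is
    # not independent of the preceding grasp, i.e. that the action chain
    # is history-dependent.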


Main cooperation partner:

Tamim Asfour (University of Karlsruhe)



Belongs to Group(s): Computational Neuroscience