Computational Motor Learning

The goal of this research is to identify the computational algorithms that the nervous system uses to learn to perform skilled movement. We then hope to develop improved techniques for movement training, rehabilitation, and exercise by manipulating these algorithms. We develop models at a computational level because, although computations in the motor system are performed using neurons and neurotransmitters, ultimately these neural elements must solve problems of Newtonian mechanics in order to move limbs skillfully. Linking the "low level" operation of the motor system to "high level" movement performance is an outstanding problem of great interest. Solving this problem will dramatically aid in generating novel rehabilitation strategies that combine molecule or cell-based treatments with movement practice.

Adaptation to Novel Dynamic Environments
Much of this work is experimental, using human volunteers as research subjects. For example, Ph.D. student Craig Takahashi led studies that investigated how noise, youth, muscle fatigue, and neurological injury affect the way people form internal models of novel dynamic environments.

The Motor System as a Computational Slacker
Using data from these experiments, we then seek to develop computational models of motor learning. One model we developed was part of the dissertation project of Jeremy Emken, in collaboration with UCI Mechanical Engineering professors Jim Bobrow and Thanasis Sideris (see Emken et al. 2007). This model can be used to predict how people alter the force output from their muscles when they move in a novel dynamic environment generated by a robotic device. Specifically, we found that people adapt to a robot-applied force field during walking as if they were performing a greedy minimization of a cost function that is the sum of kinematic error and effort. The fact that the motor system considers effort as a cost means that it is a "slacker"; that is, it always attempts to perform the next movement with less effort than the previous one. This has implications for robot-assisted movement training, because assisting a person in moving with a robot may have the unintended consequence of causing the person to slack, thereby reducing learning. We have worked to develop anti-slacking algorithms for robot-assisted training after stroke (Wolbrecht et al. 2008).
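The flavor of this greedy error-plus-effort minimization can be sketched in a few lines of Python. This is an illustrative toy, not the published model: the trial dynamics (error equals disturbance minus feedforward command), the weighting parameter `lam`, and the step size `eta` are all assumptions made for the example.

```python
def simulate_adaptation(d=1.0, lam=0.2, eta=0.5, n_trials=30):
    """Toy greedy minimization of a single-trial cost
    J_n = e_n**2 + lam * u_n**2, where e_n is kinematic error
    and u_n is the (effortful) feedforward motor command.

    Assumed trial dynamics: e_n = d - u_n, with d the robot-applied
    disturbance. All names and values here are hypothetical.
    """
    u = 0.0
    errors, commands = [], []
    for _ in range(n_trials):
        e = d - u                  # kinematic error on this trial
        # Gradient of J with respect to u is -2*e + 2*lam*u, so a
        # greedy (single-trial) descent step is:
        u = u + eta * (e - lam * u)
        errors.append(e)
        commands.append(u)
    return errors, commands

errors, commands = simulate_adaptation()
# Because effort is penalized (lam > 0), the command converges to
# d / (1 + lam) rather than fully canceling the disturbance d, so a
# small residual error persists -- the "slacker" behavior in miniature.
```

The key qualitative point the sketch reproduces is that any nonzero effort penalty makes complete cancellation of the disturbance suboptimal, so the learner settles for "good enough" performance at reduced effort.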

Training Algorithms Derived from Computational Models of Motor Adaptation
Our ultimate goal is to use computational models to predict how to make people learn faster or better. For example, based on our modeling work, we have derived a training algorithm that amplifies movement errors in order to accelerate motor adaptation (Emken et al. 2005). We have also shown how it is possible to provide assistance in a way that allows people to learn a novel force field without experiencing large errors (Emken et al. 2007). This technique may be useful when large errors are catastrophic, such as in gait training.
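The intuition behind error amplification can be illustrated with a minimal simulation, assuming (as a simplification, not the published model) that trial-by-trial adaptation is driven in proportion to the error the learner experiences. The gain `G`, learning rate `eta`, and convergence tolerance are hypothetical parameters chosen for the example.

```python
def trials_to_adapt(G=1.0, d=1.0, eta=0.2, tol=0.05, max_trials=1000):
    """Count trials until the true error |d - u| falls below tol,
    when the update is driven by an experienced error amplified by
    gain G. Toy error-driven learner; names are illustrative.
    """
    u = 0.0
    for n in range(1, max_trials + 1):
        e = G * (d - u)        # experienced (possibly amplified) error
        u += eta * e           # error-driven update of the command
        if abs(d - u) < tol:
            return n
    return max_trials

baseline = trials_to_adapt(G=1.0)   # veridical error feedback
amplified = trials_to_adapt(G=2.0)  # errors presented twice as large
# Amplifying error speeds convergence: fewer trials are needed.
```

Note that in this toy learner the per-trial error decays by a factor of (1 - eta * G), so amplification speeds adaptation only up to a point: if eta * G exceeds 2 the update overshoots and becomes unstable, which is one reason amplification gains must be chosen conservatively in practice.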

When is Haptic Guidance Useful?
Other studies from our lab have examined the role of haptic guidance in learning to perform motor skills. For her dissertation project, Jiayin Liu found that haptic experience of a novel movement trajectory does not help people learn to perform that trajectory better than simply watching it (Liu et al. 2006). However, former Ph.D. student Laura Marchal found that haptic guidance can help people learn how to steer a new vehicle (Marchal et al. 2008, Marchal et al. 2010). We are exploring the use of haptic guidance to help teach children with severe disabilities to drive a powered wheelchair (Marchal et al. 2011).

An important direction for future research is to define the conditions under which various robotic training algorithms can most effectively be applied. Along these lines, postdoctoral fellow Marie-Helene Milot, working with Laura Marchal, found that the relative effectiveness of haptic guidance and error amplification depended on the initial skill level of the trainee, with error amplification being more useful for more skilled trainees (Milot et al. 2010).