How reward affects motor behavior has been a focus of the motor control field for decades. For instance, it has been shown that monkeys make faster saccades toward rewarded targets than toward non-rewarded ones (Takikawa et al. 2002). It has been suggested that the higher velocity of the saccades, which is linked to a decrease in movement time, is due to temporal discounting of reward (Shadmehr et al. 2010). That is, the longer it takes to reach the target, the less rewarding the target becomes. In a paper published recently in the Journal of Neuroscience, Joshua and Lisberger (2012) investigated the effect of reward on smooth pursuit eye movements. Smooth pursuit is a smooth motion of the eyes triggered by the motion of a target in the environment. During smooth pursuit initiation, the eyes smoothly accelerate until eye velocity matches target velocity.
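The idea of temporal discounting can be made concrete with a toy calculation. Shadmehr et al. (2010) proposed a hyperbolic form of discounting; the sketch below uses that general shape, but the function name and the discount parameter `beta` are illustrative assumptions, not the paper's actual parameterization.

```python
def discounted_reward(reward, movement_time, beta=1.0):
    """Hyperbolically discounted value of a reward obtained after
    `movement_time` seconds. `beta` (the discount rate) is an
    illustrative, made-up value, not a fitted parameter."""
    return reward / (1.0 + beta * movement_time)

# A shorter movement time yields a higher discounted value, which may
# explain why rewarded targets elicit faster (shorter) saccades.
fast = discounted_reward(reward=1.0, movement_time=0.05)
slow = discounted_reward(reward=1.0, movement_time=0.10)
```

Under this scheme, speeding up a movement directly increases the subjective value of the reward that awaits at the target.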
Have you ever tried to tickle yourself?
It is actually not possible.
This impossibility relates to sensory attenuation, namely the reduction in the brain's sensitivity to the sensory consequences of the actions it has itself produced. The sensory consequences are all the changes that the brain can measure via its sensors (vision, touch, sound, muscle elongation, ...) and that result from its own action. For instance, your own voice sounds very different when you hear yourself talking than when you hear it played back from a recording device. The brain decreases its sensitivity to the voice it controls in order to preserve its sensitivity to external sounds. This diminished sensitivity is the hallmark of sensory attenuation, also called sensory cancellation.
Predicted and actual songs cancel each other
Sensory attenuation relies on forward models. The role of a forward model is to predict the future sensory consequences of an action from the motor commands sent to the muscles. Forward models exist in many species, even in insects (Webb 2004). The male cricket rubs one wing against the other in order to produce a song. While the male cricket is singing, some auditory neurons have a reduced responsiveness to its own song. This reduction results from the cancellation between the predicted song and the actual song produced. In other words, those particular neurons fire very differently in response to a song produced by the cricket itself and in response to the exact same song produced by another animal.
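The cancellation logic can be sketched as a simple subtraction: the forward model's prediction is removed from the incoming sensory signal, so self-generated input is largely silenced while external input passes through. This is a minimal conceptual sketch, not a model of the cricket's actual circuitry; the function name and the `gain` parameter are assumptions for illustration.

```python
def attenuated_response(actual_input, predicted_input, gain=1.0):
    """Residual sensory signal after subtracting the forward-model
    prediction. `gain` scales how completely the prediction is
    removed (1.0 = full cancellation); it is an illustrative knob."""
    return actual_input - gain * predicted_input

# Self-generated song: the forward model predicts it, so it cancels out.
own_song = attenuated_response(actual_input=1.0, predicted_input=1.0)
# Song from another cricket: nothing was predicted, full response remains.
other_song = attenuated_response(actual_input=1.0, predicted_input=0.0)
```

The same subtraction explains why you cannot tickle yourself: your brain predicts, and therefore cancels, the touch it is about to produce.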
We explore our visual world by making a series of saccades. Saccades are very rapid eye movements (50-60 ms) that displace the line of sight from one point to another. This has been very nicely demonstrated by a Russian scientist named Alfred L. Yarbus, who built a device to record eye movements (his book is available here). While looking at a picture of a face for several seconds, humans perform a series of saccades and scan the entire picture (right panel).
During a saccade, perception of the visual world is minimal. As a result, the part of the visual world that is projected onto the retina (i.e. that is perceived) is very different before and after a saccade. Despite that difference, we perceive a very stable world; we are not even aware of these eye movements. In this post, I will describe the mechanisms that allow us to perceive a stable visual world.
Prediction drives eye movements
Eye movements allow us to optimize the perception of objects of interest. Perception is particularly accurate when the projection of the objects on the retina falls within the fovea, an area of the retina that contains an especially high density of photoreceptors.
When an object is moving, slow eye movements (smooth pursuit) are used in order to maintain the retinal projection of the object onto the fovea.
However, given the delays in transmitting information from the retina to the brain and in computing the motor commands to be sent to the eye muscles (100-150 ms), predictive mechanisms are necessary to avoid a large lag between the eye and the moving object.
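A minimal way to see how prediction compensates for delay is to extrapolate the target's position across the sensorimotor delay, assuming the target keeps moving at roughly constant velocity. This is a toy sketch, not a model of the actual pursuit circuitry; the function name and the default 125 ms delay (the midpoint of the 100-150 ms range above) are assumptions.

```python
def predicted_target_position(position, velocity, delay=0.125):
    """Extrapolate target position (deg) across the sensorimotor
    delay (s), assuming roughly constant target velocity (deg/s)."""
    return position + velocity * delay

# Without prediction, the eye aims at where the target *was* ~125 ms ago.
# For a target moving at 10 deg/s, that lag amounts to 1.25 deg:
lag = 10.0 * 0.125
# With prediction, the system aims at the extrapolated position instead.
aim = predicted_target_position(position=5.0, velocity=10.0)
```

Of course, constant-velocity extrapolation fails when the target abruptly changes course, which is exactly when pursuit transiently lags behind.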
Prediction at the retinal level
In the introductory post about prediction, I briefly mentioned that prediction occurs at many different levels. Even cells on the retina appear to predict stimulus motion and to alert the brain when the stimulus changes its direction unexpectedly.
The retina is the tissue at the back of the eye. It consists of different layers of cells. The outermost layer (closest to the back of the orbit) consists of rods and cones, which act as photoreceptors. The activity of these cells is conveyed by bipolar cells to ganglion cells, which are the output cells. In other words, each ganglion cell receives input from many bipolar cells and sends the pooled information to the brain for processing. Horizontal and amacrine cells provide intrinsic connections between retinal cells: horizontal cells link different bipolar cells together, whereas amacrine cells link ganglion cells together.
In this post, I want to describe how ganglion cells extrapolate motion information and how they detect discrepancies between predicted and actual motion.
Try to answer the following questions:
Which apple is the heaviest?
Which box is the heaviest?
Which cup is the heaviest?
Those questions are pretty simple because our brain can easily infer the relative weight of objects from their size or any other relevant information. For instance, if the cups are sealed, your brain will infer that the largest cup is the heaviest one. It will use this information to adjust the force required to lift those cups. But what if one is sealed yet empty?
written by Jean-Jacques Orban de Xivry
Scientist in the motor control field.