Expectation and surprise determine neural population responses in the ventral visual stream.

Egner, T., Monti, J. M., and Summerfield, C.
J Neurosci, 30:16601–16608, 2010


Visual cortex is traditionally viewed as a hierarchy of neural feature detectors, with neural population responses being driven by bottom-up stimulus features. Conversely, “predictive coding” models propose that each stage of the visual hierarchy harbors two computationally distinct classes of processing unit: representational units that encode the conditional probability of a stimulus and provide predictions to the next lower level; and error units that encode the mismatch between predictions and bottom-up evidence, and forward prediction error to the next higher level. Predictive coding therefore suggests that neural population responses in category-selective visual regions, like the fusiform face area (FFA), reflect a summation of activity related to prediction (“face expectation”) and prediction error (“face surprise”), rather than a homogenous feature detection response. We tested the rival hypotheses of the feature detection and predictive coding models by collecting functional magnetic resonance imaging data from the FFA while independently varying both stimulus features (faces vs houses) and subjects’ perceptual expectations regarding those features (low vs medium vs high face expectation). The effects of stimulus and expectation factors interacted, whereby FFA activity elicited by face and house stimuli was indistinguishable under high face expectation and maximally differentiated under low face expectation. Using computational modeling, we show that these data can be explained by predictive coding but not by feature detection models, even when the latter are augmented with attentional mechanisms. Thus, population responses in the ventral visual stream appear to be determined by feature expectation and surprise rather than by stimulus features per se.


In general the design of the study is interesting: it is an fMRI study investigating the effects of a stimulus that is presented immediately before the actually analysed stimulus, i.e. temporal dependencies between sequentially presented stimuli, of which predictability is a special case (priming studies would actually also fall into this category; I don't know how well they have been studied with fMRI).

While the original predictive coding and feature detection models are convincing, the feature detection + attention models are confusing. All models seem to lack a baseline. The attention models are somehow defined on the “differential FFA response”, which is not further explained. The f·b_1 part of the attention models can actually be reduced to b_1.
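The contrast between the two model classes can be sketched with toy response functions (the parameterisation below is my own illustration, not the paper's actual model fits):

```python
# Toy comparison of the two rival accounts of FFA activity. Stimulus f is
# 1 for a face, 0 for a house; P is the prior probability of a face. All
# weights (b0, b1, w_exp, w_err) are made-up illustrative values.

def feature_detection(f, P, b0=0.2, b1=1.0):
    """Pure feature detection: the response depends on the stimulus only."""
    return b0 + b1 * f

def predictive_coding(f, P, w_exp=0.5, w_err=0.5):
    """Response = face expectation + face surprise (unsigned prediction error)."""
    return w_exp * P + w_err * abs(f - P)

# face-house response difference at low / medium / high face expectation
diffs_fd = {P: feature_detection(1, P) - feature_detection(0, P) for P in (0.25, 0.5, 0.75)}
diffs_pc = {P: predictive_coding(1, P) - predictive_coding(0, P) for P in (0.25, 0.5, 0.75)}

# feature detection predicts a constant difference across expectation levels;
# the toy predictive coding version predicts a difference that shrinks as face
# expectation grows, qualitatively matching the reported interaction
```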

Katharina noted that a small-sample correction should be applied if you want to do the ROI analysis properly; the authors did not do that here.

They do not differentiate between prediction error and surprise in the paper; strictly speaking, surprise is the precision-weighted prediction error.

An embodied account of serial order: How instabilities drive sequence generation.

Sandamirskaya, Y. and Schöner, G.
Neural Networks, 23:1164–1179, 2010


Learning and generating serially ordered sequences of actions is a core component of cognition both in organisms and in artificial cognitive systems. When these systems are embodied and situated in partially unknown environments, specific constraints arise for any neural mechanism of sequence generation. In particular, sequential action must resist fluctuating sensory information and be capable of generating sequences in which the individual actions may vary unpredictably in duration. We provide a solution to this problem within the framework of Dynamic Field Theory by proposing an architecture in which dynamic neural networks create stable states at each stage of a sequence. These neural attractors are destabilized in a cascade of bifurcations triggered by a neural representation of a condition of satisfaction for each action. We implement the architecture on a robotic vehicle in a color search task, demonstrating both sequence learning and sequence generation on the basis of low-level sensory information.


The paper presents a dynamical model of the execution of sequential actions driven by sensory feedback, which allows individual actions to vary in duration as signalled by external cues of subtask fulfilment (i.e. the end of an action). It is therefore one of the first functioning models with continuous dynamics which truly integrates action and perception. The core technique is dynamic field theory (DFT), which implements winner-take-all dynamics in the continuous domain: the field stays at a uniform baseline until a sufficiently large input at some position drives activity over a threshold and produces a stable single peak of activity around that position. The different components of the model all run on dynamics following the same principle and are connected such that stable activity peaks can be destabilised, allowing a peak to move to a new position (and thus signal something different).
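The basic winner-take-all field dynamics can be sketched as an Amari-type neural field with local excitation and global inhibition (all parameter values below are my own illustrative choices, not those of the paper):

```python
import numpy as np

# 1D dynamic neural field: tau * du/dt = -u + h + s(x) + int w(x-x') sigma(u(x')) dx'
n = 101
x = np.arange(n)
tau, h, dt = 10.0, -5.0, 1.0                 # time constant, resting level, step

# interaction kernel: local (Gaussian) excitation, global inhibition
d = np.abs(x[:, None] - x[None, :])
w = 2.0 * np.exp(-d**2 / 18.0) - 0.5

def sigmoid(u, beta=1.5):
    return 1.0 / (1.0 + np.exp(-beta * u))

def relax(s, steps=500):
    """Integrate the field under input s until approximately steady."""
    u = np.full(n, h, float)
    for _ in range(steps):                   # Euler integration
        u += dt / tau * (-u + h + s + w @ sigmoid(u))
    return u

bump = np.exp(-(x - 50.0)**2 / 18.0)
u_peak = relax(6.0 * bump)   # strong input: a self-stabilised peak forms at 50
u_flat = relax(1.0 * bump)   # weak input: the field stays below threshold
```

The two runs show the threshold behaviour described above: only sufficiently strong localized input pushes the field into the single-peak state.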

The aim of the exercise is to show that sequential actions of varying length can be produced by a model of continuous neuronal population dynamics. Sequential structure is induced in the model by a set of ordinal nodes which are coupled via additional memory nodes such that they become active one after the other. However, the switch to the next ordinal node in the sequence needs to be triggered by sensory input indicating that the aim of an action has been achieved. Activity of an ordinal node then directly induces a peak in the action field at a location determined by a set of learnt weights. In the robot example the action space is defined over the hue value, i.e. each action selects a certain colour. The actual action of the robot (turning and accelerating) is controlled by an additional colour-space field and some motor dynamics which are not part of the sequence model; hence, their sequence model as such only prescribes discrete actions. To decide whether an action has been successfully completed, the action field increases activity at a particular spot in a condition-of-satisfaction field, which only produces a peak at that spot if suitable sensory input drives the activity there over the threshold. Which spot the action field selects is determined by hand here (in the example it is an identity function). A peak in the condition-of-satisfaction field then triggers the switch to the next ordinal node in the sequence. We don't really see an evaluation of system performance (by what criterion?), but their system seems to work reasonably well, at least producing the sequences in the order demonstrated during learning.

The paper is quite close to what we are envisaging. The free energy principle could add a Bayesian perspective (we would have to find a way to implement the conditional progression of a sequence, but I don’t see a reason why this shouldn’t be possible). Apart from that the function implemented by the dynamics is extremely simple. In fact, the whole sequential system could be replaced with simple, discrete if-then logic without having to change the continuous dynamics of the robot implementation layer (color-space field and motor dynamics). I don’t see how continuous dynamics here helps except that it is more biologically plausible. This is also a point on which the authors focus in the introduction and discussion. Something else that I noticed: all dynamic variables are only 1D (except for the colour-space field which is 2D). This is probably because the DFT formalism requires that the activity over the field is integrated for each position in the field every simulation step to compute the changes in activity (cf. computation of expectations in Bayesian inference) which is probably infeasible when the representations contain several variables.
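To illustrate the if-then point: a hypothetical discrete controller reproducing the function of the sequence model (ordinal nodes plus condition of satisfaction) could look like this, with the continuous robot layer hidden behind `sense` (all names and structure here are made up):

```python
def run_sequence(actions, condition_met, sense, max_steps=1000):
    """Step through `actions` (e.g. target hues), advancing only when the
    condition of satisfaction for the current action is met."""
    idx = 0                               # plays the role of the ordinal nodes
    for _ in range(max_steps):
        if idx >= len(actions):
            return True                   # whole sequence completed
        target = actions[idx]
        percept = sense(target)           # continuous robot layer acts here
        if condition_met(target, percept):
            idx += 1                      # "condition of satisfaction" fires
    return False                          # ran out of time mid-sequence
```

The variable-duration property is preserved: `idx` only advances when the sensory condition is met, however long that takes.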

Encoding of Motor Skill in the Corticomuscular System of Musicians.

Gentner, R., Gorges, S., Weise, D., aufm Kampe, K., Buttmann, M., and Classen, J.
Current Biology, 20:1869–1874, 2010


How motor skills are stored in the nervous system represents a fundamental question in neuroscience. Although musical motor skills are associated with a variety of adaptations [1, 2, 3], it remains unclear how these changes are linked to the known superior motor performance of expert musicians. Here we establish a direct and specific relationship between the functional organization of the corticomuscular system and skilled musical performance. Principal component analysis was used to identify joint correlation patterns in finger movements evoked by transcranial magnetic stimulation over the primary motor cortex while subjects were at rest. Linear combinations of a selected subset of these patterns were used to reconstruct active instrumental playing or grasping movements. Reconstruction quality of instrumental playing was superior in skilled musicians compared to musically untrained subjects, displayed taxonomic specificity for the trained movement repertoire, and correlated with the cumulated long-term training exposure, but not with the recent past training history. In violinists, the reconstruction quality of grasping movements correlated negatively with the long-term training history of violin playing. Our results indicate that experience-dependent motor skills are specifically encoded in the functional organization of the primary motor cortex and its efferent system and are consistent with a model of skill coding by a modular neuronal architecture [4].


The authors use PCA on TMS-induced postures to show that motor cortex represents building blocks of movements which adapt to everyday requirements. To be precise, the authors recorded finger movements induced by TMS over primary motor cortex and extracted, for each stimulation, the posture with the largest deviation from rest. From the resulting set of postures they computed the first 4 principal components (PCs) and examined how well a linear combination of the PCs could reconstruct postures recorded during the subjects' normal behaviour. This is made more interesting by comparing groups of subjects with different motor experience: highly trained violinists, pianists, and a group of non-musicians, crossing which group is used for determining the PCs with what is being reconstructed (violin playing, piano playing, or grasping, where the grasping data can come from violinists or non-musicians). The basis of comparison is a correlation (R) between the series of joint angle vectors as defined in Shadmehr1994, which can be interpreted as something like the average correlation between data points of the two sequences measured across joint angles (cf. the normalised inner product matrix in GPLVM). Don't ask me why they take exactly this measure, but it probably doesn't matter. The main finding is that the PCs from violinists are significantly better at reconstructing violin playing than either the pianists' PCs or the non-musicians' PCs. This table is missing in the text (but the data is there, showing mean R ± standard deviation):

R        violinists    pianists      non-musicians
violin   0.69 ± 0.09   0.63 ± 0.11   0.64 ± 0.09
piano    0.70 ± 0.06   0.74 ± 0.06   0.70 ± 0.07
grasp    0.76 ± 0.09   0.76 ± 0.09   0.76 ± 0.10

What is not discussed in the paper is that pianists' PCs are worse at reconstructing violin playing than the PCs of non-musicians. An interesting finding is that the violinists' years of intensive training correlate significantly with the reconstruction quality of violinist PCs for violin playing, while they are anticorrelated with the reconstruction quality for grasping, indicating that the postures activated in primary motor cortex become more adapted to frequently executed tasks. However, it has to be noted that this correlation analysis is based on only 9 data points.
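The analysis pipeline can be sketched on synthetic data (the array dimensions are invented, and the quality measure below is a simplified stand-in for the exact Shadmehr1994 definition of R):

```python
import numpy as np

rng = np.random.default_rng(0)
n_postures, n_joints = 60, 10

# TMS-evoked postures (deviation from rest), one row per stimulation
tms_postures = rng.standard_normal((n_postures, n_joints))

# first 4 principal components of the posture set
tms_centered = tms_postures - tms_postures.mean(axis=0)
_, _, vt = np.linalg.svd(tms_centered, full_matrices=False)
pcs = vt[:4]                                  # (4, n_joints), orthonormal rows

# joint-angle time series recorded during "natural movement"
movement = rng.standard_normal((200, n_joints))

# least-squares reconstruction of each time point as a linear combination of PCs
coeffs, *_ = np.linalg.lstsq(pcs.T, movement.T, rcond=None)
reconstruction = (pcs.T @ coeffs).T           # (200, n_joints)

def quality(a, b):
    """Correlation-like reconstruction quality over the whole series."""
    a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

R = quality(movement, reconstruction)
```

With unstructured random data R only reflects the subspace dimension (4 of 10 joints); the paper's point is that R is higher when the PCs and the reconstructed movement come from matching training histories.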

At the beginning of the paper they show an analysis of the recorded behaviour which is simply supposed to ensure that violin playing, piano playing, and grasping movements are sufficiently different, which we may believe, although piano playing and grasping apparently are somewhat similar.

Efficient Reductions for Imitation Learning.

Ross, S. and Bagnell, D.
in: JMLR W&CP 9: AISTATS 2010, pp. 661–668, 2010


Imitation Learning, while applied successfully on many large real-world problems, is typically addressed as a standard supervised learning problem, where it is assumed the training and testing data are i.i.d.. This is not true in imitation learning as the learned policy influences the future test inputs (states) upon which it will be tested. We show that this leads to compounding errors and a regret bound that grows quadratically in the time horizon of the task. We propose two alternative algorithms for imitation learning where training occurs over several episodes of interaction. These two approaches share in common that the learner’s policy is slowly modified from executing the expert’s policy to the learned policy. We show that this leads to stronger performance guarantees and demonstrate the improved performance on two challenging problems: training a learner to play 1) a 3D racing game (Super Tux Kart) and 2) Mario Bros.; given input images from the games and corresponding actions taken by a human expert and near-optimal planner respectively.


The authors note that previous approaches to learning a policy from an example policy are limited in the sense that they only see successful examples generated from the desired policy; they will therefore exhibit a larger error than expected from supervised learning of independent samples, because an error can propagate through the series of decisions if the policy hasn't learnt to recover towards the desired policy after an error occurs. They then show that a lower error can be expected when a Forward Algorithm is used for training, which learns a non-stationary policy successively for each time step. The idea probably is (I'm not too sure) that the data at the time step currently being learnt contains the errors (leading to different states) you would usually expect from the learnt policies, because for every time step new data is sampled based on the already learnt policies. They transfer this idea to learning a stationary policy and propose SMILe (stochastic mixing iterative learning). In this algorithm the stationary policy is a linear combination of policies learnt in previous iterations, where the initial policy is the desired one. The influence of the desired policy decreases exponentially with the number of iterations, and the weights of policies learnt later also decrease exponentially, but stay fixed in subsequent iterations, i.e. the policies learnt first will eventually have the largest weights. This makes sense, because they will most probably be closest to the desired policy (having seen mostly samples produced by the desired policy).
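My reading of the mixing scheme, as a toy sketch (the mixing parameter `alpha`, the `train` and `rollout` interfaces, and the sampling details are my own assumptions, not the paper's exact algorithm):

```python
import random

def smile(expert, train, rollout, n_iters=5, alpha=0.3):
    """After n iterations the mixture executes the expert with probability
    (1 - alpha)^n and the policy learnt in iteration i+1 with probability
    alpha * (1 - alpha)^i, so earlier policies keep the largest weights."""
    learned = []                          # pi_1 ... pi_n from earlier iterations
    def mixed(state):
        # sample which component policy acts at this state
        r, w = random.random(), (1 - alpha) ** len(learned)
        if r < w:
            return expert(state)          # expert component
        for i, pi in enumerate(learned):
            w += alpha * (1 - alpha) ** i
            if r < w:
                return pi(state)
        return learned[-1](state)         # guard against float rounding
    for _ in range(n_iters):
        data = rollout(mixed)             # states visited under the current
                                          # mixture, labelled by the expert
        learned.append(train(data))
    return mixed
```

The weights sum to one by construction: (1 - alpha)^n + alpha * sum_i (1 - alpha)^i = 1, and each learnt policy's weight is fixed once assigned, matching the description above.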

The aim is to make the learnt policy more robust without using too many samples from the desired policy. I really wonder whether you could achieve exactly the same performance by simply additionally sampling the desired policy from randomly perturbed states and adding these as training points to learning of a single policy. Depending on how expensive your learning algorithm is this may be much faster in total (as you only have to learn once on a larger data set). Of course, you then may not have the theoretical guarantees provided in the paper. Another drawback of the approach presented in the paper is that it needs to be possible to sample from the desired policy interactively during the learning. I can’t imagine a scenario where this is practical (a human in the loop?).

I was interested in this, because in an extended abstract to a workshop (see attached files) the authors referred to this approach and also mentioned Langford2009 as a similar learning approach based on local updates. Also you can see the policy as a differential equation, i.e. the results of the paper may also apply to learning of dynamical systems without control inputs. The problems are certainly very similar.

They use a neural network to learn policies in the particular application they consider.

Modeling discrete and rhythmic movements through motor primitives: a review.

Degallier, S. and Ijspeert, A.
Biol Cybern, 103:319–338, 2010


Rhythmic and discrete movements are frequently considered separately in motor control, probably because different techniques are commonly used to study and model them. Yet the increasing interest in finding a comprehensive model for movement generation requires bridging the different perspectives arising from the study of those two types of movements. In this article, we consider discrete and rhythmic movements within the framework of motor primitives, i.e., of modular generation of movements. In this way we hope to gain an insight into the functional relationships between discrete and rhythmic movements and thus into a suitable representation for both of them. Within this framework we can define four possible categories of modeling for discrete and rhythmic movements depending on the required command signals and on the spinal processes involved in the generation of the movements. These categories are first discussed in terms of biological concepts such as force fields and central pattern generators and then illustrated by several mathematical models based on dynamical system theory. A discussion on the plausibility of theses models concludes the work.


In the first part, the paper reviews experimental evidence for the existence of a motor primitive system located on the level of the spinal cord. In particular, the discussion is centred on the existence of central pattern generators and force fields (also: muscle synergies) defined in the spinal cord. Results showing the independence of these from cortical signals exist for animals up to the cat, or so. “In humans, the activity of the isolated spinal cord is not observable, […]: influences from higher cortical areas and from sensory pathways can hardly be excluded.”

The remainder of the article reviews dynamical systems that have been proposed as models of movement primitives. The models are roughly characterised by their assumptions about the relationship between discrete and rhythmic movements. The authors define 4 categories: two/two, one/two, one/one and two/one, where a "two" means separate systems for discrete and rhythmic movements and a "one" means a common system; the number before the slash refers to the planning process (signals potentially generated as motor commands from cortex) and the number after the slash to the execution system in which the movement primitives are defined.

You would think that the aim of this exercise is to work out advantages and disadvantages of the models, but the authors mainly restrict themselves to describing them. The main conclusion then is that discrete and rhythmic movements can be generated from movement primitives in the spinal cord while cortex may only provide simple, non-patterned commands. The proposed categorisation may help to discern the models experimentally, but apparently there is currently no conclusive evidence favouring any of the categories (the authors repeatedly cite two conflicting studies).
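A concrete illustration of a "one/one"-style system (my own generic example, not a specific model from the review): a Hopf normal form whose bifurcation parameter mu switches a single execution system between a stable fixed point (a discrete, terminating movement) and a stable limit cycle (a rhythmic movement), so the "planning" command can be a single non-patterned number.

```python
import numpy as np

def simulate(mu, steps=20000, dt=0.001, omega=2 * np.pi):
    """Euler-integrate the Hopf normal form and return the final radius."""
    x, y = 0.5, 0.0                        # start off the origin
    for _ in range(steps):
        r2 = x * x + y * y
        dx = (mu - r2) * x - omega * y
        dy = (mu - r2) * y + omega * x
        x, y = x + dt * dx, y + dt * dy
    return np.hypot(x, y)

# mu < 0: the trajectory decays to the fixed point (movement ends)
# mu > 0: the trajectory settles on a limit cycle of radius sqrt(mu)
```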

Winnerless competition between sensory neurons generates chaos: A possible mechanism for molluscan hunting behavior.

Varona, P., Rabinovich, M. I., Selverston, A. I., and Arshavsky, Y. I.
Chaos: An Interdisciplinary Journal of Nonlinear Science, 12:672–677, 2002


In the presence of prey, the marine mollusk Clione limacina exhibits search behavior, i.e., circular motions whose plane and radius change in a chaotic-like manner. We have formulated a dynamical model of the chaotic hunting behavior of Clione based on physiological in vivo and in vitro experiments. The model includes a description of the action of the cerebral hunting interneuron on the receptor neurons of the gravity sensory organ, the statocyst. A network of six receptor model neurons with Lotka-Volterra-type dynamics and nonsymmetric inhibitory interactions has no simple static attractors that correspond to winner take all phenomena. Instead, the winnerless competition induced by the hunting neuron displays hyperchaos with two positive Lyapunov exponents. The origin of the chaos is related to the interaction of two clusters of receptor neurons that are described with two heteroclinic loops in phase space. We hypothesize that the chaotic activity of the receptor neurons can drive the complex behavior of Clione observed during hunting.


see Levi2005 for short summary in context

My biggest concern with this paper is that the changes in direction of the mollusc may also result from feedback from the body, and especially the statocysts, during its accelerated swimming. The question is: are these direction changes a result of chaotic but deterministic dynamics in the sensory network, as suggested by the model, or are they a result of essentially random processes which may be influenced by feedback from other networks? The authors note that in their model “The neurons keep the sequence of activation but the interval in which they are active is continuously changing in time”. After a day of searching for papers investigating the swimming behaviour of Clione limacina (the mollusc in question) I came to the conclusion that the data shown in Fig. 1 is likely the only published data set of this swimming behaviour. This small data set suggests random changes in direction, in contrast to the model, but it does not allow one to draw any definite conclusions about the repetitiveness of direction changes.
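The quoted property (fixed activation sequence, variable intervals) can be illustrated with a reduced winnerless-competition network: three Lotka-Volterra units with textbook-style asymmetric inhibition rather than the paper's six-neuron statocyst circuit, and a weak constant input standing in for the hunting neuron's excitation. This reduced version only shows the robust sequential switching; the chaotic timing in the paper arises from the full six-neuron connectivity.

```python
import numpy as np

# rho[i][j]: how strongly unit j suppresses unit i (note the asymmetry)
rho = np.array([[1.0, 1.5, 0.5],
                [0.5, 1.0, 1.5],
                [1.5, 0.5, 1.0]])
stim = 1e-4                      # weak tonic drive (cf. hunting neuron input)

a = np.array([0.6, 0.3, 0.1])    # unit activities
winners = []
for _ in range(60000):           # Euler integration, dt = 0.01
    a = a + 0.01 * (a * (1.0 - rho @ a) + stim)
    winners.append(int(np.argmax(a)))

# the most active unit cycles 0 -> 1 -> 2 -> 0 -> ... : each saddle is
# unstable only in the direction of the next unit (1 - rho[j][i] > 0)
switches = sum(1 for p, q in zip(winners, winners[1:]) if p != q)
```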

The role of sensory network dynamics in generating a motor program.

Levi, R., Varona, P., Arshavsky, Y. I., Rabinovich, M. I., and Selverston, A. I.
J Neurosci, 25:9807–9815, 2005


Sensory input plays a major role in controlling motor responses during most behavioral tasks. The vestibular organs in the marine mollusk Clione, the statocysts, react to the external environment and continuously adjust the tail and wing motor neurons to keep the animal oriented vertically. However, we suggested previously that during hunting behavior, the intrinsic dynamics of the statocyst network produce a spatiotemporal pattern that may control the motor system independently of environmental cues. Once the response is triggered externally, the collective activation of the statocyst neurons produces a complex sequential signal. In the behavioral context of hunting, such network dynamics may be the main determinant of an intricate spatial behavior. Here, we show that (1) during fictive hunting, the population activity of the statocyst receptors is correlated positively with wing and tail motor output suggesting causality, (2) that fictive hunting can be evoked by electrical stimulation of the statocyst network, and (3) that removal of even a few individual statocyst receptors critically changes the fictive hunting motor pattern. These results indicate that the intrinsic dynamics of a sensory network, even without its normal cues, can organize a motor program vital for the survival of the animal.


The authors investigate the neural mechanisms of hunting behaviour in a mollusk. Its simplicity allows the nervous system to be completely stripped from the rest of the body and investigated in isolation, but as a whole. In particular, the authors are interested in the causal influence of sensory neurons on motor activity.

The mollusk has two types of behaviour for positioning its body in the water: 1) it uses gravitational sensors (statocysts) to maintain a head-up position in the water under normal circumstances and 2) it swims in apparently chaotic, small loops when it suspects prey in its vicinity (searching). In this paper the authors present evidence that the searching behaviour 2) is still largely dependent on the (internal) dynamics of the statocysts.

The model is as follows (see Varona2002): without prey, inhibitory connections between sensory cells in the statocysts make sure that only a small proportion of cells are firing (those activated by mechanoreceptors according to gravitation acting on a stone-like structure in the statocysts), but when prey is in the vicinity of the mollusk (as indicated e.g. by chemoreceptors) cerebral hunting neurons additionally excite the statocyst cells, inducing chaotic dynamics among them. The important thing to note is that the statocysts then still influence motor behaviour, as shown in the paper. So the argument is that the same mechanism for producing motor output dependent on statocyst signals can be used to generate searching, just through changing the activity of the sensory neurons.

Overall the evidence presented in the paper is convincing that statocyst activity influences the activity of the motor neurons in the searching behaviour as well, but it cannot be said conclusively that the statocysts are necessary for producing the swimming, because the setup only allowed the activity of motor neurons to be observed without actually seeing the behaviour (in fact, Levi2004 show that the typical searching behaviour cannot be produced when the statocysts are removed). For the same reason, the experiments also neglected possible feedback mechanisms between body/mollusk and environment, e.g. changes in statocyst activity due to a changing gravitational state, i.e. orientation. The argument there, though not explicitly stated, is that the statocyst stops computing the actual orientation of the body and is purely driven by its own dynamics. Feedback from the peripheral motor system is not modelled (Varona2002, arguing that this is not necessary for determining the origin of the apparent chaotic behaviour).

For us this is a nice example for how action can be a direct consequence of perception, but even more so that internal sensory dynamics can produce differentiated motor behaviour. The connection between sensory states and motor activity is relatively fixed, but different motor behaviour may be generated by different processing in the sensory system. The autonomous dynamics of the statocysts in searching behaviour may also be interpreted as being induced from different, high-precision predictions on a higher level. It may be questioned how good a model the mollusk nervous system is for information processing in the human brain, but maybe they share these principles.