Action understanding and active inference.

Friston, K., Mattout, J., and Kilner, J.
Biol Cybern, 104:137–160, 2011
DOI, Google Scholar

Abstract

We have suggested that the mirror-neuron system might be usefully understood as implementing Bayes-optimal perception of actions emitted by oneself or others. To substantiate this claim, we present neuronal simulations that show the same representations can prescribe motor behavior and encode motor intentions during action-observation. These simulations are based on the free-energy formulation of active inference, which is formally related to predictive coding. In this scheme, (generalised) states of the world are represented as trajectories. When these states include motor trajectories they implicitly entail intentions (future motor states). Optimizing the representation of these intentions enables predictive coding in a prospective sense. Crucially, the same generative models used to make predictions can be deployed to predict the actions of self or others by simply changing the bias or precision (i.e. attention) afforded to proprioceptive signals. We illustrate these points using simulations of handwriting to illustrate neuronally plausible generation and recognition of itinerant (wandering) motor trajectories. We then use the same simulations to produce synthetic electrophysiological responses to violations of intentional expectations. Our results affirm that a Bayes-optimal approach provides a principled framework, which accommodates current thinking about the mirror-neuron system. Furthermore, it endorses the general formulation of action as active inference.

Review

In this paper the authors try to convince the reader that the function of the mirror neuron system may be to provide amodal expectations for how an agent’s body will change, or interact with the world. In other words, they propose that the mirror neuron system represents more or less abstract intentions of an agent. This interpretation results from identifying the mirror neuron system with hidden states in a dynamic model within Friston’s active inference framework. I will first comment on the active inference framework and the particular model used, and will then discuss the biological interpretation.

Active inference framework:

Active inference has been described by Friston elsewhere (Friston et al., PLoS One, 2009; Friston et al., Biol Cybern, 2010). Note that all variables are continuous. The main idea is that an agent maximises the likelihood of its internal model of the world, as experienced by its sensors, by (1) updating the hidden states of this model and (2) producing actions on the world. Under the Gaussian assumptions made by Friston, both ways of maximising the likelihood of the model are equivalent to minimising the precision-weighted prediction errors defined in the model. The models are potentially hierarchical, but here only a single layer is used, consisting of sensory states and hidden states. The prediction errors on sensory states are simply defined as the difference between sensory observations and the sensory predictions of the model, as you would intuitively do. The model also defines prediction errors on hidden states (*). Both types of prediction errors are used to infer hidden states (1) which explain the sensory observations, but action is only produced (2) from sensory state prediction errors, because action is not part of the agent’s model and only affects the sensory observations produced by the world.

Well, actually the agent needs a whole other model for action which implements the gradient of sensory observations with respect to action, i.e., which tells the agent how sensory observations change when it exerts action. However, Friston restricts sensory observations in this context to proprioceptive observations, i.e., muscle feedback, and argues that the corresponding gradient may be sufficiently simple to learn and represent so that we don’t have to worry about it (in the simulation he just provides the gradient to the agent). Therefore, action solely tries to implement proprioceptive predictions. On the other hand, proprioceptive predictions may be coupled to predictions in other modalities (e.g. vision) through the agent’s model, which allows the agent to execute (seemingly) higher-level actions. For example, if an agent sees its hand move from a cup to a glass on a table in front of it, its generative model must also represent the corresponding proprioceptive signals. If the agent then predicts this movement of its hand in visual space, the generative model must automatically predict the corresponding proprioceptive signals, because they have always accompanied the seen movement. Action then minimises the resulting precision-weighted proprioceptive prediction error and so implements the hand movement from cup to glass.
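To make the two routes (1) and (2) concrete, here is a minimal sketch under the Gaussian assumptions, for a single hidden state and a single (proprioceptive) observation. The generative functions g and f, the precisions and the gradient dy/da are toy placeholders of my own choosing, not the model used in the paper.

```python
import numpy as np

# Minimal sketch of the two routes of free-energy minimisation described above:
# (1) perception updates the hidden state, (2) action changes the world.
# g, f, the precisions and dy/da are illustrative placeholders, not the paper's model.

def g(x):              # predicted (proprioceptive) observation given hidden state x
    return np.tanh(x)

def f(x):              # predicted dynamics (expected velocity) of the hidden state
    return -0.5 * x

def step(x, xdot, a, y, dyda, pi_s=1.0, pi_x=1.0, kappa=0.1):
    """One joint perception-action update for observation y, given the
    (assumed known) gradient dy/da of observations with respect to action."""
    eps_s = y - g(x)          # sensory prediction error
    eps_x = xdot - f(x)       # hidden-state prediction error (see footnote (*))

    # (1) perception: gradient descent on both precision-weighted errors
    x = x + kappa * (pi_s * eps_s * (1 - np.tanh(x) ** 2) - 0.5 * pi_x * eps_x)
    xdot = xdot - kappa * pi_x * eps_x

    # (2) action: driven only by the sensory (proprioceptive) error, because
    # action is not part of the generative model and only changes the world
    a = a - kappa * pi_s * eps_s * dyda
    return x, xdot, a
```

Note that setting pi_s close to zero in this sketch is exactly the precision manipulation discussed below: proprioceptive prediction errors then neither move the hidden states nor drive action.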

Notice that the agent minimises the *precision-weighted* prediction errors. Precision here means the inverse *prior* covariance, i.e., it is a measure of how certain the agent *expects* to be about its observations. By changing the precisions, qualitatively very different results can be obtained within the active inference framework. Indeed, here they implement the switch from action generation to action observation by heavily reducing the precision of the proprioceptive observations. This makes the agent ignore any proprioceptive prediction errors both when updating hidden states (1) and when generating action (2). This leads to an interesting prediction: when you observe an action performed by somebody else, you shouldn’t notice when the corresponding body part of your own is moved externally, or alternatively, when you observe somebody else’s movement, you shouldn’t be able to move the corresponding body part yourself (in a different way than the observed one). In this strict formulation the prediction appears very unlikely to hold, but in a softer formulation – that you should see interference effects in these situations – you may be able to find evidence for it.

This thought also points to the general problem of finding suitable precisions: how do you strike a balance between action (2) and perception (1)? Because both are trying to reduce the same prediction errors, the agent has to trade off recognising the world as it is (1) against changing it so that it corresponds to its expectations (2). This dichotomy is not easily resolved. When asked about it, Friston usually points to empirical priors, i.e., that the agent has learnt to choose suitable precisions based on its past experience (not very helpful if you want to know how they are chosen). I guess it’s really a question of how strongly the agent expects (wants) a certain outcome. A useful practical consideration is also that action is constrained, e.g., an agent can’t move infinitely fast, which means that enough prediction error should be left over for perceiving changes in the world (1), in particular those that are not within reach of the agent’s actions on the expected time scale.

I do not discuss the most common reservation against Friston’s free-energy principle / active inference framework (that people seem to have an intrinsic curiosity towards new things as well), because it has been covered elsewhere (John Langford’s blog, Nature Neuroscience).

Handwriting model:

In this paper the particular model used is interpreted as a model of handwriting, although neither a hand nor actual writing is modeled. Rather, a two-joint system (arm) is used whose end-effector (tip) movement is designed to be qualitatively similar to handwriting without actually producing common letters. The dynamic model of the agent consists of two parts: (a) a stable heteroclinic channel (SHC) which produces a periodic sequence of 6 continuously changing states and (b) a linear attractor dynamics in the joint angle space of the arm which is attracted to a rest position, but modulated by the distance of the tip to a desired point in Cartesian space determined by the SHC state. Thus, the agent expects the tip of its arm to move along a sequence of 6 desired points, where the dynamics of the arm movement is determined by the linear attractor. The agent observes the joint angle positions and velocities (proprioceptive) and the Cartesian positions of the elbow joint and tip (visual). The dynamic model of the world (implementing the underlying physics, so to say) lacks the SHC dynamics and only defines the linear attractor in joint space, which is modulated by action and by some (unspecified) external variables that can be used to perturb the system. Interestingly, the arm is more strongly attracted to its rest position in the world model than in the agent’s model. The reason for this is not clear to me, but it might not be important, because action can correct for it.
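A compressed sketch of my understanding of the agent’s generative model: a Lotka-Volterra style SHC over six states whose currently winning state selects a Cartesian target, which in turn modulates a damped attractor in joint-angle space. The inhibition matrix, kinematics and all parameter values below are made up for illustration; the crude Cartesian “pull” stands in for the Jacobian-based coupling of the real model.

```python
import numpy as np

# (a) SHC: Lotka-Volterra dynamics with asymmetric inhibition produce a
#     repeating sequence of 6 winning states.
# (b) Arm: damped linear attractor in joint-angle space, modulated by the
#     distance of the tip to the target selected by the winning SHC state.
# All numbers are illustrative placeholders, not the paper's parameters.

rng = np.random.default_rng(1)
n = 6
rho = np.full((n, n), 1.5)                 # strong mutual inhibition by default
for i in range(n):
    rho[i, i] = 1.0
    rho[(i + 1) % n, i] = 0.5              # weak inhibition of the successor state
                                           # lets it grow next -> heteroclinic sequence

targets = rng.uniform(-1, 1, size=(n, 2))  # one desired tip position per SHC state

def shc_dot(s):
    """Lotka-Volterra dynamics producing sequential activation of the 6 states."""
    return s * (1.0 - rho @ s)

def arm_dot(theta, theta_dot, tip_target, k=4.0, d=2.0, gain=2.0):
    """Damped attractor towards a rest posture, pulled towards the target tip point."""
    tip = np.array([np.cos(theta[0]) + np.cos(theta[0] + theta[1]),
                    np.sin(theta[0]) + np.sin(theta[0] + theta[1])])  # toy 2-joint kinematics
    pull = gain * (tip_target - tip)       # crude Cartesian pull applied in joint space
    return theta_dot, -k * theta - d * theta_dot + pull

# Euler integration: the winning SHC state selects the current target
s = np.abs(rng.normal(0.1, 0.02, n))
theta, theta_dot = np.zeros(2), np.zeros(2)
dt = 0.01
for _ in range(5000):
    s = np.clip(s + dt * shc_dot(s), 1e-6, None)   # small floor keeps the sequence cycling
    tip_target = targets[np.argmax(s)]
    dth, dthd = arm_dot(theta, theta_dot, tip_target)
    theta, theta_dot = theta + dt * dth, theta_dot + dt * dthd
```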

Biological interpretation:

The system is set up such that the agent’s model contains additional hidden states compared to the world, which may be interpreted as intentions of the agent, because they determine the order of the points that the tip moves to. In simulations the authors show that the described models within the active inference framework indeed lead to actions of the agent which implement a “writing” movement, even though the world model did not know anything about “writing” at all. This effect has already been shown in the previously mentioned publications.

What is new here is that they show that the same model can be used to observe an action without generating action at the same time. As mentioned before, they simply reduce the precision of the proprioceptive observations to achieve this. They then replay the previously recorded actions of the agent in the world by providing them via the external variables. This produces an equivalent movement of the arm in the world without any action being exerted by the agent. Instead of generating its own movement, the agent then has the task of recognising a movement executed by somebody/something else. This works because the precision of the visual observations was kept high, such that the hidden SHC states can be inferred correctly (1). The authors mention a delay before the SHC states catch up with the equivalent trajectory under action. This should not be over-interpreted, because, contrary to what is stated in the text, the initial conditions of the two simulations were not the same (see figures and code). The important argument the authors try to make here is that the same set of variables (SHC states) is equally active during action as well as action observation and, therefore, provides a potential functional explanation for activity in the mirror neuron system.

Furthermore, the authors argue that the SHC states represent the intentions of the agent, or, equivalently, the intentions of the agent being observed, by noting that the desired tip positions specified by the SHC states are only (approximately) reached at a later point in time in the world. This probably results from the inertia built into the joint angle dynamics. There are probably dynamic models for which this effect disappears, but it sounds plausible to me that when one dynamic system d1 influences the parameters of another dynamic system d2 (as here), the state of d2 first needs to catch up with the new parameter setting. So these delays would be expected for most hierarchical dynamic systems.

Another line of argument of the authors is to relate prediction errors in the model to electrophysiological (EEG) findings. This is based on Friston’s previous suggestion that superficial pyramidal cells are likely candidates for implementing prediction error units. At the same time, the activity of these cells is thought to dominate EEG signals. I cannot judge the validity of either hypothesis, although the former seems to have less experimental support than the latter. In any case, I find the corresponding arguments in this paper quite weak. The problem is that results from exactly one run, with one particular setting of parameters of one particular model, are used to make very general statements based on a mere qualitative fit of parts of the data to general experimental findings. In other words, I’m not confident that similar (desired) patterns would be seen in the prediction errors if other settings of the precisions, or other parameters of the dynamical systems, were chosen.

Conclusion:

The authors suggest how the mirror neuron system can be understood within Friston’s active inference framework. These conceptual considerations make sense. In general, the active inference framework provides large explanatory power and many phenomena may be understood in its context. However, from my point of view, it is an entirely open question how the functional considerations of the active inference framework might be implemented in neurobiological substrate. The superficial arguments based on the prediction errors generated by the model, which are presented in the paper, are not convincing. More evidence needs to be found which robustly links variables in an active inference model with neuroscientific measurements.

But also conceptually it is not clear whether the active inference solution correctly describes the computations of the brain. On the one hand, it potentially explains many important and otherwise disparate phenomena under a common principle (e.g. perception, action, learning, computing with noise, dynamics, internal models, prediction; this paper adds action understanding). On the other hand, we don’t know whether all brain functions actually follow a common principle and whether functionally equivalent solutions for subsets of phenomena may be better descriptions of the underlying computations.

An important issue for future studies which aim to discern these possibilities is that active inference is a general framework which needs to be instantiated with a particular model before its properties can be compared to experimental data. However, little is known about the hierarchical, dynamic, functional models themselves which must serve as generative models for active inference. As in this paper, it is then hard to separate the properties of the chosen model from the properties imposed by the active inference framework. Therefore, great care has to be taken in the interpretation of corresponding results, but it would be exciting to learn which properties of the active inference framework are crucial in brain function and which would need to be added, adapted, or dropped in a faithful description of (subsets of) brain function.

(*) Hidden state prediction errors result from Friston’s special treatment of dynamical systems, in which states are extended by their temporal derivatives to obtain generalised states representing a local trajectory of the states through time. Intuitively, the hidden state prediction errors can thus be seen as the difference between the velocity of the (previously inferred) hidden states, as represented by the trajectory in generalised coordinates, and the velocity predicted by the dynamic model.
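In symbols (my paraphrase of Friston’s usual notation, where $\mathcal{D}$ is the derivative operator that shifts a generalised state $\tilde{x} = (x, x', x'', \dots)$ up by one temporal order and $f$ is the dynamic model):

$$\tilde{\varepsilon}_x \;=\; \mathcal{D}\tilde{x} \;-\; f(\tilde{x}),$$

i.e. the mismatch between the velocity (and higher derivatives) encoded in the generalised state itself and the velocity predicted by the dynamic model.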

An embodied account of serial order: How instabilities drive sequence generation.

Sandamirskaya, Y. and Schöner, G.
Neural Networks, 23:1164–1179, 2010
DOI, Google Scholar

Abstract

Learning and generating serially ordered sequences of actions is a core component of cognition both in organisms and in artificial cognitive systems. When these systems are embodied and situated in partially unknown environments, specific constraints arise for any neural mechanism of sequence generation. In particular, sequential action must resist fluctuating sensory information and be capable of generating sequences in which the individual actions may vary unpredictably in duration. We provide a solution to this problem within the framework of Dynamic Field Theory by proposing an architecture in which dynamic neural networks create stable states at each stage of a sequence. These neural attractors are destabilized in a cascade of bifurcations triggered by a neural representation of a condition of satisfaction for each action. We implement the architecture on a robotic vehicle in a color search task, demonstrating both sequence learning and sequence generation on the basis of low-level sensory information.

Review

The paper presents a dynamical model of the execution of sequential actions driven by sensory feedback which allows variable duration of individual actions as signalled by external cues of subtask fulfillment (i.e. end of action). Therefore, it is one of the first functioning models with continuous dynamics which truly integrates action and perception. The core technique used is dynamic field theory (DFT) which implements winner-take-all dynamics in the continuous domain, i.e. the basic dynamics stays at a uniform baseline until a sufficiently large input at a certain position drives activity over a threshold and produces a stable single peak of activity around there. The different components of the model all run with dynamics using the same principle and are suitably connected such that stable peaks in activity can be destabilised to allow moving the peak to a new position (signalling something different).
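For readers unfamiliar with DFT, the building block is roughly an Amari-style neural field of the following kind; the kernel and all parameters below are generic choices of mine, not the paper’s values.

```python
import numpy as np

# Generic Amari-style dynamic neural field, the DFT building block described
# above: activity sits at a sub-threshold baseline h until a sufficiently
# strong localised input pushes it over threshold, after which local
# excitation and global inhibition stabilise a single peak.

n, dx, dt, tau = 181, 1.0, 1.0, 10.0
x = np.arange(n) * dx
h = -5.0                                     # resting level (baseline)
u = np.full(n, h)                            # field activation

def sigma(u, beta=1.0):                      # sigmoidal output nonlinearity
    return 1.0 / (1.0 + np.exp(-beta * u))

# interaction kernel: local excitation minus global inhibition
w_exc = 4.0 * np.exp(-0.5 * ((x[:, None] - x[None, :]) / 5.0) ** 2)
w_inh = 1.0

def field_step(u, s):
    conv = (w_exc @ sigma(u) - w_inh * sigma(u).sum()) * dx
    return u + dt / tau * (-u + h + s + conv)

# a transient localised input creates a peak
s = 7.5 * np.exp(-0.5 * ((x - 90.0) / 5.0) ** 2)
for t in range(500):
    u = field_step(u, s if t < 200 else 0.0)

print(u.max(), x[np.argmax(u)])   # with these generic settings the peak outlasts the input
```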

The aim of the exercise is to show that sequential actions of varying length can be produced by a model of continuous neuronal population dynamics. Sequential structure is induced in the model by a set of ordinal nodes which are coupled via additional memory nodes such that they are active one after the other. However, the switch to the next ordinal node in the sequence needs to be triggered by sensory input which indicates that the aim of an action has been achieved. Activity of an ordinal node then directly induces a peak in the action field at a location determined by a set of learnt weights. In the robot example the action space is defined over the hue value, i.e. each action selects a certain colour. The actual action of the robot (turning and accelerating) is controlled by an additional colour-space field and some motor dynamics not part of the sequence model. Hence, their sequence model as such only prescribes discrete actions. To decide whether an action has been successfully completed, the action field increases activity at a particular spot in a condition of satisfaction field, which only peaks at that spot if suitable sensory input drives the activity at the spot over the threshold. Which spot the action field selects is determined by hand here (in the example it’s an identity function). A peak in the condition of satisfaction field then triggers a switch to the next ordinal node in the sequence. We don’t really see an evaluation of system performance (by what criterion?), but their system seems to work ok, at least producing the sequences in the order demonstrated during learning.

The paper is quite close to what we are envisaging. The free energy principle could add a Bayesian perspective (we would have to find a way to implement the conditional progression of a sequence, but I don’t see a reason why this shouldn’t be possible). Apart from that, the function implemented by the dynamics is extremely simple. In fact, the whole sequential system could be replaced with simple, discrete if-then logic without having to change the continuous dynamics of the robot implementation layer (colour-space field and motor dynamics). I don’t see how continuous dynamics helps here except that it is more biologically plausible. This is also a point on which the authors focus in the introduction and discussion. Something else that I noticed: all dynamic variables are only 1D (except for the colour-space field, which is 2D). This is presumably because the DFT formalism requires the activity over the field to be integrated for each position in the field in every simulation step to compute the changes in activity (cf. computation of expectations in Bayesian inference), which probably becomes infeasible when the representations contain several variables.

Cortical Preparatory Activity: Representation of Movement or First Cog in a Dynamical Machine?

Churchland, M. M., Cunningham, J. P., Kaufman, M. T., Ryu, S. I., and Shenoy, K. V.
Neuron, 68:387 – 400, 2010
DOI, Google Scholar

Abstract

The motor cortices are active during both movement and movement preparation. A common assumption is that preparatory activity constitutes a subthreshold form of movement activity: a neuron active during rightward movements becomes modestly active during preparation of a rightward movement. We asked whether this pattern of activity is, in fact, observed. We found that it was not: at the level of a single neuron, preparatory tuning was weakly correlated with movement-period tuning. Yet, somewhat paradoxically, preparatory tuning could be captured by a preferred direction in an abstract “space” that described the population-level pattern of movement activity. In fact, this relationship accounted for preparatory responses better than did traditional tuning models. These results are expected if preparatory activity provides the initial state of a dynamical system whose evolution produces movement activity. Our results thus suggest that preparatory activity may not represent specific factors, and may instead play a more mechanistic role.

Review

What are the variables that best explain the preparatory tuning of neurons in dorsal premotor and primary motor cortex of monkeys doing a reaching task? This is the core question of the paper, which is motivated by the authors’ observation that preparatory and perimovement (i.e. within-movement) activity of a single neuron may differ considerably, even qualitatively (something conflicting with the view that preparatory activity is a subthreshold version of perimovement activity). This observation is experimentally underlined in the paper by showing that the average preparatory activity and the average perimovement activity of a single neuron are largely uncorrelated across experimental conditions.

To quantify how well a set of variables explains the preparatory activity of a neuron, the authors use a linear regression approach in which the values of these variables for a given experimental condition are used to predict the firing rate of the neuron in that condition. The authors compute the generalisation error of the learnt linear model with cross-validation and compare the performance of several sets of variables based on this error. The variables performing best are the principal component scores of the perimovement population activity of all recorded neurons. The difference to alternative sets of variables is significant, and in particular the wide range of considered variables makes the result convincing (e.g. target position, initial velocity, endpoints and maximum speed, but also principal component scores of EMG activity and of kinematic variables, i.e. position, speed and acceleration of the hand). That perimovement activity is the best regressor for preparatory activity is quite odd, or as Burak aptly put it: “They are predicting the past.”
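The model comparison, as I understand it, in sketch form; the arrays below are random placeholders standing in for (conditions × variables) design matrices and a (conditions,) firing-rate vector, and the variable sets are merely examples of those compared in the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

# Sketch of the comparison: for each candidate variable set, predict a
# neuron's preparatory firing rate across conditions with a linear model and
# compare the cross-validated generalisation error. Data are placeholders.

rng = np.random.default_rng(0)
n_conditions = 27
prep_rate = rng.normal(size=n_conditions)              # preparatory rate of one neuron

candidate_sets = {
    "target position":        rng.normal(size=(n_conditions, 2)),
    "kinematic PCs":          rng.normal(size=(n_conditions, 4)),
    "perimovement pop. PCs":  rng.normal(size=(n_conditions, 6)),
}

cv = KFold(n_splits=5, shuffle=True, random_state=0)
for name, X in candidate_sets.items():
    scores = cross_val_score(LinearRegression(), X, prep_rate,
                             scoring="neg_mean_squared_error", cv=cv)
    print(f"{name:25s} CV error: {-scores.mean():.3f}")   # lower = better generalisation
```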

The authors suggest a dynamical systems view as an explanation for their results and hypothesise that preparatory activity sets the initial state of the dynamical system constituted by the population of neurons. In this view, the preparatory activity of a single neuron is not sufficient to predict the evolution of its activity (note that the correlation between preparatory and perimovement activity assesses only one particular way of predicting perimovement from preparatory activity – scaling), but the evolution of activity of all neurons can be used to determine the preparatory activity of a single neuron, under the assumption that the evolution of activity is governed by approximately linear dynamics. If the dynamics is linear, then any state in the future is a linear transformation of the initial state, and given enough data points from the future the initial state can be determined by an appropriate linear inversion. The additional PCA, also a linear transformation, doesn’t change that, but makes the regression easier and, importantly for the noisy data, also regularises.
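My reading of the linearity argument in compact form: if the population state obeys $\dot{x}(t) = A\,x(t)$, then

$$x(t_k) = e^{A t_k}\, x(0), \qquad k = 1, \dots, K,$$

so stacking enough perimovement samples $x(t_1), \dots, x(t_K)$ gives an overdetermined linear system in the preparatory state $x(0)$, which can be recovered by least squares; since PCA is itself a linear map, regressing preparatory activity on perimovement PC scores is just a convenient, regularised version of this inversion.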

These findings and suggestions are all quite interesting and certainly fit into our preconceptions about neuronal activity, but are the presented results really surprising? Do people still believe that you can make sense of the activity of isolated neurons in cortex, or isn’t it already accepted that population dynamics is necessary to characterise neuronal responses? For example, Pillow et al. (Pillow2008) used coupled spiking models to successfully predict spike trains directly from stimuli in retinal ganglion cells. On the other hand, Churchland et al. indirectly claim in this paper that the population dynamics is (approximately) linear, which is certainly disputable, but what would nonlinear dynamics mean for their analysis?

Encoding of Motor Skill in the Corticomuscular System of Musicians.

Gentner, R., Gorges, S., Weise, D., aufm Kampe, K., Buttmann, M., and Classen, J.
Current Biology, 20:1869–1874, 2010
DOI, Google Scholar

Abstract

How motor skills are stored in the nervous system represents a fundamental question in neuroscience. Although musical motor skills are associated with a variety of adaptations [1], [2] and [3], it remains unclear how these changes are linked to the known superior motor performance of expert musicians. Here we establish a direct and specific relationship between the functional organization of the corticomuscular system and skilled musical performance. Principal component analysis was used to identify joint correlation patterns in finger movements evoked by transcranial magnetic stimulation over the primary motor cortex while subjects were at rest. Linear combinations of a selected subset of these patterns were used to reconstruct active instrumental playing or grasping movements. Reconstruction quality of instrumental playing was superior in skilled musicians compared to musically untrained subjects, displayed taxonomic specificity for the trained movement repertoire, and correlated with the cumulated long-term training exposure, but not with the recent past training history. In violinists, the reconstruction quality of grasping movements correlated negatively with the long-term training history of violin playing. Our results indicate that experience-dependent motor skills are specifically encoded in the functional organization of the primary motor cortex and its efferent system and are consistent with a model of skill coding by a modular neuronal architecture [4].

Review

The authors use PCA on TMS-induced postures to show that motor cortex represents building blocks of movements which adapt to everyday requirements. To be precise, the authors recorded finger movements induced by TMS over primary motor cortex and extracted, for each of the different stimulations, the posture which had the largest deviation from rest. From the resulting set of postures they computed the first 4 principal components (PCs) and looked at how well a linear combination of the PCs could reconstruct postures recorded during normal behaviour of the subjects. This is made more interesting by comparing groups of subjects with different motor experience. They use highly trained violinists and pianists and a group of non-musicians and then compare the different combinations of whose data is used for determining the PCs and what is being reconstructed (violin playing, piano playing, or grasping, where the grasping can be that of violinists or non-musicians). The basis of the comparison is a correlation (R) between the series of joint angle vectors as defined in Shadmehr1994, which can be interpreted as something like the average correlation between data points of the two sequences measured across joint angles (cf. normalised inner product matrix in GPLVM). Don’t ask me why they take exactly this measure, but probably it doesn’t matter. The main finding is that the PCs from violinists are significantly better at reconstructing violin playing than either the piano PCs or the non-musician PCs. This table is missing in the text (but the data is there; entries are mean R ± standard deviation):

R        violinists    pianists      non-musicians

violin   0.69 ± 0.09   0.63 ± 0.11   0.64 ± 0.09

piano    0.70 ± 0.06   0.74 ± 0.06   0.70 ± 0.07

grasp    0.76 ± 0.09   0.76 ± 0.09   0.76 ± 0.10

What is not discussed in the paper is that the pianists’ PCs are worse at reconstructing violin playing than the PCs of the non-musicians. An interesting finding is that the years of intensive training of the violinists correlate significantly with the reconstruction quality of violin playing from violinist PCs, while they are anticorrelated with the reconstruction quality of grasping, indicating that the postures activated in primary motor cortex become more adapted to frequently executed tasks. However, it has to be noted that this correlation analysis is based on only 9 data points.
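My reading of the reconstruction analysis in sketch form; the joint-angle matrices below are random placeholders for the TMS-evoked postures and the behavioural recordings, and the correlation measure is my guess at the Shadmehr1994-style R mentioned above.

```python
import numpy as np
from sklearn.decomposition import PCA

# Sketch: extract 4 PCs from TMS-evoked postures, project a behavioural
# joint-angle sequence onto that subspace, and score the reconstruction with
# a correlation-like R. All data below are random placeholders.

rng = np.random.default_rng(0)
n_joints = 10
tms_postures = rng.normal(size=(60, n_joints))     # TMS-evoked postures (trials x joints)
behaviour = rng.normal(size=(500, n_joints))       # e.g. violin playing (time x joints)

pca = PCA(n_components=4).fit(tms_postures)
recon = pca.inverse_transform(pca.transform(behaviour))   # linear combination of the 4 PCs

def R(a, b):
    """Correlation between two joint-angle sequences, computed across joints
    at each time point and averaged over time (my guess at the measure used)."""
    a = a - a.mean(axis=1, keepdims=True)
    b = b - b.mean(axis=1, keepdims=True)
    num = (a * b).sum(axis=1)
    den = np.sqrt((a ** 2).sum(axis=1) * (b ** 2).sum(axis=1))
    return (num / den).mean()

print(R(behaviour, recon))
```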

At the beginning of the paper they show an analysis of the recorded behaviour which is simply supposed to ensure that violin playing, piano playing and grasping movements are sufficiently different, which we may believe, although piano playing and grasping apparently are somewhat similar.

Generating coherent patterns of activity from chaotic neural networks.

Sussillo, D. and Abbott, L. F.
Neuron, 63:544–557, 2009
DOI, Google Scholar

Abstract

Neural circuits display complex activity patterns both spontaneously and when responding to a stimulus or generating a motor output. How are these two forms of activity related? We develop a procedure called FORCE learning for modifying synaptic strengths either external to or within a model neural network to change chaotic spontaneous activity into a wide variety of desired activity patterns. FORCE learning works even though the networks we train are spontaneously chaotic and we leave feedback loops intact and unclamped during learning. Using this approach, we construct networks that produce a wide variety of complex output patterns, input-output transformations that require memory, multiple outputs that can be switched by control inputs, and motor patterns matching human motion capture data. Our results reproduce data on premovement activity in motor and premotor cortex, and suggest that synaptic plasticity may be a more rapid and powerful modulator of network activity than generally appreciated.

Review

The authors present a new way of reservoir computing. The setup apparently (I haven’t read that paper) is very similar to the echo state networks of Herbert Jaeger (Jaeger and Haas, Science, 2004); the difference lies in the signal that is fed back to the reservoir from the output: while Jaeger fed back the target value f(t), here the network’s own prediction given the current weights and reservoir activity is fed back, which deviates from f(t) only by a small error. Key to their approach is a weight update rule which almost instantaneously provides weights that minimise this error. While this obviously leads to very high variability of the weights across time steps at the start of learning, they argue that this variability diminishes during learning and the weights eventually stabilise such that, when learning is switched off, the target dynamics is reproduced. They present a workaround which may make it possible to also learn non-periodic functions, but the approach is clearly better suited for periodic functions.
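For concreteness, a minimal FORCE-style sketch of the Fig. 1A architecture as I understand it: a chaotic rate reservoir whose readout is fed back into the network, with a recursive least-squares (RLS) update of the readout weights. All parameter values are illustrative, not those used in the paper.

```python
import numpy as np

# Chaotic rate reservoir with readout z(t) fed back into the network and an
# RLS update of the readout weights w; parameters are illustrative only.

N, dt, tau = 300, 1e-3, 1e-2
g = 1.5                                            # gain > 1: chaotic spontaneous activity
rng = np.random.default_rng(0)
J = g * rng.standard_normal((N, N)) / np.sqrt(N)   # fixed recurrent weights
w_fb = 2 * rng.random(N) - 1                       # fixed feedback weights
w = np.zeros(N)                                    # learned readout weights
P = np.eye(N)                                      # running inverse-correlation estimate

x = 0.5 * rng.standard_normal(N)
t_axis = np.arange(0.0, 10.0, dt)
f = np.sin(2 * np.pi * t_axis) + 0.5 * np.sin(4 * np.pi * t_axis)   # periodic target

for i in range(len(t_axis)):
    r = np.tanh(x)
    z = w @ r                                  # current output, fed back below
    x += dt / tau * (-x + J @ r + w_fb * z)

    if i % 2 == 0:                             # RLS update of the readout weights
        e = z - f[i]                           # error before the update
        Pr = P @ r
        k = Pr / (1.0 + r @ Pr)
        P -= np.outer(k, Pr)
        w -= e * k                             # near-instantaneous error reduction
```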

I wonder how the learning is divided between the feedback mechanism and the weight adaptation (network model of Fig. 1A). In particular, it could well be that the feedback mechanism is solely responsible for successful learning, while the weights just settle to a more or less arbitrary setting once the dynamics is stabilised through the feedback (making the weights uninterpretable). The authors also report how the synapses within the reservoir can be adapted to reproduce the target dynamics when no feedback signal is given from the network output (structure in Fig. 1C). Curiously, the credit assignment problem is solved by ignoring it: for the adaptation of the reservoir synapses the same network-level output error is used as for the adaptation of the output weights.

It’s interesting that it works, but it would be good to know why and how it works. The main argument of the authors for why their proposal is better than echo state networks is that it is more stable. They present corresponding results in Fig. 4, but they never tell us what they mean by stable. So how stable are the dynamics learnt by FORCE? How much can you perturb the network dynamics before it stops being able to reproduce the target dynamics? In other words, how far off the desired dynamics can you initialise the network state?

They have an interesting principal components analysis of network activity suggesting that the dynamics converges to the same values for the first principal components for different starting states, but I haven’t understood it well enough during this first read to comment further on that.

Modeling discrete and rhythmic movements through motor primitives: a review.

Degallier, S. and Ijspeert, A.
Biol Cybern, 103:319–338, 2010
DOI, Google Scholar

Abstract

Rhythmic and discrete movements are frequently considered separately in motor control, probably because different techniques are commonly used to study and model them. Yet the increasing interest in finding a comprehensive model for movement generation requires bridging the different perspectives arising from the study of those two types of movements. In this article, we consider discrete and rhythmic movements within the framework of motor primitives, i.e., of modular generation of movements. In this way we hope to gain an insight into the functional relationships between discrete and rhythmic movements and thus into a suitable representation for both of them. Within this framework we can define four possible categories of modeling for discrete and rhythmic movements depending on the required command signals and on the spinal processes involved in the generation of the movements. These categories are first discussed in terms of biological concepts such as force fields and central pattern generators and then illustrated by several mathematical models based on dynamical system theory. A discussion on the plausibility of these models concludes the work.

Review

In the first part, the paper reviews experimental evidence for the existence of a motor primitive system located on the level of the spinal cord. In particular, the discussion is centred on the existence of central pattern generators and force fields (also: muscle synergies) defined in the spinal cord. Results showing the independence of these from cortical signals exist for animals up to the cat, or so. “In humans, the activity of the isolated spinal cord is not observable, […]: influences from higher cortical areas and from sensory pathways can hardly be excluded.”

The remainder of the article reviews dynamical systems that have been proposed as models of movement primitives. The models are roughly characterised according to their assumptions about the relationship between discrete and rhythmic movements. The authors define 4 categories: two/two, one/two, one/one and two/one, where a “two” means separate systems for discrete and rhythmic movements and a “one” means a common system; the number before the slash corresponds to the planning process (signals potentially generated as motor commands from cortex) and the number after the slash to the execution system in which the movement primitives are defined.

You would think that the aim of this exercise is to work out advantages and disadvantages of the models, but the authors mainly restrict themselves to describing them. The main conclusion then is that discrete and rhythmic movements can be generated from movement primitives in the spinal cord while cortex may only provide simple, non-patterned commands. The proposed categorisation may help to discern models experimentally, but apparently there is currently no conclusive evidence favouring any of the categories (the authors repeatedly cite two conflicting studies).

Winnerless competition between sensory neurons generates chaos: A possible mechanism for molluscan hunting behavior.

Varona, P., Rabinovich, M. I., Selverston, A. I., and Arshavsky, Y. I.
Chaos: An Interdisciplinary Journal of Nonlinear Science, 12:672–677, 2002
DOI, Google Scholar

Abstract

In the presence of prey, the marine mollusk Clione limacina exhibits search behavior, i.e., circular motions whose plane and radius change in a chaotic-like manner. We have formulated a dynamical model of the chaotic hunting behavior of Clione based on physiological in vivo and in vitro experiments. The model includes a description of the action of the cerebral hunting interneuron on the receptor neurons of the gravity sensory organ, the statocyst. A network of six receptor model neurons with Lotka-Volterra-type dynamics and nonsymmetric inhibitory interactions has no simple static attractors that correspond to winner take all phenomena. Instead, the winnerless competition induced by the hunting neuron displays hyperchaos with two positive Lyapunov exponents. The origin of the chaos is related to the interaction of two clusters of receptor neurons that are described with two heteroclinic loops in phase space. We hypothesize that the chaotic activity of the receptor neurons can drive the complex behavior of Clione observed during hunting.

Review

see Levi2005 for a short summary in context

My biggest concern with this paper is that the changes in direction of the mollusc may also result from feedback from the body, and especially from the statocysts, during its accelerated swimming. The question is: are these direction changes a result of chaotic but deterministic dynamics in the sensory network, as suggested by the model, or are they a result of essentially random processes which may be influenced by feedback from other networks? The authors note that in their model “The neurons keep the sequence of activation but the interval in which they are active is continuously changing in time”. After a day of searching for papers which have investigated the swimming behaviour of Clione limacina (the mollusc in question) I came to the conclusion that the data shown in Fig. 1 is likely the only published data set of this swimming behaviour. This small data set suggests random changes in direction, in contrast to the model, but it does not allow any definite conclusions about the repetitiveness of direction changes.

The role of sensory network dynamics in generating a motor program.

Levi, R., Varona, P., Arshavsky, Y. I., Rabinovich, M. I., and Selverston, A. I.
J Neurosci, 25:9807–9815, 2005
DOI, Google Scholar

Abstract

Sensory input plays a major role in controlling motor responses during most behavioral tasks. The vestibular organs in the marine mollusk Clione, the statocysts, react to the external environment and continuously adjust the tail and wing motor neurons to keep the animal oriented vertically. However, we suggested previously that during hunting behavior, the intrinsic dynamics of the statocyst network produce a spatiotemporal pattern that may control the motor system independently of environmental cues. Once the response is triggered externally, the collective activation of the statocyst neurons produces a complex sequential signal. In the behavioral context of hunting, such network dynamics may be the main determinant of an intricate spatial behavior. Here, we show that (1) during fictive hunting, the population activity of the statocyst receptors is correlated positively with wing and tail motor output suggesting causality, (2) that fictive hunting can be evoked by electrical stimulation of the statocyst network, and (3) that removal of even a few individual statocyst receptors critically changes the fictive hunting motor pattern. These results indicate that the intrinsic dynamics of a sensory network, even without its normal cues, can organize a motor program vital for the survival of the animal.

Review

The authors investigate the neural mechanisms of hunting behaviour in a mollusk. Its simplicity allows the nervous system to be completely separated from the rest of the body and investigated in isolation, but as a whole. In particular, the authors are interested in the causal influence of sensory neurons on motor activity.

The mollusk has two types of behaviour for positioning its body in the water: 1) it uses gravitational sensors (statocysts) to maintain a head-up position in the water under normal circumstances and 2) it swims in apparently chaotic, small loops when it suspects prey in its vicinity (searching). In this paper the authors present evidence that the searching behaviour 2) is still largely dependent on the (internal) dynamics of the statocysts.

The model is as follows (see Varona2002): without prey, inhibitory connections between sensory cells in the statocysts make sure that only a small proportion of cells are firing (those that are activated by mechanoreceptors according to gravity acting on a stone-like structure in the statocysts), but when prey is in the vicinity of the mollusk (as indicated, e.g., by chemoreceptors), cerebral hunting neurons additionally excite the statocyst cells, inducing chaotic dynamics among them. The important thing to note is that the statocysts then still influence motor behaviour, as shown in the paper. So the argument is that the same mechanism for producing motor output dependent on statocyst signals can be used to generate searching simply by changing the activity of the sensory neurons.

Overall the evidence presented in the paper is convincing that statocyst activity influences the activity of the motor neurons also during the searching behaviour, but it cannot be said conclusively that the statocysts are necessary for producing the swimming, because the setup only allowed the activity of motor neurons to be observed without actually seeing the behaviour (actually, Levi2004 show that the typical searching behaviour cannot be produced when the statocysts are removed). For the same reason, the experiments also neglected possible feedback mechanisms between body/mollusk and environment, e.g. changes in statocyst activity due to a changing gravitational state, i.e. orientation. The argument there is, though not explicitly stated, that the statocyst stops computing the actual orientation of the body and is instead driven purely by its own dynamics. Feedback from the peripheral motor system is not modelled (Varona2002, arguing that this is not necessary for determining the origin of the apparent chaotic behaviour).

For us this is a nice example for how action can be a direct consequence of perception, but even more so that internal sensory dynamics can produce differentiated motor behaviour. The connection between sensory states and motor activity is relatively fixed, but different motor behaviour may be generated by different processing in the sensory system. The autonomous dynamics of the statocysts in searching behaviour may also be interpreted as being induced from different, high-precision predictions on a higher level. It may be questioned how good a model the mollusk nervous system is for information processing in the human brain, but maybe they share these principles.