Spike-Based Population Coding and Working Memory.

Boerlin, M. and Denève, S.
PLoS Comput Biol, 7:e1001080, 2011

Abstract

Compelling behavioral evidence suggests that humans can make optimal decisions despite the uncertainty inherent in perceptual or motor tasks. A key question in neuroscience is how populations of spiking neurons can implement such probabilistic computations. In this article, we develop a comprehensive framework for optimal, spike-based sensory integration and working memory in a dynamic environment. We propose that probability distributions are inferred spike-per-spike in recurrently connected networks of integrate-and-fire neurons. As a result, these networks can combine sensory cues optimally, track the state of a time-varying stimulus and memorize accumulated evidence over periods much longer than the time constant of single neurons. Importantly, we propose that population responses and persistent working memory states represent entire probability distributions and not only single stimulus values. These memories are reflected by sustained, asynchronous patterns of activity which make relevant information available to downstream neurons within their short time window of integration. Model neurons act as predictive encoders, only firing spikes which account for new information that has not yet been signaled. Thus, spike times signal deterministically a prediction error, contrary to rate codes in which spike times are considered to be random samples of an underlying firing rate. As a consequence of this coding scheme, a multitude of spike patterns can reliably encode the same information. This results in weakly correlated, Poisson-like spike trains that are sensitive to initial conditions but robust to even high levels of external neural noise. This spike train variability reproduces the one observed in cortical sensory spike trains, but cannot be equated to noise. On the contrary, it is a consequence of optimal spike-based inference. In contrast, we show that rate-based models perform poorly when implemented with stochastically spiking neurons.

Author Summary

Most of our daily actions are subject to uncertainty. Behavioral studies have confirmed that humans handle this uncertainty in a statistically optimal manner. A key question then is what neural mechanisms underlie this optimality, i.e. how can neurons represent and compute with probability distributions. Previous approaches have proposed that probabilities are encoded in the firing rates of neural populations. However, such rate codes appear poorly suited to understand perception in a constantly changing environment. In particular, it is unclear how probabilistic computations could be implemented by biologically plausible spiking neurons. Here, we propose a network of spiking neurons that can optimally combine uncertain information from different sensory modalities and keep this information available for a long time. This implies that neural memories not only represent the most likely value of a stimulus but rather a whole probability distribution over it. Furthermore, our model suggests that each spike conveys new, essential information. Consequently, the observed variability of neural responses cannot simply be understood as noise but rather as a necessary consequence of optimal sensory integration. Our results therefore question strongly held beliefs about the nature of neural “signal” and “noise”.

Review

[Note: below I often write "posterior" but mean the log-posterior, as this is what the authors mostly compute with.]

Boerlin and Denève present a recurrent spiking neural network which integrates dynamically changing stimuli from different modalities, allows for a simple readout of the complete posterior distribution, predicts state dynamics and may therefore act as a working memory when a stimulus is absent. Interestingly, spikes in the recurrent neural network (RNN) are generated deterministically, yet from an outside perspective the interspike intervals of individual neurons appear to follow a Poisson distribution, as measured experimentally. How is all this achieved, and what are the limitations?

The experimental setup is as follows: there is a one-dimensional, noisy, dynamic variable in the world (the state from here on) which we want to track through time. However, observations are only made through noisy spike trains from different sensory modalities, where the conditional probability of a spike given a particular state is modelled as a Poisson distribution (the derivation allows any exponential-family distribution, but the experiments use a Poisson). The RNN receives these spikes as input, and the question then is how to set up the dynamics of each neuron in the RNN such that a simple integrator can read out the posterior distribution of the state from the RNN activities.
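To make this setup concrete, here is a minimal sketch of the generative model just described: a drifting, diffusing one-dimensional state observed only through Poisson spike trains. All numerical values (drift, diffusion, tuning widths, rates) are illustrative choices, not the paper's parameters, and the Gaussian tuning curves are one simple instance of the assumed exponential-family observation model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not taken from the paper)
dt = 1e-3            # time step (s)
T = 1.0              # total duration (s)
a = 0.5              # constant drift of the state
sigma = 0.3          # diffusion strength (Wiener process)
n_sensory = 20       # sensory neurons in one modality
pref = np.linspace(-2.0, 2.0, n_sensory)  # preferred states
g_max, width = 50.0, 0.5                  # peak rate (Hz), tuning width

steps = round(T / dt)
x = np.zeros(steps)                          # latent state trajectory
spikes = np.zeros((steps, n_sensory), dtype=int)

for t in range(1, steps):
    # constant-drift / Wiener-diffusion state dynamics
    x[t] = x[t - 1] + a * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    # Poisson observations: the rate is a Gaussian tuning curve of the state
    rates = g_max * np.exp(-(x[t] - pref) ** 2 / (2 * width ** 2))
    spikes[t] = rng.poisson(rates * dt)
```

The `spikes` array is what the RNN would receive as input; the inference problem is to recover p(x_t | all spikes up to t) from it.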

The main trick of the paper is to find an approximation of the true (log-)posterior L, which in turn may be approximated using the readout posterior G, under the assumption that the two are good approximations of each other. Note the circularity in this statement. It is resolved by a spiking mechanism which ensures that the two are indeed close to each other, which in turn ensures that the true posterior L is approximated. The rest is deriving formulae and substituting them into each other until you obtain a formula describing the (dynamics of the) membrane potential of a single neuron in the RNN, which depends only on sensory and RNN spikes, the tuning curves or gains of the associated neurons, rate constants of the network (called leaks here) and the (true) parameters of the state dynamics.

The approximations used for the (log-)posterior are a Taylor expansion of 2nd order, a subsequent Taylor expansion of 1st order and a discretisation of the posterior according to the preferred state of each RNN neuron. The most critical assumption for the derivation, however, is that the dynamics are 1st-order Markovian and linear. In particular, the authors assume state dynamics with a constant drift and a Wiener-process diffusion. In the last paragraph of the discussion they mention that it is straightforward to extend the model to state-dependent drift, but I don't see how this could be done, because their derivation of L crucially depends on the observation that p(x_t | x_{t-dt}) = p(x_t - x_{t-dt}), which only holds for state-independent drift.
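Why the translation-invariance holds only for constant drift can be seen directly from the small-step transition density (a standard Euler–Maruyama argument, written out here for clarity rather than taken from the paper):

```latex
% Constant drift a, diffusion sigma:  dx = a\,dt + \sigma\,dW
p(x_t \mid x_{t-dt})
  = \mathcal{N}\!\left(x_t;\; x_{t-dt} + a\,dt,\; \sigma^2 dt\right)
  = f(x_t - x_{t-dt}),
% i.e. a function of the increment alone.
%
% State-dependent drift a(x):
p(x_t \mid x_{t-dt})
  = \mathcal{N}\!\left(x_t;\; x_{t-dt} + a(x_{t-dt})\,dt,\; \sigma^2 dt\right),
% which depends on x_{t-dt} itself, not only on x_t - x_{t-dt}.
```

In the second case the transition kernel is no longer a function of the difference, so the step in the derivation that rewrites p(x_t | x_{t-dt}) as p(x_t - x_{t-dt}) breaks down.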

The resulting membrane potential has the form of a leaky integrate-and-fire neuron. The authors distinguish 4 parts: a leakage current, feed-forward input from sensory neurons (containing a bias term which, I think, is wrong in Materials and Methods, but which is also not used in the experiments), instantaneous recurrent input from the RNN, and slow recurrent currents from the RNN which are responsible for maintaining a memory of the approximated posterior beyond the time constant of the neuron. The slow currents are defined by two separate differential equations, and I wonder where these are implemented in the neuron if it already has a membrane potential to which the slow currents contribute. Also interesting to note is that all terms except the leakage current are modulated by the RNN spike gains (Gamma), which define the effect a spike of neuron i has on the readout of the approximate posterior at the preferred state of neuron j. This includes the feed-forward input, meaning that the feed-forward connection weights are a linear combination of the readout gains (Gamma) and the gains defined by the conditional probability of sensory spikes given the state (H). In other words, the feed-forward weights appear to be tuned to take into account the effect an input spike has on the readout.
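The four-part decomposition can be sketched schematically as one Euler step of the membrane dynamics. Everything here (the weight matrices `W_ff`, `W_fast`, `W_slow_in`, `W_slow_out`, the rate constants, the single-filter form of the slow current) is an illustrative stand-in for the paper's exact formulas; the point is only the structure: leak, feed-forward and fast recurrent spikes entering as impulses, and a slow current with its own dynamics independent of V.

```python
import numpy as np

def step(V, u_slow, s_in, s_rnn, p, dt):
    """One Euler step of the four-part membrane dynamics (schematic)."""
    dV = -p["lam"] * V * dt                 # 1) leakage current
    dV = dV + p["W_ff"] @ s_in              # 2) feed-forward sensory spikes (impulses)
    dV = dV + p["W_fast"] @ s_rnn           # 3) instantaneous recurrent input
    # 4) slow recurrent current: filtered RNN spikes with their own dynamics,
    #    carrying the memory of the posterior past the neuron's time constant
    du = (-p["lam_slow"] * u_slow + p["W_slow_in"] @ s_rnn) * dt
    u_new = u_slow + du
    dV = dV + (p["W_slow_out"] @ u_new) * dt
    return V + dV, u_new

# Tiny demo: 4 RNN neurons, 3 sensory neurons, random illustrative weights
rng = np.random.default_rng(1)
p = {
    "lam": 20.0, "lam_slow": 2.0,
    "W_ff": rng.normal(size=(4, 3)) * 0.1,
    "W_fast": rng.normal(size=(4, 4)) * 0.1,
    "W_slow_in": rng.normal(size=(4, 4)) * 0.1,
    "W_slow_out": rng.normal(size=(4, 4)) * 0.1,
}
V, u = np.zeros(4), np.zeros(4)
for _ in range(100):
    s_in = rng.poisson(0.05, size=3)
    s_rnn = rng.poisson(0.05, size=4)
    V, u = step(V, u, s_in, s_rnn, p, dt=1e-3)
```

Note how `u_slow` evolves by its own equation and only feeds into V, which is exactly the point raised above: it is unclear what biophysical quantity inside a single neuron would carry this second state variable.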

Anyway, the resulting spiking mechanism makes neurons spike whenever doing so improves the readout of the posterior from the RNN. The authors interpret this as a prediction error signal: a spike indicates that the posterior represented by the RNN deviates from the true (approximated) posterior. I guess we can call this prediction, because the readout/posterior has dynamics. But note that it is hard to interpret individual spikes as prediction errors with respect to the input spike train (something not desired anyway?). The authors also show that this representation is highly redundant: there always exist alternative spike trains of the RNN which represent the same posterior. This accounts for the demonstrated robustness and apparent randomness of the coding scheme. However, it also makes it impossible to interpret what it means when a neuron is silent. Nevertheless, neurons still exhibit characteristic tuning curves on average.
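A toy version of the "spike only if it improves the readout" rule makes the prediction-error interpretation and the redundancy tangible. Here each neuron i contributes a readout kernel Gamma[:, i]; a greedy squared-error criterion (a simplification of the paper's rule, with names G, L, Gamma as used in this review) says it should fire exactly when adding its kernel brings the readout G closer to the target L, which reduces to a threshold on a membrane-potential-like quantity.

```python
import numpy as np

def greedy_spikes(L, G, Gamma):
    """Indices of neurons whose spike would reduce ||L - G||^2 (toy rule)."""
    err = L - G
    # Firing neuron i changes the squared error by -2*Gamma[:,i].err + ||Gamma[:,i]||^2,
    # so it improves the readout iff Gamma[:,i].err > ||Gamma[:,i]||^2 / 2:
    V = Gamma.T @ err                          # "membrane potentials"
    thresh = 0.5 * np.sum(Gamma ** 2, axis=0)  # per-neuron thresholds
    return np.where(V > thresh)[0]

# If the readout already matches the target, no neuron fires;
# if it lags behind, the neurons whose kernels cover the error fire.
Gamma = np.eye(3)
print(greedy_spikes(np.array([1.0, 0.0, 0.0]), np.zeros(3), Gamma))  # → [0]
print(greedy_spikes(np.zeros(3), np.zeros(3), Gamma))                # → []
```

Redundancy follows directly: with overlapping kernels, many different subsets of neurons can cancel the same error, so silence of one particular neuron is uninformative on its own.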

Notice that they do not assume a distributional form of the posterior and indeed they show that the network can represent a bimodal posterior, too.

In summary, the work at hand impressively combines many important aspects of recognising dynamic stimuli in a spike-based framework. Probably the most surprising property of the suggested neural network is that it produces spikes deterministically in order to optimise a global criterion, albeit with a local spiking rule. However, the authors have to make strong assumptions to arrive at these results. In particular, they need constant-drift dynamics for their derivations, but the "local" spiking rule also turns out to use global information: the weights of input and recurrently connected neurons in the membrane potential dynamics of an RNN neuron are determined by the readout gains of every neuron in the network, i.e., each neuron needs to know what a spike of every other neuron contributes to the posterior. I wonder what a corresponding learning rule would look like. Additionally, they need to assume that the RNN is fully connected, i.e., that every neuron that contributes to the posterior sends messages (spikes) to all other neurons contributing to the posterior. The authors also do not explain how the suggested slow recurrent currents are represented in a spiking neuron. After all, these currents seem to have dynamics independent of the membrane potential of the neuron, yet they implement the dynamics of the posterior and are therefore absolutely central for predicting the development of the posterior over time. Finally, we have to keep in mind that the population of neurons coded for a discretisation of the posterior of a one-dimensional variable. With increasing dimensionality, the number of neurons needed to represent the posterior therefore grows exponentially, and all of them would have to be connected.
