The majority of synapses in the mammalian cortex originate from cortical neurons. Indeed, the largest input to cortical cells comes from neighboring excitatory cells. However, most models of cortical development and processing do not reflect the anatomy and physiology of feedback excitation and are restricted to serial feedforward excitation. This report describes how populations of neurons in cat visual cortex can use excitatory feedback, characterized as an effective “network conductance”, to amplify their feedforward input signals and demonstrates how neuronal discharge can be kept proportional to stimulus strength despite strong, recurrent connections that threaten to cause runaway excitation. These principles are incorporated into models of cortical direction and orientation selectivity that emphasize the basic design principles of cortical architectures.
The paper suggests that the functional role of recurrent excitatory connections is to amplify (i.e., increase the gain between inputs and outputs) and denoise the inputs to a (sensory) cortical area. This would allow the input signals to be relatively weak and would therefore help to save energy (an argument the authors do not make explicitly).
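The amplification claim can be illustrated with a minimal linear rate model (my own sketch, not the paper's circuit model): with a recurrent weight w below the critical value of 1, a feedforward input is boosted by the steady-state gain 1/(1 - w).

```python
# Minimal linear rate model of recurrent amplification (illustrative sketch,
# not the paper's conductance-based circuit). Dynamics: tau * dr/dt = -r + w*r + x,
# so the steady state is r = x / (1 - w): the feedforward input x is amplified
# by the recurrent gain 1 / (1 - w), provided w < 1.

def steady_state_rate(x, w):
    assert w < 1.0, "recurrent gain must stay below 1 for a stable steady state"
    return x / (1.0 - w)

x = 0.1  # weak feedforward input
for w in (0.0, 0.5, 0.9):
    print(f"w = {w}: steady-state rate = {steady_state_rate(x, w):.3f}")
```

With w = 0.9 the same weak input drives a tenfold larger response, which is the sense in which recurrence lets the feedforward signal stay small.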
The work is motivated by an estimate of the number of recurrent connections made directly between spiny stellate cells of layer IV in cat visual cortex. The authors conclude that these connections alone can already “provide a significant source of recurrent excitation”.
First, they consider an electronic-circuit analogy describing the feed-forward and recurrent currents acting on a neuron in the network. They examine how the recurrent conductance (which can be seen as the connection strength among all recurrently coupled neurons) affects the stability of the network and suggest that inhibitory neurons keep the network stable when the recurrent conductance is so high that it would otherwise lead to runaway network activity. They also implemented a recurrent network model consisting of excitatory and inhibitory spiking neurons and showed that it can implement the direction selectivity of V1 simple cells. Interestingly, direction selectivity here rests on an asymmetric timing of the excitatory and inhibitory inputs from the LGN (“in the preferred direction excitation precedes inhibition”), which they support with two references.
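The stabilization argument can be sketched with a toy two-population rate model (my own construction, much simpler than the paper's spiking network): an excitatory population whose self-excitation exceeds the critical value diverges on its own, but coupling in an inhibitory population restores a stable, amplified steady state.

```python
# Toy E-I rate model (illustrative sketch, not the paper's spiking network).
#   dE/dt = -E + w_ee*E - w_ei*I + x
#   dI/dt = -I + w_ie*E
# With w_ee > 1 and no inhibition (w_ei = 0), excitation runs away.
# With inhibition, the network settles at E = x / (1 - w_ee + w_ei*w_ie).

def simulate(w_ee, w_ei, w_ie, x=0.1, dt=0.01, steps=5000):
    E, I = 0.0, 0.0
    for _ in range(steps):
        dE = -E + w_ee * E - w_ei * I + x
        dI = -I + w_ie * E
        E += dt * dE
        I += dt * dI
    return E, I

# With inhibition: converges to E = 0.1 / (1 - 1.5 + 1.0) = 0.2,
# i.e. the input is amplified despite w_ee > 1.
E_stable, _ = simulate(w_ee=1.5, w_ei=1.0, w_ie=1.0)

# Without inhibition: runaway excitation.
E_runaway, _ = simulate(w_ee=1.5, w_ei=0.0, w_ie=0.0)

print(f"with inhibition:    E = {E_stable:.3f}")
print(f"without inhibition: E = {E_runaway:.3e}")
```

The weight values here (w_ee = 1.5, w_ei = w_ie = 1.0) are arbitrary choices that satisfy the linear stability conditions; the point is only that inhibition can tame a recurrent excitatory gain that would diverge on its own.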
I find it hard to believe that cortical recurrent networks apparently don’t perform any computation of their own beyond improving the incoming signal. It would mean that all computations are actually done by the feed-forward connections between areas, the excitation-inhibition timing asymmetry being an example. But then, if you assume a hierarchy of similar processing units, where does, e.g., the necessary excitation-inhibition asymmetry come from? Potentially there are readout neurons outside the recurrently connected network which do exactly that. Then again, the whole processing in the brain would be feed-forward, and the only intrinsically dynamic units would merely amplify the feed-forward signals. Reservoir computing could be seen as an extension of this scheme in which the dynamics of the recurrent neurons are allowed to be more sophisticated, but become uninterpretable in turn. Still, the presented model is consistent, as far as I can tell, with the idea that the activity in response to a stimulus represents the posterior while the activity at rest represents the prior over the variables encoded by the network under consideration.
Note that the authors do not have any direct experimental evidence for their model in terms of simultaneous recordings from neurons in the same network. They only compare two summary statistics based on individual cells, and I do not understand the experiment underlying the second one.