In this Primer, we shall describe one interesting property of neocortical circuits – recurrent connectivity – and suggest what its computational significance might be.
First, they use data on the distribution of synapses in cat visual cortex to argue that activity in a cortical area is driven predominantly by recurrent connections within that area. They then suggest that the reason for this arrangement is that suitable recurrent connections can enhance and denoise incoming signals, and they demonstrate this behaviour in a model based on linear threshold neurons (LTNs). They avoid sigmoid activation functions on the grounds that neurons apparently only rarely operate at their maximum firing rate, so a saturating nonlinearity is unnecessary; to maintain stability they instead use a global inhibitory unit. I guess you could equivalently use a suitable sigmoid function. Finally, they suggest that top-down connections may bias the activity in the recurrent network so that one of several alternative inputs can be selected based on, e.g., attention.
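A minimal sketch of that amplification-and-denoising idea, assuming rate dynamics with rectified (threshold-linear) units, a rank-one excitatory weight matrix tuned to one stored pattern, and a single global inhibitory feedback unit. All parameter values and the pattern itself are illustrative choices of mine, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50

# Hypothetical stored pattern the recurrent weights are tuned to amplify
p = np.abs(rng.normal(size=N))
p /= np.linalg.norm(p)

# Excitatory recurrent weights aligned with the pattern (illustrative rank-one choice)
W = 1.5 * np.outer(p, p)
w_inh = 0.02  # strength of the global inhibitory feedback unit

def relu(x):
    # linear-threshold activation: no saturation, only rectification
    return np.maximum(x, 0.0)

def run(I, steps=300, dt=0.1):
    r = np.zeros(N)
    for _ in range(steps):
        # the global inhibitory unit pools all activity and feeds it back
        inhibition = w_inh * r.sum()
        drive = W @ r + I - inhibition
        r = r + dt * (-r + relu(drive))  # Euler step of the rate dynamics
    return r

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

I = relu(p + 0.3 * rng.normal(size=N))  # noisy version of the stored pattern
r = run(I)
print(cosine(I, p), cosine(r, p))  # recurrent activity aligns more closely with p
```

The point of the global inhibition here is that it subtracts enough pooled activity to keep the effective gain along the pattern direction below one, so the network settles to an amplified, cleaned-up version of the input instead of running away.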
So here the functional role of the recurrent neural network is merely to increase the signal-to-noise ratio. It seems a bit strange to me that no actual computation is done. Does that mean that all the computation from sensory signals to hidden states is done by the projections from lower-level to higher-level areas? This seems consistent with the reservoir computing idea, where the reservoir can also be seen as enhancing the representation of the input (by stretching its effects out in time). The difference is just that the dynamics and function of reservoirs are more involved.
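To make the comparison concrete, here is a toy echo-state-style reservoir (my own sketch, not from the paper or the reservoir computing literature directly) showing what "stretching the input's effects in time" means: a single input pulse keeps influencing the reservoir state long after it has ended. The weight scaling and sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100

# Random recurrent weights, rescaled so the spectral radius is below 1
# (a standard echo-state-property heuristic)
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
w_in = rng.normal(size=N)  # input weights

def reservoir_states(u):
    # Drive the reservoir with scalar input sequence u and record its states
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + w_in * u_t)
        states.append(x.copy())
    return np.array(states)

u = np.zeros(20)
u[0] = 1.0  # a single input pulse at t = 0
X = reservoir_states(u)
print(np.linalg.norm(X, axis=1))  # state magnitude decays slowly, not instantly
```

A readout trained on these states can then recover temporally extended features of the input, which is the sense in which the reservoir "enhances" the representation rather than computing an explicit target itself.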
The ideas presented here are almost the same as those already proposed by the first author in 1995 (see Douglas1995).