SORN: a self-organizing recurrent neural network.

Lazar, A., Pipa, G., and Triesch, J.
Front Comput Neurosci, 3:23, 2009

Abstract

Understanding the dynamics of recurrent neural networks is crucial for explaining how the brain processes information. In the neocortex, a range of different plasticity mechanisms are shaping recurrent networks into effective information processing circuits that learn appropriate representations for time-varying sensory stimuli. However, it has been difficult to mimic these abilities in artificial neural network models. Here we introduce SORN, a self-organizing recurrent network. It combines three distinct forms of local plasticity to learn spatio-temporal patterns in its input while maintaining its dynamics in a healthy regime suitable for learning. The SORN learns to encode information in the form of trajectories through its high-dimensional state space reminiscent of recent biological findings on cortical coding. All three forms of plasticity are shown to be essential for the network’s success.

Review

The paper asks whether adapting an RNN used as a reservoir yields better performance on a sequence prediction task than randomly initialised RNNs. The authors present an adaptation procedure based on spike-timing-dependent plasticity (STDP), kept in check by intrinsic plasticity (IP) and synaptic normalisation (SN) as homeostatic mechanisms, and show that the adapted RNNs indeed outperform the random ones. They further show that IP and SN are necessary for good results: without either, the RNN falls into pathological firing patterns (bursting, always on, always off).
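To make the three rules concrete, here is a minimal sketch of how they might look in a binary, discrete-time network of this kind. The function names, learning rates and target firing rate are my own placeholders, not values taken from the paper; only the general form of the updates follows the description above.

```python
import numpy as np

def stdp(W_ee, x_prev, x_now, eta_stdp=0.001):
    """Additive STDP on excitatory-to-excitatory weights: strengthen j->i
    if unit j fired one step before unit i, weaken it if the order was
    reversed. Only existing (non-zero) connections are updated."""
    dw = eta_stdp * (np.outer(x_now, x_prev) - np.outer(x_prev, x_now))
    return np.where(W_ee > 0, np.clip(W_ee + dw, 0.0, 1.0), 0.0)

def synaptic_normalisation(W_ee):
    """Rescale each unit's incoming excitatory weights to sum to one,
    so STDP redistributes drive between synapses without growing it."""
    sums = W_ee.sum(axis=1, keepdims=True)
    sums[sums == 0] = 1.0
    return W_ee / sums

def intrinsic_plasticity(T_e, x_now, eta_ip=0.001, target_rate=0.1):
    """Shift each excitatory unit's threshold so that its average firing
    rate drifts towards a fixed homeostatic target."""
    return T_e + eta_ip * (x_now - target_rate)
```

Applied every time step, STDP reshapes the recurrent weights while SN and IP keep the total synaptic drive and the firing rates bounded, which is exactly the role the authors attribute to the homeostatic mechanisms.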

This is one of the few studies that demonstrates successful learning in RNNs. The model, however, is rather simple: a binary network in discrete time. The connectivity is more elaborate: excitatory units are recurrently connected, while a smaller number of inhibitory units have no connections among themselves but are fully and reciprocally connected to all excitatory units. Input reaches the excitatory units through input units, which are divided into subsets that each emit a spike (1) when a specific symbol is currently present in the input sequence (the sequences consist of letters and numbers); a sketch of this layout follows below.

The authors show that the RNN develops states (the activity of all units as a vector) that are specific to individual input symbols and that, in addition, encode the position of the symbol within the sequence. This simplifies reading out the current symbol from the RNN activity and therefore improves prediction of the next symbol with a standard reservoir computing readout function. However, the authors note that the RNN keeps changing its response to the input, i.e., the learning rule does not converge, which means the readout function would have to be continually updated as well. Consequently, they switch off learning during the test phase.
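For concreteness, here is a rough sketch of the connectivity and the discrete-time state update as described above. Unit counts, connection fractions and threshold ranges are placeholder assumptions; only the structure (sparse E-to-E recurrence, full E-I coupling, no I-I connections, thresholded binary units) follows the review.

```python
import numpy as np

rng = np.random.default_rng(0)
N_e, N_i, N_u = 200, 40, 10          # excitatory, inhibitory, input units (assumed sizes)

# Sparse random E->E weights, dense E<->I coupling, no I->I connections
W_ee = (rng.random((N_e, N_e)) < 0.05) * rng.random((N_e, N_e))
np.fill_diagonal(W_ee, 0.0)          # no self-connections
W_ei = rng.random((N_e, N_i))        # inhibitory -> excitatory
W_ie = rng.random((N_i, N_e))        # excitatory -> inhibitory
W_eu = (rng.random((N_e, N_u)) < 0.05) * 1.0   # input units -> excitatory subsets

T_e = rng.random(N_e) * 0.5          # excitatory thresholds
T_i = rng.random(N_i) * 0.5          # inhibitory thresholds

def step(x, y, u):
    """One discrete-time update of the binary network: excitation minus
    inhibition plus external input, thresholded to {0, 1}."""
    x_new = (W_ee @ x - W_ei @ y + W_eu @ u - T_e > 0).astype(float)
    y_new = (W_ie @ x - T_i > 0).astype(float)
    return x_new, y_new
```

The readout in reservoir computing is then typically a simple classifier or linear regression trained on the collected state vectors x, which is consistent with the "standard readout function" the review mentions.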

The authors also show that keeping the recurrent connections between excitatory units sparse is beneficial.
