Efficient Reductions for Imitation Learning.

Ross, S. and Bagnell, D.
in: JMLR W&CP 9: AISTATS 2010, pp. 661–668, 2010
Google Scholar

Abstract

Imitation Learning, while applied successfully on many large real-world problems, is typically addressed as a standard supervised learning problem, where it is assumed the training and testing data are i.i.d.. This is not true in imitation learning as the learned policy influences the future test inputs (states) upon which it will be tested. We show that this leads to compounding errors and a regret bound that grows quadratically in the time horizon of the task. We propose two alternative algorithms for imitation learning where training occurs over several episodes of interaction. These two approaches share in common that the learner’s policy is slowly modified from executing the expert’s policy to the learned policy. We show that this leads to stronger performance guarantees and demonstrate the improved performance on two challenging problems: training a learner to play 1) a 3D racing game (Super Tux Kart) and 2) Mario Bros.; given input images from the games and corresponding actions taken by a human expert and near-optimal planner respectively.

Review

The authors note that previous approaches to learning a policy from an example policy are limited in the sense that they only ever see successful examples generated by the desired policy. They therefore exhibit a larger error than you would expect from supervised learning on independent samples, because a single error can propagate through the whole series of decisions if the policy hasn’t learnt how to recover to the desired behaviour once an error has occurred. They then show that a lower error can be expected when a Forward Algorithm is used for training, which learns a non-stationary policy successively for each time step. The idea probably is (I’m not too sure) that the data for the time step currently being learnt already contains the errors (leading to different states) that you would usually expect from the learnt policies, because for every time step new data is sampled based on the policies learnt so far. They transfer this idea to learning a stationary policy and propose SMILe (stochastic mixing iterative learning). In this algorithm the stationary policy is a linear combination of the policies learnt in previous iterations, where the initial policy is the desired one. The influence of the desired policy decreases exponentially with the number of iterations, and the weights of policies learnt later also decrease exponentially, but stay fixed in subsequent iterations, i.e. the policies learnt first will eventually have the largest weights. This makes sense, because they will most probably be closest to the desired policy (having seen mostly samples produced by the desired policy).
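Here is a minimal sketch of the mixing scheme as I read it; expert, train_policy and collect_trajectories are hypothetical placeholders, not the paper’s interface, and the details may well differ from the actual algorithm.

import random

def smile(expert, train_policy, collect_trajectories, n_iters, alpha):
    """Sketch of the SMILe mixing scheme (hypothetical interfaces).

    expert               -- the desired policy pi*, queried interactively
    train_policy         -- hypothetical: fits a policy to states labelled with expert actions
    collect_trajectories -- hypothetical: rolls out the given policy and returns the visited
                            states, each labelled with the action the expert would have taken
    """
    learned = []   # policies learnt so far
    weights = []   # their fixed mixture weights alpha * (1 - alpha)**(j - 1)

    def mixed_policy(state, n):
        # Execute the expert with probability (1 - alpha)**n, otherwise pick one of the
        # previously learnt policies with probability proportional to its fixed weight.
        if not learned or random.random() < (1 - alpha) ** n:
            return expert(state)
        return random.choices(learned, weights=weights)[0](state)

    for n in range(1, n_iters + 1):
        data = collect_trajectories(lambda s: mixed_policy(s, n - 1))
        learned.append(train_policy(data))
        weights.append(alpha * (1 - alpha) ** (n - 1))

    # Final policy: drop the expert component; random.choices renormalises the weights.
    return lambda s: random.choices(learned, weights=weights)[0](s)

If I understand correctly, the final policy drops the expert component and renormalises over the learnt policies, so it can be run without ever querying the expert again.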

The aim is to make the learnt policy more robust without using too many samples from the desired policy. I really wonder whether you could achieve exactly the same performance by simply sampling the desired policy additionally from randomly perturbed states and adding these as training points for learning a single policy. Depending on how expensive your learning algorithm is, this may be much faster in total (as you only have to learn once, on a larger data set). Of course, you then may not have the theoretical guarantees provided in the paper. Another drawback of the approach presented in the paper is that it must be possible to sample from the desired policy interactively during learning. I can’t imagine a scenario where this is practical (a human in the loop?).
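To make that alternative concrete, this is roughly what I have in mind (not the paper’s method); perturb, expert and fit are hypothetical:

def train_on_perturbed_expert(expert, demo_states, perturb, fit, n_perturbations=10):
    """Sketch of the alternative I have in mind (not the paper's method).

    expert  -- the desired policy, queried offline for action labels
    perturb -- hypothetical: returns a randomly perturbed copy of a state
    fit     -- hypothetical: supervised learner mapping states to expert actions
    """
    states, actions = [], []
    for s in demo_states:
        states.append(s)
        actions.append(expert(s))
        for _ in range(n_perturbations):
            sp = perturb(s)               # a state slightly off the demonstrated one
            states.append(sp)
            actions.append(expert(sp))    # the expert shows how to recover from it
    return fit(states, actions)           # a single supervised learning run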

I was interested in this because, in an extended abstract for a workshop (see attached files), the authors referred to this approach and also mentioned Langford2009 as a similar learning approach based on local updates. Also, you can see a policy as a differential equation, i.e. the results of the paper may also apply to learning dynamical systems without control inputs. The problems are certainly very similar.

They use a neural network to learn policies in the particular application they consider.

Generating coherent patterns of activity from chaotic neural networks.

Sussillo, D. and Abbott, L. F.
Neuron, 63:544–557, 2009
DOI, Google Scholar

Abstract

Neural circuits display complex activity patterns both spontaneously and when responding to a stimulus or generating a motor output. How are these two forms of activity related? We develop a procedure called FORCE learning for modifying synaptic strengths either external to or within a model neural network to change chaotic spontaneous activity into a wide variety of desired activity patterns. FORCE learning works even though the networks we train are spontaneously chaotic and we leave feedback loops intact and unclamped during learning. Using this approach, we construct networks that produce a wide variety of complex output patterns, input-output transformations that require memory, multiple outputs that can be switched by control inputs, and motor patterns matching human motion capture data. Our results reproduce data on premovement activity in motor and premotor cortex, and suggest that synaptic plasticity may be a more rapid and powerful modulator of network activity than generally appreciated.

Review

The authors present a new way of doing reservoir computing. The setup apparently (I haven’t read that paper) is very similar to the echo state networks of Herbert Jaeger (Jaeger and Haas, Science, 2004); the difference lies in the signal that is fed back to the reservoir from the output. While Jaeger fed back the target value f(t), here the network’s own prediction is fed back, which deviates from f(t) by the current error given the current weights and reservoir activity. Key to their approach is a weight update rule which almost instantaneously provides weights that minimise this error. While this obviously leads to a very high variability of the weights across time steps at the start of learning, they argue that this variability diminishes during learning and that the weights eventually stabilise, such that the target dynamics is reproduced when learning is switched off. They present a workaround which may make it possible to also learn non-periodic functions, but the method is clearly better suited for periodic functions.
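As far as I can tell, the near-instantaneous update is a recursive-least-squares step on the readout weights. A minimal sketch on a simple rate network, with illustrative parameters rather than the paper’s, might look like this (the readout z is fed back, so the reservoir sees the small residual error z - f rather than a clamped target):

import numpy as np

def force_readout_training(f_target, dt=0.1, T=5000, N=300, alpha=1.0, seed=0):
    """Minimal FORCE-style sketch on a simple rate network (illustrative parameters).

    The readout weights w are updated with a recursive-least-squares step so that
    z(t) = w . r(t) tracks the target f(t) almost from the first time step, while
    the readout z itself is fed back into the reservoir.
    """
    rng = np.random.default_rng(seed)
    g = 1.5                                              # gain: spontaneously chaotic regime
    J = g * rng.standard_normal((N, N)) / np.sqrt(N)     # recurrent reservoir weights (fixed)
    w_fb = 2.0 * rng.random(N) - 1.0                     # feedback weights from the readout (fixed)
    w = np.zeros(N)                                      # readout weights (learned)
    P = np.eye(N) / alpha                                # running inverse correlation matrix (RLS)
    x = 0.5 * rng.standard_normal(N)                     # network state
    r = np.tanh(x)                                       # firing rates
    z = 0.0                                              # readout

    for t in range(T):
        x += dt * (-x + J @ r + w_fb * z)                # leaky rate dynamics with output feedback
        r = np.tanh(x)
        z = w @ r

        # RLS update: push the readout error towards zero at every single step.
        Pr = P @ r
        k = Pr / (1.0 + r @ Pr)
        e = z - f_target(t * dt)                         # error before the weight update
        w -= e * k
        P -= np.outer(k, Pr)

    return w

Calling this with, say, f_target=lambda t: np.sin(t) is the kind of experiment I have in mind; whether the resulting weights mean anything without the feedback loop running is exactly the question below.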

I wonder how the learning is divided between the feedback mechanism and the weight adaptation (network model of Fig. 1A). In particular, it could well be that the feedback mechanism is solely responsible for successful learning, while the weights just settle to a more or less arbitrary setting once the dynamics is stabilised through the feedback (making the weights uninterpretable). The authors also report how the synapses within the reservoir can be adapted to reproduce the target dynamics when no feedback signal is given from the network output (structure in Fig. 1C). Curiously, the credit assignment problem is solved by ignoring it: for the adaptation of the reservoir synapses the same network-level output error is used as for the adaptation of the output weights.

It’s interesting that it works, but it would be good to know why and how it works. The authors’ main argument for why their proposal is better than echo state networks is that it is more stable. They present corresponding results in Fig. 4, but they never tell us what they mean by stable. So how stable are the dynamics learnt by FORCE? How much can you perturb the network dynamics before it stops being able to reproduce the target dynamics? In other words, how far off the desired dynamics can you initialise the network state?

They have an interesting principal component analysis of the network activity, suggesting that the dynamics converges to the same values of the first principal components for different starting states, but I haven’t understood it well enough during this first read to comment further on it.