Efficient Reductions for Imitation Learning.

Ross, S. and Bagnell, D.
in: JMLR W&CP 9: AISTATS 2010, pp. 661–668, 2010

Abstract

Imitation Learning, while applied successfully on many large real-world problems, is typically addressed as a standard supervised learning problem, where it is assumed the training and testing data are i.i.d.. This is not true in imitation learning as the learned policy influences the future test inputs (states) upon which it will be tested. We show that this leads to compounding errors and a regret bound that grows quadratically in the time horizon of the task. We propose two alternative algorithms for imitation learning where training occurs over several episodes of interaction. These two approaches share in common that the learner’s policy is slowly modified from executing the expert’s policy to the learned policy. We show that this leads to stronger performance guarantees and demonstrate the improved performance on two challenging problems: training a learner to play 1) a 3D racing game (Super Tux Kart) and 2) Mario Bros.; given input images from the games and corresponding actions taken by a human expert and near-optimal planner respectively.

Review

The authors note that previous approaches to learning a policy from an example policy are limited: the learner only sees successful examples generated by the desired policy and will therefore exhibit a larger error than expected from supervised learning on independent samples, because a single error can propagate through the sequence of decisions if the policy has not learnt how to recover once a mistake has occurred. They then show that a lower error can be expected when a Forward Algorithm is used for training, which learns a non-stationary policy successively for each time step. The idea, presumably (I'm not entirely sure), is that the data for the time step currently being learnt already contains the errors (leading to different states) you would usually expect from the learnt policies, because for every time step new data is sampled based on the policies already learnt. They transfer this idea to learning a stationary policy and propose SMILe (stochastic mixing iterative learning). In this algorithm the stationary policy is a linear combination of policies learnt in previous iterations, where the initial policy is the desired one. The influence of the desired policy decreases exponentially with the number of iterations; the weights of policies learnt later also decrease exponentially, but stay fixed in subsequent iterations, i.e. the policies learnt first eventually have the largest weights. This makes sense, because they will most probably be closest to the desired policy (having seen mostly samples produced by it).
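The weighting scheme as I understand it can be sketched as follows (a toy reconstruction from my description above, not the paper's code; `alpha` stands for SMILe's mixing parameter):

```python
def smile_weights(n_iters, alpha):
    """Mixture weights of the SMILe policy after n_iters iterations.

    The expert (desired) policy keeps weight (1 - alpha)**n_iters,
    which decays to zero; the policy learnt at iteration i keeps the
    fixed weight alpha * (1 - alpha)**(i - 1), so the policies learnt
    first end up with the largest weights among the learnt ones.
    """
    expert_weight = (1 - alpha) ** n_iters
    learned_weights = [alpha * (1 - alpha) ** (i - 1)
                       for i in range(1, n_iters + 1)]
    return expert_weight, learned_weights


# After a few iterations the weights still form a probability
# distribution over the mixed policies:
expert_w, learned_w = smile_weights(n_iters=5, alpha=0.3)
```

Note that the geometric weights always sum to one, so the mixture remains a valid stochastic policy at every iteration.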

The aim is to make the learnt policy more robust without using too many samples from the desired policy. I really wonder whether you could achieve exactly the same performance by simply sampling the desired policy additionally from randomly perturbed states and adding these as training points for learning a single policy. Depending on how expensive your learning algorithm is, this may be much faster in total (as you only have to learn once, on a larger data set). Of course, you then may not have the theoretical guarantees provided in the paper. Another drawback of the approach presented in the paper is that it must be possible to sample from the desired policy interactively during learning. I can't imagine a scenario where this is practical (a human in the loop?).
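The alternative I have in mind might look like this (purely my own sketch, not from the paper; `expert`, `sample_state`, and `perturb` are hypothetical placeholders):

```python
def collect_perturbed_dataset(expert, sample_state, perturb, n_samples):
    """Query the desired policy from randomly perturbed states and
    collect (state, expert action) pairs; a single policy would then be
    trained once on the aggregated data set, instead of iterating
    between learner and expert as in the paper.
    """
    data = []
    for _ in range(n_samples):
        s = perturb(sample_state())   # push the state off the expert's path
        data.append((s, expert(s)))   # the expert shows how to recover
    return data
```

Whether the perturbation distribution covers the states the learnt policy actually visits is exactly what the paper's interactive scheme guarantees and this one does not.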

I was interested in this because, in an extended abstract for a workshop (see attached files), the authors referred to this approach and also mentioned Langford2009 as a similar learning approach based on local updates. You can also view the policy as a differential equation, i.e. the results of the paper may also apply to learning dynamical systems without control inputs. The problems are certainly very similar.

They use a neural network to learn policies in the particular application they consider.

The Neural Costs of Optimal Control.

Gershman, S. J. and Wilson, R. C.
in: Advances in Neural Information Processing Systems 23, 2010

Abstract

Optimal control entails combining probabilities and utilities. However, for most practical problems, probability densities can be represented only approximately. Choosing an approximation requires balancing the benefits of an accurate approximation against the costs of computing it. We propose a variational framework for achieving this balance and apply it to the problem of how a neural population code should optimally represent a distribution under resource constraints. The essence of our analysis is the conjecture that population codes are organized to maximize a lower bound on the log expected utility. This theory can account for a plethora of experimental data, including the reward-modulation of sensory receptive fields, GABAergic effects on saccadic movements, and risk aversion in decisions under uncertainty.

Review

Within the area of decision theory they consider the problem of evaluating the expected utility of an action given a posterior. They propose a variational framework analogous to the one used for the EM algorithm, in which the utility replaces the likelihood and the posterior replaces the prior. Their main contribution is to include a cost penalising the complexity of the approximation of the posterior, and to use the resulting lower bound on the expected utility to simultaneously optimise the density used to approximate the posterior. As the objective contains not only the cost of the approximation but also the actual utility of an action in the considered states, the model predicts that the approximating density should also reflect what is behaviourally relevant, rather than only trying to represent the natural posterior distribution as accurately as possible.
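The core of the bound can be sketched via Jensen's inequality (my own notation, reconstructed from the abstract's description; the paper's full objective additionally subtracts a resource cost for representing the approximate density $q$):

```latex
\log \mathbb{E}_{p(s)}\left[U(s)\right]
  = \log \sum_s q(s)\,\frac{p(s)\,U(s)}{q(s)}
  \;\ge\; \mathbb{E}_{q(s)}\left[\log U(s)\right]
          - \mathrm{KL}\!\left(q \,\|\, p\right)
```

Maximising the right-hand side over $q$ trades off staying close to the natural posterior $p$ (the KL term) against concentrating probability mass on high-utility states (the first term).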

In the results section they then show that under this model the approximated posterior can (and will) indeed put more probability mass on states with larger utility, something which has apparently been found in grasshoppers. Additionally, they show that increasing the cost of spikes results in smaller firing rates, which, as they argue, leads to response latencies as seen in experiments. Finally, they show that, under the assumption that high-utility or very costly states are rare, the model automatically accounts for the nonlinear weighting of probabilities in risky choice observed in humans. The model therefore explains this irrational behaviour by noting that "under neural resource constraints, the approximate density will be biased towards high reward regions of the state space."
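For a discrete state space, and dropping the resource-cost term, the optimum of the bound E_q[log U] − KL(q‖p) even has a closed form, q*(s) ∝ p(s)·U(s), which makes the bias towards high-reward states explicit (a toy illustration of my own, not the paper's neural model):

```python
def biased_posterior(p, U):
    """Maximiser of E_q[log U] - KL(q || p) over discrete distributions q.

    Setting the derivative of the Lagrangian (with a normalisation
    constraint on q) to zero gives q*(s) proportional to p(s) * U(s):
    the approximate density overweights states in proportion to their
    utility.
    """
    weights = [p_s * u_s for p_s, u_s in zip(p, U)]
    total = sum(weights)
    return [w / total for w in weights]


# Two equally likely states, but the second is three times as rewarding:
q = biased_posterior(p=[0.5, 0.5], U=[1.0, 3.0])
# q shifts probability mass towards the high-utility state: q = [0.25, 0.75]
```

With the resource cost added back in, the optimum is no longer this simple product, but the qualitative bias survives.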

I don’t know enough to judge the correspondence of the model’s behaviour/predictions with the experimental results, or whether the model contradicts other findings. However, the paper is quite inspiring in the sense that it presents an intuitive idea which potentially has big implications for how populations of neurons code probability distributions, namely that the neuronal codes are influenced as much by expected rewards as by the natural distribution. Of course, the paper leaves many questions open. The authors only show results for the case where the approximate distribution is optimised alone, but what happens when actions and distribution are optimised simultaneously? What are the timescales of the distribution optimisation? Is it really instantaneous (on the same timescale as action selection), as the authors indicate, or is it rather a slower process? Their proposal also has the potential to explain how the dimensionality of the state space can be reduced by considering only states which are behaviourally relevant. However, it remains unclear to what extent this specialisation should be implemented. In other words, is the posterior dependent, e.g., on the precise goal in the task, or is it rather only dependent on the selected task? The cost-of-spiking example in particular suggests a connection between the proposed mechanism and attention. Can attention be explained by this low-level description of biased representations of the posterior distribution?

It is the kind of paper that makes you wonder why nobody has made these ideas explicit before. Or maybe somebody has: Maneesh Sahani had a NIPS paper in 2004, which they cite but do not comment on, and which (judging from the abstract) looks very similar to what they do here.