Universality in numerical computations with random data.

Deift, P. A., Menon, G., Olver, S., and Trogdon, T.
Proc Natl Acad Sci U S A, 111:14973–14978, 2014


The authors present evidence for universality in numerical computations with random data. Given a (possibly stochastic) numerical algorithm with random input data, the time (or number of iterations) to convergence (within a given tolerance) is a random variable, called the halting time. Two-component universality is observed for the fluctuations of the halting time: the histogram for the halting times, centered by the sample average and scaled by the sample variance, collapses to a universal curve, independent of the input data distribution, as the dimension increases. Thus, up to two components, the sample average and the sample variance, the statistics for the halting time are universally prescribed. The case studies include six standard numerical algorithms as well as a model of neural computation and decision-making. A link to relevant software is provided for readers who would like to do computations of their own.


The authors show that normalised halting (stopping) times follow common distributions. Stopping times are assumed to be generated by an algorithm A run on inputs drawn from a random ensemble E, where E does not denote the particular sample from which stopping times are generated but the theoretical distribution from which that sample is drawn. Normalisation is the standard one: subtract the mean and divide by the standard deviation of a sample of stopping times. The resulting distribution is the same across different ensembles E, but differs across algorithms A. The authors call this sameness of distributions (two-component) universality, without explaining why they call it that; there is also no reference for the concept of universality. Perhaps it's a term common in physics, or perhaps it's explained in their first reference. The reference numbers are shifted by one, by the way.
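To make the setup concrete, here is a minimal sketch of the normalisation procedure. Conjugate gradient on random sample covariance matrices is among the algorithm/ensemble pairs the paper studies, but the sizes, tolerances, and entry distributions below are my own illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def cg_halting_time(A, b, tol=1e-8):
    """Number of conjugate gradient iterations until the residual norm < tol."""
    n = len(b)
    x = np.zeros(n)
    r = b.copy()          # residual for the initial guess x = 0
    p = r.copy()
    rs = r @ r
    for k in range(10 * n):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            return k + 1
        p = r + (rs_new / rs) * p
        rs = rs_new
    return 10 * n

def halting_times(entry_sampler, n=40, m=80, trials=150):
    """Halting times of the algorithm A on inputs from ensemble E (here W W^T / m)."""
    times = np.empty(trials)
    for i in range(trials):
        W = entry_sampler((n, m))
        A = W @ W.T / m
        b = rng.standard_normal(n)
        times[i] = cg_halting_time(A, b / np.linalg.norm(b))
    return times

def normalise(t):
    """Standard normalisation: subtract the sample mean, divide by the sample std."""
    return (t - t.mean()) / t.std()

# Two ensembles E: Gaussian vs. centred Bernoulli (+/-1) matrix entries.
z_gauss = normalise(halting_times(rng.standard_normal))
z_bern = normalise(halting_times(lambda s: rng.choice([-1.0, 1.0], size=s)))
# Two-component universality predicts that the histograms of z_gauss and
# z_bern collapse onto one curve as n grows.
```

After normalisation the two samples have mean 0 and standard deviation 1 by construction; the nontrivial claim is that the entire shape of their distributions coincides in the large-dimension limit.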

How is this interesting? I'm not sure. The authors give an example with a model of reaction times. This is a kind of Ising model in which a decision is made once a sufficient number of binary states have switched to one of the two states. States flip with a probability determined by a given function of the current state of the whole Ising model. When different such functions were considered, corresponding to different ensembles E, the normalised reaction times again followed the same distribution. However, the distribution of normalised reaction times differed for different total numbers of binary states in the Ising model. These results suggest that normalised reaction times should follow the same distribution across subjects, but only if subjects differ at most in the randomness on which their decisions are based. If subjects use slightly different algorithms for making decisions, you would expect differences in the distributions of their normalised reaction times. I guess it would be cool to infer that subjects use the same (or a different) algorithm purely from their reaction time distributions, but what would be an appropriate test for this, and what would be its power?
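A minimal sketch of what such a comparison could look like. Everything here is hypothetical: a toy switch-to-threshold model merely in the spirit of the description above (not the paper's actual neural model), and a hand-rolled two-sample Kolmogorov–Smirnov statistic, which would be one natural candidate for the distributional test:

```python
import numpy as np

rng = np.random.default_rng(1)

def reaction_time(switch_prob, n=100, threshold=0.8, max_steps=10_000):
    """Steps until a `threshold` fraction of n binary units has switched to 1.

    `switch_prob` maps the current fraction of switched units to the
    per-step switching probability of the remaining units; it plays the
    role of the ensemble-defining function. Units stay at 1 once switched
    (a simplification of the flip dynamics described above).
    """
    state = np.zeros(n, dtype=bool)
    for t in range(1, max_steps + 1):
        p = switch_prob(state.mean())
        state |= rng.random(n) < p
        if state.mean() >= threshold:
            return t
    return max_steps

def normalise(t):
    return (t - t.mean()) / t.std()

def ks_stat(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: max gap between empirical CDFs."""
    grid = np.sort(np.concatenate([x, y]))
    cdf_x = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    cdf_y = np.searchsorted(np.sort(y), grid, side="right") / len(y)
    return np.abs(cdf_x - cdf_y).max()

# Two "subjects": same algorithm, but different ensemble-defining functions.
f1 = lambda frac: 0.01 + 0.05 * frac          # mild positive feedback
f2 = lambda frac: 0.02 + 0.03 * frac ** 2     # a different feedback rule
z1 = normalise(np.array([reaction_time(f1) for _ in range(300)]))
z2 = normalise(np.array([reaction_time(f2) for _ in range(300)]))
d = ks_stat(z1, z2)   # small d: the normalised distributions look alike
```

To get at the power question, one would repeat this under "same algorithm" versus "different algorithm" scenarios and track how often d exceeds the KS critical value; with only hundreds of trials per subject, distinguishing nearby universal curves may well require large samples.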
