School of Mathematics
I will introduce l-adic representations and what it means for them to be automorphic, talk about potential automorphy as an alternative to automorphy, explain what can currently be proved (but not how) and discuss what seem to me the important open problems. This should serve as an introduction to half the special year for non-number theorists. The other major theme will likely be the 'p-adic Langlands program', which I will not address (but perhaps someone else will).
I will describe the proof of the following surprising result: typical billiard paths form the most uniformly distributed family of curves in the unit square. I will justify this vague claim with a precise statement. As a byproduct, we obtain the counter-intuitive fact that the complexity of the test set is almost irrelevant: the error term is shockingly small, and it does not matter whether we test uniformity with a nice set (like a circle or a square) or with an arbitrarily ugly Lebesgue measurable subset of the unit square.
The condition number of a matrix is at the heart of numerical linear algebra. In the 1940s, von Neumann and Goldstine, motivated by the problem of inverting a matrix, posed the following question:
(1) What is the condition number of a random matrix?
Over the years, this question was raised again and again by various researchers (Smale, Demmel, etc.). About ten years ago, motivated by "Smoothed Analysis", Spielman and Teng posed a more general question:
(2) What is the condition number of a randomly perturbed matrix?
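As a concrete illustration (not part of the abstract itself), both questions can be probed numerically. The condition number is κ(A) = σ_max(A)/σ_min(A), computable via `numpy.linalg.cond`; the matrix size and noise scale below are arbitrary choices for the sketch, not values from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# (1) Condition number of a random matrix: i.i.d. standard Gaussian entries.
A = rng.standard_normal((n, n))
print(np.linalg.cond(A))

# (2) Condition number of a randomly perturbed matrix: start from a fixed,
# badly conditioned matrix M (here rank-1, hence exactly singular) and add
# small Gaussian noise of scale sigma. The perturbed matrix is almost surely
# invertible, with condition number governed by the noise scale.
M = np.ones((n, n))
sigma = 1e-3
print(np.linalg.cond(M + sigma * rng.standard_normal((n, n))))
```

Running this repeatedly with fresh seeds gives a sense of the typical magnitudes, which is exactly what the smoothed-analysis viewpoint of Spielman and Teng asks one to quantify.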
This lecture was part of the Institute for Advanced Study’s celebration of its eightieth anniversary, and took place during the events related to the Schools of Mathematics and Natural Sciences.
In this talk I will insult your intelligence by showing a non-original proof of the Central Limit Theorem, with not-particularly-good error bounds. However, the proof is very simple and flexible, allowing generalizations to multidimensional and higher-degree invariance principles. Time permitting, I will also discuss applications to areas of theoretical computer science: property testing, derandomization, learning, and inapproximability.