School of Mathematics

Locally symmetric spaces: $p$-adic aspects

Laurent Fargues
Institut de Mathématiques de Jussieu
November 30, 2017
$p$-adic period spaces were introduced by Rapoport and Zink as a generalization of Drinfeld upper half spaces and Lubin-Tate spaces. These are open subsets of a rigid analytic $p$-adic flag manifold. An approximation of this open subset is the so-called weakly admissible locus, obtained by removing a profinite set of closed Schubert varieties. I will explain a recent theorem characterizing when the period space coincides with the weakly admissible locus. The proof consists of a thorough study of modifications of $G$-bundles on the curve.

Nonuniqueness of weak solutions to the Navier-Stokes equation

Tristan Buckmaster
Princeton University
November 29, 2017
For initial data of finite kinetic energy, Leray proved in 1934 that there exists at least one global-in-time finite energy weak solution of the 3D Navier-Stokes equations. In this talk, I will discuss very recent joint work with Vlad Vicol in which we prove that weak solutions of the 3D Navier-Stokes equations are not unique in the class of weak solutions with finite kinetic energy.
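For reference, the 3D Navier-Stokes equations in their standard form (standard notation, not part of the original abstract) are
$$\partial_t u + (u \cdot \nabla) u + \nabla p = \nu \Delta u, \qquad \nabla \cdot u = 0,$$
where $u$ is the velocity field, $p$ the pressure, and $\nu > 0$ the viscosity; "finite kinetic energy" means $u(\cdot, t) \in L^2$.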

Lattices: from geometry to cryptography

Oded Regev
New York University
November 29, 2017
Lattices are periodic arrangements of points in space that have attracted the attention of mathematicians for over two centuries. They have recently become an object of even greater interest due to their remarkable applications in cryptography. In this talk we will survey some of this progress and describe the somewhat mysterious role that quantum computing plays in the area.
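To fix notation (a standard definition, not part of the abstract): the lattice generated by linearly independent vectors $b_1, \dots, b_n \in \mathbb{R}^n$ is the set of their integer combinations,
$$\mathcal{L}(b_1, \dots, b_n) = \Big\{ \sum_{i=1}^{n} a_i b_i : a_i \in \mathbb{Z} \Big\}.$$
The presumed hardness of computational problems on lattices, such as finding a shortest nonzero vector of $\mathcal{L}$, underlies the cryptographic applications.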

Automorphic forms and motivic cohomology III

Akshay Venkatesh
Stanford University; Distinguished Visiting Professor, School of Mathematics
November 28, 2017
In these lectures I will formulate a conjecture asserting that there is a hidden action of certain motivic cohomology groups on the cohomology of arithmetic groups. One can construct this action, tensored with $\mathbb C$, using differential forms. One can also construct it, tensored with $\mathbb Q_p$, using a derived version of the Hecke algebra (or a derived version of the Galois deformation rings).

Shimura curves and new abc bounds

Hector Pasten
Harvard University
November 28, 2017
Existing unconditional progress on the abc conjecture and Szpiro's conjecture is rather limited and comes from essentially only two approaches: the theory of linear forms in $p$-adic logarithms, and bounds for the degree of modular parametrizations of elliptic curves obtained from congruences of modular forms. In this talk I will discuss a new approach, as well as some unconditional results that it yields.

Everything you wanted to know about machine learning but didn't know whom to ask

Sanjeev Arora
Princeton University; Visiting Professor, School of Mathematics
November 27, 2017
This talk will be an extended and more technical version of my brief public lecture: https://www.ias.edu/ideas/2017/arora-zemel-machine-learning

I will present some of the basic ideas of machine learning, focusing on the mathematical formulations. Then I will take audience questions.

Open Gopakumar-Vafa conjecture for rational elliptic surfaces

Yu-Shen Lin
Harvard University
November 27, 2017
We will explain a definition of open Gromov-Witten invariants on rational elliptic surfaces and the connection of these invariants with tropical geometry. For certain rational elliptic surfaces coming from a meromorphic Hitchin system, we will show that the open Gromov-Witten invariants with boundary conditions near infinity (up to some transformation) coincide with the closed geodesic counting invariants defined by Gaiotto-Moore-Neitzke, which are integer-valued.

A practical guide to deep learning

Richard Zemel
University of Toronto; Visitor, School of Mathematics
November 21, 2017
Neural networks have been around for many decades. An important question is what has led to their recent surge in performance and popularity. I will start with an introduction to deep neural networks, covering the terminology and standard approaches to constructing networks. I will focus on the two primary, very successful forms of networks: deep convolutional nets, as originally developed for vision problems; and recurrent networks, for speech and language tasks.
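As a minimal illustration of the convolution operation at the heart of convolutional networks (a sketch in plain Python, not drawn from the talk; `conv2d` is a hypothetical helper name):

```python
def conv2d(image, kernel):
    # "Valid" 2D cross-correlation: slide the kernel over the image and
    # take the elementwise product-sum at each position. This is the core
    # operation of a convolutional layer (biases and nonlinearity omitted).
    kh, kw = len(kernel), len(kernel[0])
    h = len(image) - kh + 1
    w = len(image[0]) - kw + 1
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(w)]
            for i in range(h)]

# A 2x2 vertical-edge detector applied to a tiny image with a dark-to-bright
# transition between columns 1 and 2: the response peaks at the edge.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[-1, 1],
          [-1, 1]]
print(conv2d(image, kernel))  # [[0, 2, 0], [0, 2, 0]]
```

In a deep convolutional net, many such learned kernels are stacked in layers, so early layers detect edges and later layers detect increasingly abstract patterns.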