School of Mathematics

Iwasawa theory and Bloch-Kato conjecture for unitary groups

Xin Wan
Morningside Center of Mathematics, Chinese Academy of Sciences
May 21, 2020
We describe a new method to study Eisenstein families and Iwasawa theory on unitary groups over totally real fields of general signature. As a consequence, we prove that if the central L-value of a cuspidal eigenform on the unitary group, twisted by a CM character, is 0, then the corresponding Selmer group has positive rank. As a byproduct, the method also yields p-adic functional equations for p-adic L-functions and p-adic families of Eisenstein series on unitary groups.
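In schematic form (notation introduced here for illustration, not taken verbatim from the abstract), the main arithmetic consequence reads

    L(1/2, \pi \otimes \chi) = 0  \Longrightarrow  \mathrm{rank}\,\mathrm{Sel}(\pi \otimes \chi) \geq 1,

where \pi is the cuspidal eigenform on the unitary group, \chi is the CM character, and Sel denotes the associated Selmer group, with normalizations as in the talk.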

Forecasting Epidemics and Pandemics

Roni Rosenfeld
Carnegie Mellon University
May 21, 2020
Epidemiological forecasting is critically needed for decision making by national and local governments, public health officials, healthcare institutions and the general public. The Delphi group at Carnegie Mellon University was founded in 2012 to advance the theory and technological capability of epidemiological forecasting, and to promote its role in decision making, both public and private. Our long-term vision is to make epidemiological forecasting as useful and universally accepted as weather forecasting is today.

Neural SDEs: Deep Generative Models in the Diffusion Limit

Maxim Raginsky
University of Illinois Urbana-Champaign
May 19, 2020
In deep generative models, the latent variable is generated by a time-inhomogeneous Markov chain, where at each time step we pass the current state through a parametric nonlinear map, such as a feedforward neural net, and add a small independent Gaussian perturbation. In this talk, based on joint work with Belinda Tzen, I will discuss the diffusion limit of such models, where we increase the number of layers while sending the step size and the noise variance to zero.
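As a rough illustration (a minimal sketch, not code from the talk; the drift mu, step size h, and noise scale sigma are placeholder choices), the discrete-time chain described above can be simulated as follows; shrinking h while increasing the number of layers yields the diffusion limit dX_t = mu(X_t, t) dt + sigma dW_t.

    import numpy as np

    def generate(x0, mu, sigma, n_layers, h, rng):
        # Time-inhomogeneous Markov chain: each layer applies a parametric
        # nonlinear map (the stand-in `mu`) and adds a small independent
        # Gaussian perturbation, scaled so that a diffusion limit exists.
        x = np.array(x0, dtype=float)
        for k in range(n_layers):
            x = x + h * mu(x, k * h) + np.sqrt(h) * sigma * rng.normal(size=x.shape)
        return x

    # toy drift standing in for a feedforward neural net
    mu = lambda x, t: np.tanh(x) - 0.5 * x
    sample = generate(np.zeros(2), mu, sigma=0.1, n_layers=100, h=0.01,
                      rng=np.random.default_rng(0))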

The Non-Stochastic Control Problem

Elad Hazan
Princeton University
May 18, 2020
Linear dynamical systems are a continuous subclass of reinforcement learning models that are widely used in robotics, finance, engineering, and meteorology. Classical control, since the work of Kalman, has focused on dynamics with Gaussian i.i.d. noise, quadratic loss functions and, in terms of provably efficient algorithms, known systems and observed state. We'll discuss how to apply new machine learning methods that relax all of the above: efficient control with adversarial noise, general loss functions, unknown systems, and partial observation.
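Schematically (notation introduced here, not quoted from the talk), the non-stochastic control setting keeps the linear dynamics but drops the classical assumptions:

    x_{t+1} = A x_t + B u_t + w_t,    minimize \sum_{t=1}^{T} c_t(x_t, u_t),

where the perturbations w_t may be chosen adversarially rather than i.i.d. Gaussian, the losses c_t may be arbitrary convex functions rather than fixed quadratics, the matrices A and B may be unknown, and the state may be only partially observed; performance is then measured by regret against a fixed comparator class of policies.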

Reflections on Cylindrical Contact Homology

Jo Nelson
Rice University
May 15, 2020
This talk begins with a light introduction, including some historical anecdotes, to motivate the development of this Floer-theoretic machinery for contact manifolds some 25 years ago. I will discuss joint work with Hutchings which constructs a nonequivariant and a family Floer equivariant version of contact homology. Both theories are generated by two copies of each Reeb orbit over Z and capture interesting torsion information.

MathZero, The Classification Problem, and Set-Theoretic Type Theory

David McAllester
Toyota Technological Institute at Chicago
May 14, 2020
AlphaZero learns to play go, chess and shogi at a superhuman level through self play given only the rules of the game. This raises the question of whether a similar thing could be done for mathematics --- a MathZero. MathZero would require a formal foundation and an objective. We propose the foundation of set-theoretic dependent type theory and an objective defined in terms of the classification problem --- the problem of classifying concept instances up to isomorphism. Isomorphism is central to the structure of mathematics.
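To make "isomorphism" concrete with a standard example (an illustration chosen here, not taken from the talk): two graphs G and H are isomorphic when there is a bijection f: V(G) -> V(H) such that {u, v} \in E(G) if and only if {f(u), f(v)} \in E(H); classifying concept instances up to isomorphism means treating such pairs as the same instance.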

Convex Set Disjointness, Distributed Learning of Halfspaces, and Linear Programming

Shay Moran
Member, School of Mathematics
May 12, 2020
Distributed learning protocols are designed to train on distributed data without gathering it all on a single centralized machine, thus contributing to the efficiency of the system and enhancing its privacy. We study a central problem in distributed learning, called Distributed Learning of Halfspaces: let U \subset R^d be a known domain of size n and let h: R^d -> R be an unknown target affine function. A set of examples {(u,b)} is distributed among several parties, where u \in U is a point and b = sign(h(u)) \in {-1, +1} is its label.
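As a minimal illustration of this setup (purely illustrative; the dimension, domain size, party count, and variable names are placeholder choices), the labeled examples could be generated and split as follows:

    import numpy as np

    rng = np.random.default_rng(0)
    d, n, n_parties = 3, 1000, 4
    U = rng.normal(size=(n, d))                 # the known domain U of size n
    w, b0 = rng.normal(size=d), rng.normal()    # unknown affine target h(u) = <w, u> + b0
    labels = np.where(U @ w + b0 >= 0, 1, -1)   # b = sign(h(u)) in {-1, +1}

    # distribute the labeled examples {(u, b)} among the parties
    chunks = np.array_split(rng.permutation(n), n_parties)
    party_data = [[(U[i], labels[i]) for i in idx] for idx in chunks]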

Quantitative decompositions of Lipschitz mappings

Guy C. David
Ball State University
May 12, 2020
Given a Lipschitz map, it is often useful to chop the domain into pieces on which the map has simple behavior. For example, depending on the dimensions of source and target, one may ask for pieces on which the map behaves like a bi-Lipschitz embedding or like a linear projection. For many applications, it is even more useful if this decomposition is quantitative, i.e., with bounds independent of the particular map or spaces involved.
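For instance (a standard formulation, not quoted from the abstract), asking that the map f behave like a bi-Lipschitz embedding on a piece E, quantitatively, means asking for a constant C independent of the particular map such that

    C^{-1} |x - y| \leq |f(x) - f(y)| \leq C |x - y|   for all x, y \in E.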