The Geography of Immersed Lagrangian Fillings of Legendrian Submanifolds

Lisa Traynor
April 24, 2020
Given a smooth knot K in the 3-sphere, a classic question in knot theory is: What surfaces in the 4-ball have boundary equal to K? One can also consider immersed surfaces and ask a “geography” question: What combinations of genus and double points can be realized by surfaces with boundary equal to K? I will discuss symplectic analogues of these questions: Given a Legendrian knot, what Lagrangian surfaces can it bound? What immersed Lagrangian surfaces can it bound?

Deep Generative models and Inverse Problems

Alexandros Dimakis
University of Texas at Austin
April 23, 2020
Modern deep generative models like GANs, VAEs and invertible flows are showing amazing results on modeling high-dimensional distributions, especially for images. We will show how they can be used to solve inverse problems by generalizing compressed sensing beyond sparsity. We will present the general framework, new results and open problems in this space.
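
As a rough illustration of the "compressed sensing beyond sparsity" idea (my own sketch, not code from the talk): instead of assuming the signal is sparse, one assumes it lies in the range of a generator G and recovers it by searching over the latent code that best explains the measurements. In the toy sketch below the generator is just a fixed random linear map standing in for a trained GAN or VAE decoder, and all names and dimensions are made up:

import numpy as np

rng = np.random.default_rng(0)

# Dimensions: latent code k, signal n, measurements m < n (underdetermined problem).
k, n, m = 10, 200, 50

# Toy "generator": a fixed random linear map standing in for a trained decoder G(z).
B = rng.normal(size=(n, k)) / np.sqrt(n)

def G(z):
    return B @ z

# Ground-truth signal lying in the range of the generator.
z_true = rng.normal(size=k)
x_true = G(z_true)

# Random Gaussian measurements y = A x (no sparsity assumption on x itself).
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true

# Recover by gradient descent on the latent code:  min_z || A G(z) - y ||^2.
M = A @ B                                       # effective forward map acting on z
step = 1.0 / (2.0 * np.linalg.norm(M, 2) ** 2)  # safe step size for this quadratic
z = np.zeros(k)
for _ in range(2000):
    z -= step * 2.0 * M.T @ (M @ z - y)

x_hat = G(z)
print("relative reconstruction error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))

With a nonlinear, pretrained generator the same recipe applies, with the gradient computed by backpropagation through the network rather than by the closed-form expression used here.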

Geodesically Convex Optimization (or, can we prove P!=NP using gradient descent?)

Avi Wigderson
Herbert H. Maass Professor, School of Mathematics
April 21, 2020
This talk aims to summarize a project I was involved in during the past 5 years, with the hope of explaining our most complete understanding so far, as well as challenges and open problems. The main messages of this project are summarized below; I plan to describe, through examples, many of the concepts they refer to, and the evolution of ideas leading to them. No special background is assumed.

A variational approach to the regularity theory for the Monge-Ampère equation

Felix Otto
Max Planck Institute Leipzig
April 20, 2020
We present a purely variational approach to the regularity theory for the Monge-Ampère equation, or rather optimal transportation, introduced with M. Goldman. Following De Giorgi’s philosophy for the regularity theory of minimal surfaces, it is based on the approximation of the displacement by a harmonic gradient, which leads to a One-Step Improvement Lemma, and feeds into a Campanato iteration at the C^{1,α} level for the displacement, capitalizing on affine invariance.
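
As background for the "harmonic gradient" approximation mentioned above (standard facts, not claims from the abstract): if \(\nabla u\) transports a density \(f\) to a density \(g\), the change of variables formula gives the Monge-Ampère equation
\[
f(x) \;=\; g(\nabla u(x))\,\det D^2 u(x).
\]
Writing the displacement potential as \(\varphi(x) = u(x) - \tfrac12 |x|^2\), so that the displacement is \(\nabla u(x) - x = \nabla \varphi(x)\), the regime \(f \approx g \approx 1\) linearizes to
\[
1 \;\approx\; \det\big(\mathrm{Id} + D^2\varphi\big) \;\approx\; 1 + \Delta\varphi,
\qquad\text{i.e.}\qquad \Delta\varphi \approx 0,
\]
which is why the displacement is well approximated by a harmonic gradient.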

Equivariant quantum operations and relations between them

Nicholas Wilkins
University of Bristol
April 17, 2020
There is growing interest in looking at operations on quantum cohomology that take into account symmetries in the holomorphic spheres (such as the quantum Steenrod powers, using a Z/p-symmetry). In order to prove relations between them, one needs to generalise this to include equivariant operations with more marked points, varying domains and different symmetry groups. We will look at the general method of construction of these operations, as well as two distinct examples of relations between them.

A Tutorial on Entanglement Island Computations

Raghu Mahajan
Member, School of Natural Sciences, Institute for Advanced Study
April 17, 2020
In this talk we will present details of quantum extremal surface computations in a simple setup, demonstrating the role of entanglement islands in resolving entropy paradoxes in gravity. The setup involves eternal AdS_2 black holes in thermal equilibrium with auxiliary bath systems. We will also describe the extension of this setup to higher dimensions using Randall-Sundrum branes.
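
For orientation, the schematic form of the quantum extremal surface / island rule that such computations implement (standard background, not a result specific to this talk) is
\[
S(R) \;=\; \min_{I}\,\operatorname{ext}_{I}
\left[\, \frac{\mathrm{Area}(\partial I)}{4 G_N} \;+\; S_{\text{semi-cl}}\big(R \cup I\big) \right],
\]
where the extremization is over candidate islands \(I\) in the gravitating region; the entropy paradoxes are resolved when a nontrivial island comes to dominate at late times.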

Modularity, Attention and Credit Assignment: Efficient information dispatching in neural computations

Anirudh Goyal
April 16, 2020
Physical processes in the world often have a modular structure, with complexity emerging through combinations of simpler subsystems. Machine learning seeks to uncover and use regularities in the physical world. Although these regularities manifest themselves as statistical dependencies, they are ultimately due to dynamic processes governed by physics. These processes are often independent and interact only sparsely. Despite this, most machine learning models employ the opposite inductive bias, i.e., that all processes interact.
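
A minimal sketch of the alternative inductive bias, sparse interaction mediated by attention (my own illustration with made-up names and sizes, not the speaker's architecture): a set of independent modules scores a shared input with key-query attention, and only the top-k most relevant modules update their state at each step.

import numpy as np

rng = np.random.default_rng(1)

n_modules, d_state, d_in, d_key, k_active = 6, 16, 8, 12, 2

# Each module has its own parameters: a query map on its state and an update map.
W_q = rng.normal(size=(n_modules, d_state, d_key)) * 0.1
W_u = rng.normal(size=(n_modules, d_in, d_state)) * 0.1
# Shared key map on the input.
W_k = rng.normal(size=(d_in, d_key)) * 0.1

def step(states, x):
    """One dispatch step: score every module against the input, update only the
    top-k scorers, and leave the remaining module states untouched (sparse interaction)."""
    key = x @ W_k                                        # (d_key,)
    queries = np.einsum('ms,msk->mk', states, W_q)       # (n_modules, d_key)
    scores = queries @ key / np.sqrt(d_key)              # relevance of the input to each module
    active = np.argsort(scores)[-k_active:]              # indices of the k most relevant modules
    new_states = states.copy()
    for m in active:
        new_states[m] = np.tanh(states[m] + x @ W_u[m])  # only active modules are updated
    return new_states, active

states = np.zeros((n_modules, d_state))
for t in range(5):
    x = rng.normal(size=d_in)
    states, active = step(states, x)
    print(f"step {t}: active modules {sorted(active.tolist())}")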

Tradeoffs between Robustness and Accuracy

Percy Liang
April 16, 2020
Standard machine learning produces models that are highly accurate on average but that degrade dramatically when the test distribution deviates from the training distribution. While one can train robust models, this often comes at the expense of standard accuracy (on the training distribution). We study this tradeoff in two settings, adversarial examples and minority groups, creating simple examples which highlight generalization issues as a major source of this tradeoff.
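
To make the two quantities in the tradeoff concrete, here is a toy sketch (my own, not the experiments from the talk): logistic regression trained either on clean data or on FGSM-perturbed data, then evaluated on clean inputs (standard accuracy) and on perturbed inputs (robust accuracy). Whether the tradeoff appears numerically depends on the data; the sketch only fixes the mechanics.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary classification data: noisy linear labels.
n, d = 2000, 20
w_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = (X @ w_star + 0.5 * rng.normal(size=n) > 0).astype(float)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def fgsm(X, y, w, eps):
    """FGSM perturbation: move each input by eps in the sign of the input-gradient
    of the logistic loss; for a linear model this gradient is (p - y) * w."""
    p = sigmoid(X @ w)
    return X + eps * np.sign(np.outer(p - y, w))

def train(X, y, eps=0.0, steps=500, lr=0.1):
    """Logistic regression by gradient descent; eps > 0 trains on FGSM-perturbed inputs."""
    w = np.zeros(d)
    for _ in range(steps):
        X_t = fgsm(X, y, w, eps) if eps > 0 else X
        p = sigmoid(X_t @ w)
        w -= lr * X_t.T @ (p - y) / len(y)
    return w

def accuracy(X, y, w):
    return np.mean((sigmoid(X @ w) > 0.5) == (y > 0.5))

eps = 0.3
for name, w in [("standard", train(X, y)), ("adversarial", train(X, y, eps=eps))]:
    clean = accuracy(X, y, w)
    robust = accuracy(fgsm(X, y, w, eps), y, w)
    print(f"{name:11s}  clean acc {clean:.3f}   robust acc {robust:.3f}")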