The Geography of Immersed Lagrangian Fillings of Legendrian Submanifolds

Lisa Traynor
April 24, 2020
Given a smooth knot K in the 3-sphere, a classic question in knot theory is: What surfaces in the 4-ball have boundary equal to K? One can also consider immersed surfaces and ask a “geography” question: What combinations of genus and double points can be realized by surfaces with boundary equal to K? I will discuss symplectic analogues of these questions: Given a Legendrian knot, what Lagrangian surfaces can it bound? What immersed Lagrangian surfaces can it bound?

Deep Generative models and Inverse Problems

Alexandros Dimakis
University of Texas at Austin
April 23, 2020
Modern deep generative models like GANs, VAEs and invertible flows are showing amazing results on modeling high-dimensional distributions, especially for images. We will show how they can be used to solve inverse problems by generalizing compressed sensing beyond sparsity. We will present the general framework, new results and open problems in this space.
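As a rough illustration of the "compressed sensing with a generative prior" idea the abstract alludes to (not taken from the talk), one can try to recover a signal from a few linear measurements y = Ax by searching over the latent space of a generator G and minimizing ||A G(z) - y||^2. In the sketch below the generator, dimensions, and step size are invented for the example; in practice G would be a trained GAN, VAE decoder, or flow.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n, m = 4, 100, 20                      # latent dim, signal dim, #measurements (m << n)

# Stand-in "generator" G: latent z -> signal; in practice a trained GAN/VAE/flow decoder.
W1 = rng.standard_normal((n, k))
W2 = rng.standard_normal((n, n)) / np.sqrt(n)
def G(z):
    return W2 @ np.tanh(W1 @ z)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
x_true = G(rng.standard_normal(k))             # a signal in the range of G
y = A @ x_true                                  # observed linear measurements

# Recover by gradient descent on z -> 0.5 * ||A G(z) - y||^2 (nonconvex, so this is
# only a heuristic; restarts or better optimizers are used in practice).
def residual_and_grad(z):
    h = np.tanh(W1 @ z)
    r = A @ (W2 @ h) - y
    J = A @ W2 @ ((1 - h**2)[:, None] * W1)    # Jacobian of z -> A G(z)
    return r, J.T @ r

z = np.zeros(k)
for _ in range(5000):
    r, g = residual_and_grad(z)
    z -= 1e-3 * g

print("final measurement residual:", np.linalg.norm(A @ G(z) - y))
```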

Geodesically Convex Optimization (or, can we prove P!=NP using gradient descent?)

Avi Wigderson
Herbert H. Maass Professor, School of Mathematics
April 21, 2020
This talk aims to summarize a project I was involved in during the past 5 years, with the hope of explaining our most complete understanding so far, as well as challenges and open problems. I will summarize the main messages of this project and plan to describe, through examples, many of the concepts they refer to, and the evolution of ideas leading to them. No special background is assumed.

A variational approach to the regularity theory for the Monge-Ampère equation

Felix Otto
Max Planck Institute Leipzig
April 20, 2020
We present a purely variational approach to the regularity theory for the Monge-Ampère equation, or rather optimal transportation, introduced with M. Goldman. Following De Giorgi’s philosophy for the regularity theory of minimal surfaces, it is based on the approximation of the displacement by a harmonic gradient, which leads to a One-Step Improvement Lemma, and feeds into a Campanato iteration on the C^{1,α}-level for the displacement, capitalizing on affine invariance.
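For orientation, here is a schematic (and deliberately imprecise) rendering of the scheme described above; the exact normalizations, the data term D(R) measuring how far the two densities are from Lebesgue measure, and the constants differ in the Goldman-Otto paper.

```latex
% Excess energy of the transport map T on the ball B_R (schematic normalization):
E(R) \;:=\; \frac{1}{|B_R|} \int_{B_R} |T(x) - x|^2 \, dx .

% Harmonic approximation: where E(R) + D(R) is small, the displacement T - \mathrm{id}
% is close in L^2 (on a smaller ball) to a harmonic gradient \nabla\phi.
% One-step improvement: after an affine renormalization, one gains a fixed factor,
E(\theta R) \;\lesssim\; \theta^{2\alpha} \bigl( E(R) + D(R) \bigr), \qquad 0 < \theta, \alpha < 1 .

% Iterating over dyadic scales (Campanato iteration) yields C^{1,\alpha}-regularity
% of the displacement wherever E + D is small.
```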

Equivariant quantum operations and relations between them

Nicholas Wilkins
University of Bristol
April 17, 2020
There is growing interest in looking at operations on quantum cohomology that take into account symmetries in the holomorphic spheres (such as the quantum Steenrod powers, using a Z/p-symmetry). In order to prove relations between them, one needs to generalise this to include equivariant operations with more marked points, varying domains and different symmetry groups. We will look at the general method of construction of these operations, as well as two distinct examples of relations between them.

A Tutorial on Entanglement Island Computations

Raghu Mahajan
Member, School of Natural Sciences, Institute for Advanced Study
April 17, 2020
In this talk we will present details of quantum extremal surface computations in a simple setup, demonstrating the role of entanglement islands in resolving entropy paradoxes in gravity. The setup involves eternal AdS2 black holes in thermal equilibrium with auxiliary bath systems. We will also describe the extension of this setup to higher dimensions using Randall-Sundrum branes.
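For readers new to the subject, the computations in question extremize a generalized entropy over candidate "islands" I; schematically, in the conventions common in the literature (not necessarily those of the talk), the island prescription for the entropy of a bath region R reads:

```latex
S(R) \;=\; \min_{I}\, \operatorname{ext}_{I}
\left[ \frac{\mathrm{Area}(\partial I)}{4 G_N} \;+\; S_{\mathrm{matter}}\bigl(R \cup I\bigr) \right] .
```

In setups of this kind, the no-island saddle dominates at early times and the entropy grows, while at late times an island saddle takes over and the entropy saturates, reproducing the Page curve.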

Deep equilibrium models via monotone operators

Zico Kolter
April 16, 2020
In this talk, I will first introduce our recent work on the Deep Equilibrium Model (DEQ). Instead of stacking nonlinear layers, as is common in deep learning, this approach finds the equilibrium point of the repeated iteration of a single nonlinear layer, then backpropagates through the layer directly using the implicit function theorem. The resulting method matches state-of-the-art performance in many domains (while consuming much less memory), and can theoretically express any "traditional" deep network with just a single layer.
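To make the mechanism concrete, here is a minimal, dependency-light sketch (not the authors' implementation) of a DEQ with a single tanh layer: the forward pass iterates the layer to a fixed point, and the backward pass differentiates through the equilibrium with the implicit function theorem rather than through the stored iterates. The layer, sizes, and scaling are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
W = 0.3 * rng.standard_normal((n, n)) / np.sqrt(n)   # scaled so the layer map is a contraction
x = rng.standard_normal(n)                            # the "input injection"

def f(z, x):
    # the single nonlinear layer that is iterated to equilibrium
    return np.tanh(W @ z + x)

def forward_equilibrium(x, tol=1e-10, max_iter=1000):
    # forward pass: fixed-point iteration to find z* with z* = f(z*, x)
    z = np.zeros_like(x)
    for _ in range(max_iter):
        z_new = f(z, x)
        if np.linalg.norm(z_new - z) < tol:
            return z_new
        z = z_new
    return z

def grad_wrt_x(z_star, x, grad_out):
    # Implicit function theorem at z* = f(z*, x):
    #   dL/dx = (df/dx)^T (I - df/dz)^{-T} dL/dz*,  Jacobians evaluated at z*.
    s = 1.0 - np.tanh(W @ z_star + x) ** 2            # elementwise tanh' at the equilibrium
    J = s[:, None] * W                                 # df/dz
    dfdx = np.diag(s)                                  # df/dx
    return dfdx.T @ np.linalg.solve((np.eye(n) - J).T, grad_out)

z_star = forward_equilibrium(x)
g = grad_wrt_x(z_star, x, grad_out=np.ones(n))        # gradient of sum(z*) w.r.t. x
print(z_star, g)
```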

The Peculiar Optimization and Regularization Challenges in Multi-Task Learning and Meta-Learning

Chelsea Finn
April 16, 2020
Despite the success of deep learning, much of that success has come in settings where the goal is to learn a single, single-purpose function from data. However, in many contexts, we hope to optimize neural networks for multiple, distinct tasks (i.e. multi-task learning), and to optimize so that what is learned from these tasks transfers to the acquisition of new tasks (e.g. as in meta-learning).
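As a toy illustration (not from the talk) of the two objectives mentioned above, the sketch below contrasts multi-task learning (one shared parameter trained on the average task loss) with a MAML-style meta-learning objective (an initialization optimized so that one inner gradient step does well on each task). The scalar tasks, step sizes, and loss are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
tasks = rng.standard_normal(8)            # task i: fit the scalar target a_i

def grad(theta, a):                       # gradient of the squared loss 0.5*(theta - a)**2
    return theta - a

# Multi-task learning: one shared parameter minimizing the average task loss.
theta_mt = 0.0
for _ in range(200):
    theta_mt -= 0.1 * np.mean([grad(theta_mt, a) for a in tasks])

# Meta-learning (MAML-style): find an initialization theta such that the adapted
# parameter theta_i = theta - alpha * grad(theta, a_i) does well on task i.
# The outer gradient differentiates through the inner update; for this quadratic
# loss, d(theta_i)/d(theta) = 1 - alpha.
alpha, theta_meta = 0.5, 0.0
for _ in range(200):
    outer = [(1 - alpha) * grad(theta_meta - alpha * grad(theta_meta, a), a) for a in tasks]
    theta_meta -= 0.1 * np.mean(outer)

print("multi-task solution:", theta_mt, " meta-learned initialization:", theta_meta)
```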