Evaluating Lossy Compression Rates of Deep Generative Models

Roger Grosse
April 15, 2020
Implicit generative models such as GANs have achieved remarkable progress at generating convincing fake images, but how well do they really match the distribution? Log-likelihood has been used extensively to evaluate generative models whenever it’s convenient to do so, but measuring log-likelihoods for implicit generative models presents computational challenges. Furthermore, in order to obtain a density, one needs to smooth the distribution using a noise model (typically Gaussian), and this choice is hard to motivate.
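
For readers who want a concrete picture of the smoothing step mentioned above, here is a minimal sketch, not taken from the talk: an implicit generator G(z) is turned into a density by adding Gaussian observation noise, and the log-likelihood is then estimated by naive Monte Carlo. The generator, noise scale sigma, and sample count below are hypothetical placeholders, and in realistic dimensions this naive estimator has enormous variance, which is part of the computational challenge the abstract refers to.

# Illustrative sketch (not from the talk). The generator, noise scale sigma,
# and sample count are hypothetical placeholders.
import numpy as np

def log_likelihood_gaussian_smoothed(x, generator, latent_dim, sigma=0.1, n_samples=10000):
    """Estimate log p_sigma(x), where p_sigma(x) = E_z[ N(x; G(z), sigma^2 I) ]."""
    d = x.shape[0]
    z = np.random.randn(n_samples, latent_dim)              # z ~ N(0, I)
    gx = np.array([generator(zi) for zi in z])               # G(z) for each sample
    sq_dist = np.sum((gx - x) ** 2, axis=1)
    log_gauss = -0.5 * sq_dist / sigma**2 - 0.5 * d * np.log(2 * np.pi * sigma**2)
    m = log_gauss.max()                                      # numerically stable log-mean-exp
    return m + np.log(np.mean(np.exp(log_gauss - m)))

# Toy usage: a "generator" that maps 2-d latents through a fixed linear map.
rng = np.random.default_rng(0)
W = rng.standard_normal((5, 2))
gen = lambda z: W @ z
x_obs = gen(rng.standard_normal(2)) + 0.1 * rng.standard_normal(5)
print(log_likelihood_gaussian_smoothed(x_obs, gen, latent_dim=2))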

Towards Robust Artificial Intelligence

Pushmeet Kohli
April 15, 2020
Deep learning has driven rapid progress in machine learning and artificial intelligence, leading to dramatically improved solutions to many challenging problems such as image understanding, speech recognition, and control. Despite these remarkable successes, researchers have observed some intriguing and troubling aspects of the behaviour of these models. A case in point is the presence of adversarial examples, which make learning-based systems fail in unexpected ways.
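
As background on the adversarial examples mentioned above, here is a minimal sketch, not taken from the talk, of one standard construction (the fast gradient sign method) applied to a toy logistic-regression model; the weights, input, and eps below are arbitrary placeholders.

# Illustrative sketch (not from the talk): the fast gradient sign method on a
# toy logistic-regression model. Weights, input, and eps are placeholders.
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def fgsm_perturb(x, y, w, b, eps):
    """Move x by eps * sign(grad_x loss) to increase the cross-entropy loss."""
    p = sigmoid(w @ x + b)               # predicted probability of class 1
    grad_x = (p - y) * w                 # gradient of the loss with respect to x
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w, b = rng.standard_normal(20), 0.0
x = rng.standard_normal(20)
y = 1.0 if sigmoid(w @ x + b) > 0.5 else 0.0     # use the model's own label
x_adv = fgsm_perturb(x, y, w, b, eps=0.25)
# The per-coordinate change is small, but its aggregate effect on the logit is
# large, so the model's confidence is pushed sharply toward the opposite class.
print("clean prediction:      ", sigmoid(w @ x + b))
print("adversarial prediction:", sigmoid(w @ x_adv + b))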

Legal Theorems of Privacy

Kobbi Nissim
Georgetown University
April 13, 2020
There are significant gaps between legal and technical thinking around data privacy. Technical standards such as k-anonymity and differential privacy are described using mathematical language, whereas legal standards are not rigorous from a mathematical point of view and often resort to concepts such as de-identification and anonymization, which they only partially define. As a result, arguments about the adequacy of technical privacy measures for satisfying legal privacy often lack rigor, and their conclusions are uncertain.
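
For concreteness, here is the standard mathematical definition of differential privacy referred to above, written in LaTeX; this is general background rather than material from the talk.

% Standard definition of epsilon-differential privacy (general background,
% not material from the talk).
A randomized mechanism $M$ is $\varepsilon$-differentially private if, for all
datasets $D$ and $D'$ differing in the data of a single individual and for
every set of outputs $S$,
\[
  \Pr[M(D) \in S] \;\le\; e^{\varepsilon} \, \Pr[M(D') \in S].
\]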

A New Topological Symmetry of Asymptotically Flat Spacetimes

Uri Kol
New York University
April 13, 2020
I will show that the isometry group of asymptotically flat spacetimes contains, in addition to the BMS group, a new dual supertranslation symmetry. The corresponding new conserved charges are akin to the large magnetic U(1) charges in QED. They factorize the Hilbert space of asymptotic states into distinct super-selection sectors and reveal a rich topological structure exhibited by the asymptotic metric.

Meta-Learning: Why It’s Hard and What We Can Do

Ke Li
Member, School of Mathematics
April 9, 2020
Meta-learning (or learning to learn) studies how to use machine learning to design machine learning methods themselves. We consider an optimization-based formulation of meta-learning that learns to design an optimization algorithm automatically, which we call Learning to Optimize. Surprisingly, it turns out that the most straightforward approach to learning such an algorithm, namely backpropagation, does not work. We explore the underlying reason for this failure, devise a solution based on reinforcement learning, and discuss the open challenges in meta-learning.
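
To make the Learning to Optimize formulation concrete, here is a minimal sketch, not the method from the talk: the update rule of an inner optimizer is parameterized by meta-parameters, the meta-objective is the loss reached after unrolling that rule for T steps, and the meta-parameters are tuned by random search, a crude stand-in for the reinforcement-learning approach the abstract mentions. All problem sizes and ranges are placeholders.

# Illustrative sketch (not the method from the talk): a toy "learned optimizer"
# whose update rule has two meta-parameters (a step size and a momentum-like
# coefficient), meta-trained by random search on a quadratic objective.
import numpy as np

def unrolled_loss(meta_params, A, b, x0, T=20):
    """Run the parameterized update rule for T steps and return the final loss."""
    lr, mom = meta_params
    x, v = x0.copy(), np.zeros_like(x0)
    for _ in range(T):
        grad = A @ x - b                 # gradient of 0.5 x^T A x - b^T x
        v = mom * v - lr * grad          # parameterized update rule
        x = x + v
    return 0.5 * x @ A @ x - b @ x       # meta-objective: loss after T steps

rng = np.random.default_rng(0)
A = np.diag(rng.uniform(0.5, 5.0, size=10))    # a simple quadratic problem
b = rng.standard_normal(10)
x0 = rng.standard_normal(10)

best, best_loss = None, np.inf
for _ in range(200):                            # random search over meta-parameters
    candidate = rng.uniform([0.0, 0.0], [0.5, 0.99])
    loss = unrolled_loss(candidate, A, b, x0)
    if loss < best_loss:
        best, best_loss = candidate, loss
print("best (lr, momentum):", best, " final loss:", best_loss)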

On the Kudla-Rapoport conjecture

Chao Li
Columbia University
April 9, 2020
The Kudla-Rapoport conjecture predicts a precise identity between the arithmetic intersection number of special cycles on unitary Rapoport-Zink spaces and the derivative of local representation densities of hermitian forms. It is a key local ingredient in establishing the arithmetic Siegel-Weil formula and the arithmetic Rallis inner product formula, relating the height of special cycles on Shimura varieties to the derivative of Siegel Eisenstein series and L-functions. We will motivate this conjecture, explain a proof, and discuss global applications.

Primality testing

Andrey Kupavskii
Member, School of Mathematics
April 7, 2020
In the talk, I will explain the algorithm invented by Agrawal, Kayal, and Saxena for testing whether a number is prime, together with its analysis.
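
As background, here is the elementary characterization of primality that the Agrawal-Kayal-Saxena test builds on: for n >= 2 and gcd(a, n) = 1, n is prime if and only if (X + a)^n ≡ X^n + a (mod n). The sketch below checks this congruence directly, which takes time exponential in the bit length of n; the actual algorithm achieves polynomial time by checking the congruence modulo (X^r - 1, n) for a carefully chosen small r and a small range of values a. The code is illustrative only and is not the algorithm presented in the talk.

# Illustrative sketch: the polynomial identity behind the AKS test, checked
# directly (exponential time). Not the polynomial-time algorithm from the talk.
from math import gcd

def is_prime_via_polynomial_identity(n, a=1):
    """Check (X + a)^n ≡ X^n + a (mod n), which holds iff n is prime."""
    if n < 2:
        return False
    if gcd(a, n) != 1:
        raise ValueError("the identity requires gcd(a, n) = 1")
    # Coefficient of X^k in (X + a)^n is C(n, k) * a^(n-k); the congruence holds
    # iff every coefficient with 0 < k < n vanishes mod n and a^n ≡ a (mod n).
    coeff = 1                                # running binomial coefficient C(n, k)
    for k in range(1, n):
        coeff = coeff * (n - k + 1) // k     # exact integer update C(n, k-1) -> C(n, k)
        if (coeff * pow(a, n - k, n)) % n != 0:
            return False
    return pow(a, n, n) == a % n             # constant term: a^n ≡ a (mod n)

print([m for m in range(2, 40) if is_prime_via_polynomial_identity(m)])
# prints [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]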

Interpolation in learning: steps towards understanding when overparameterization is harmless, when it helps, and when it causes harm

Anant Sahai
University of California, Berkeley
April 7, 2020
A continuing mystery in understanding the empirical success of deep neural networks has been their ability to achieve zero training error and yet generalize well, even when the training data is noisy and there are many more parameters than data points. Following the information-theoretic tradition of seeking understanding, this talk will share our four-part approach to shedding some light on this phenomenon.
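
As a toy illustration of interpolation with more parameters than data points, not taken from the talk, the sketch below fits an overparameterized linear model on random nonlinear features to noisy labels using the minimum-norm least-squares solution; it achieves (numerically) zero training error while its test error can still be measured. All dimensions and noise levels are arbitrary placeholders.

# Illustrative sketch (not from the talk): an overparameterized model that
# interpolates noisy training labels via the minimum-norm solution.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d, n_features = 50, 500, 5, 400    # many more features than points

def features(X, W):
    return np.cos(X @ W)                  # simple random nonlinear features

W = rng.standard_normal((d, n_features))
w_true = rng.standard_normal(d)

X_train = rng.standard_normal((n_train, d))
y_train = X_train @ w_true + 0.5 * rng.standard_normal(n_train)   # noisy labels
X_test = rng.standard_normal((n_test, d))
y_test = X_test @ w_true

Phi_train, Phi_test = features(X_train, W), features(X_test, W)
# Minimum-norm interpolating solution: theta = pinv(Phi) @ y
theta = np.linalg.pinv(Phi_train) @ y_train

train_err = np.mean((Phi_train @ theta - y_train) ** 2)
test_err = np.mean((Phi_test @ theta - y_test) ** 2)
print(f"train MSE: {train_err:.2e}  (essentially zero: the model interpolates)")
print(f"test MSE:  {test_err:.2f}")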