Distinguishing monotone Lagrangians via holomorphic annuli

Ailsa Keating
University of Cambridge
June 26, 2020
We present techniques for constructing families of compact, monotone (including exact) Lagrangians in certain affine varieties, starting with Brieskorn-Pham hypersurfaces. We will focus on dimensions 2 and 3. In particular, we'll explain how to set up well-defined counts of holomorphic annuli for a range of these families. Time allowing, we will give a number of applications.

Instance-Hiding Schemes for Private Distributed Learning

Sanjeev Arora
Princeton University; Distinguished Visiting Professor, School of Mathematics
June 25, 2020
An important problem today is how to allow multiple distributed entities to train a shared neural network on their private data while protecting data privacy. Federated learning is a standard framework for distributed deep learning, and one would like to guarantee full privacy within that framework. Proposed methods, such as homomorphic encryption and differential privacy, come with drawbacks such as large computational overhead or a large drop in accuracy.
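The federated-learning setup described above can be sketched in a few lines: clients train locally on private data and only model updates, never raw data, reach the server. The linear model, client data, and all names below are illustrative assumptions, not the speaker's construction.

```python
# Minimal federated averaging (FedAvg) sketch on a toy least-squares
# problem; illustrative only, not the method proposed in the talk.
import numpy as np

rng = np.random.default_rng(0)

def local_sgd(w, X, y, lr=0.1, steps=10):
    """One client's local gradient steps on its private data."""
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Three clients, each holding private data from the same ground truth.
w_true = np.array([1.0, -2.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ w_true + 0.01 * rng.normal(size=50)))

w = np.zeros(2)  # shared global model
for rnd in range(20):  # communication rounds
    # Each client trains locally; only the updated weights are sent.
    local_models = [local_sgd(w.copy(), X, y) for X, y in clients]
    w = np.mean(local_models, axis=0)  # server averages the updates
```

Note that while the raw data never leaves a client, the transmitted weight updates can still leak information, which is exactly why techniques like homomorphic encryption and differential privacy are brought in.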

Generalizable Adversarial Robustness to Unforeseen Attacks

Soheil Feizi
University of Maryland
June 23, 2020
In the last couple of years, substantial progress has been made in enhancing the robustness of models against adversarial attacks. However, two major shortcomings remain: (i) practical defenses are often vulnerable to strong “adaptive” attack algorithms, and (ii) current defenses generalize poorly to “unforeseen” attack threat models (ones not used in training).
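To make the notion of an attack threat model concrete, here is a hedged sketch of the classic fast gradient sign method (FGSM) against a toy logistic-regression model; the classifier, input, and step size are illustrative assumptions, not examples from the talk.

```python
# FGSM sketch: one signed-gradient step on the *input*, constrained to
# an L-infinity ball of radius eps (the "threat model"). Illustrative.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A fixed linear classifier and a correctly classified input.
w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([1.0, 0.5]), 1  # true label 1; score w @ x + b = 1.5

# Gradient of the logistic loss with respect to the input x.
grad_x = (sigmoid(w @ x + b) - y) * w

# FGSM perturbation of size eps per coordinate.
eps = 1.0
x_adv = x + eps * np.sign(grad_x)
```

A defense trained only against such L-infinity perturbations may fail under a different, "unforeseen" threat model (e.g. L2 or perceptual perturbations), which is the generalization gap the talk addresses.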

Independence of ℓ for Frobenius conjugacy classes attached to abelian varieties

Rong Zhou
Imperial College London
June 18, 2020
Let A be an abelian variety over a number field E ⊂ ℂ and let v be a place of good reduction lying over a prime p. For a prime ℓ ≠ p, a result of Deligne implies that, upon replacing E by a finite extension, the Galois representation on the ℓ-adic Tate module of A factors as ρ_ℓ : Gal(Ē/E) → G_A, where G_A is the Mumford–Tate group of A_ℂ. For p > 2, we prove that the conjugacy class of ρ_ℓ(Frob_v) is defined over ℚ and independent of ℓ. This is joint work with Mark Kisin.

The challenges of model-based reinforcement learning and how to overcome them

Csaba Szepesvári
University of Alberta
June 18, 2020
Some believe that truly effective and efficient reinforcement learning algorithms must explicitly construct and explicitly reason with models that capture the causal structure of the world. In short, model-based reinforcement learning is not optional. As this is not a new belief, it may be surprising that, empirically, at least as far as the current state of the art is concerned, the majority of the top-performing algorithms are model-free.

On learning in the presence of biased data and strategic behavior

Avrim Blum
Toyota Technological Institute at Chicago
June 16, 2020
In this talk I will discuss two lines of work involving learning in the presence of biased data and strategic behavior. In the first, we ask whether fairness constraints on learning algorithms can actually improve the accuracy of the classifier produced, when training data is unrepresentative or corrupted due to bias. Typically, fairness constraints are analyzed as a tradeoff with classical objectives such as accuracy. Our results here show there are natural scenarios where they can be a win-win, helping to improve overall accuracy.

Floer Cohomology and Arc Spaces

Mark McLean
Stony Brook University
June 12, 2020
Let f be a polynomial over the complex numbers with an isolated singular point at the origin and let d be a positive integer. To such a polynomial we can assign a variety called the dth contact locus of f. Morally, this corresponds to the space of d-jets of holomorphic disks in complex affine space whose boundary 'wraps' around the singularity d times. We show that Floer cohomology of the dth power of the Milnor monodromy map is isomorphic to compactly supported cohomology of the dth contact locus.

On Langevin Dynamics in Machine Learning

Michael I. Jordan
University of California, Berkeley
June 11, 2020
Langevin diffusions are continuous-time stochastic processes that are based on the gradient of a potential function. As such they have many connections, some known and many still to be explored, to gradient-based machine learning. I'll discuss several recent results in this vein: (1) the use of Langevin-based algorithms in bandit problems; (2) the acceleration of Langevin diffusions; (3) how to use Langevin Monte Carlo without making smoothness assumptions.
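The gradient-based sampling idea behind a Langevin diffusion can be illustrated by its simplest discretization, the unadjusted Langevin algorithm (ULA). The target density, step size, and burn-in length below are illustrative assumptions, not choices from the talk.

```python
# ULA sketch: Euler-Maruyama discretization of the Langevin diffusion
# dX_t = grad log pi(X_t) dt + sqrt(2) dW_t, here targeting a standard
# Gaussian pi. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def grad_log_density(x):
    """Gradient of log pi for a standard Gaussian target: -x."""
    return -x

h = 0.1      # step size
x = 5.0      # start far from the mode
samples = []
for t in range(20000):
    # Gradient step plus injected Gaussian noise of variance 2h.
    x = x + h * grad_log_density(x) + np.sqrt(2 * h) * rng.normal()
    if t > 2000:  # discard burn-in
        samples.append(x)

samples = np.array(samples)
```

The empirical mean and standard deviation of `samples` approach those of the Gaussian target (up to a small discretization bias of order h), which is the sense in which gradient information drives the sampler toward the potential's minimum while the noise keeps it exploring.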

New constraints on the Galois configurations of algebraic integers in the complex plane

Vesselin Dimitrov
University of Toronto
June 11, 2020
Fekete (1923) discovered the notion of transfinite diameter while studying the possible configurations of Galois orbits of algebraic integers in the complex plane. Based purely on the fact that the discriminants of monic irreducible integer polynomials P(X) ∈ ℤ[X] are at least 1 in magnitude (since they are non-zero integers), he found that the incidences (K, P) between these polynomials P(X) and compacts K ⊂ ℂ of transfinite diameter d(K)
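For reference, the transfinite diameter of a compact K ⊂ ℂ can be defined via Fekete's n-point diameters; this is the standard definition, stated here for context rather than taken from the talk:

```latex
d_n(K) \;=\; \max_{z_1,\dots,z_n \in K} \;
  \prod_{1 \le i < j \le n} |z_i - z_j|^{\frac{2}{n(n-1)}},
\qquad
d(K) \;=\; \lim_{n \to \infty} d_n(K).
```

The sequence d_n(K) is nonincreasing in n, so the limit exists; d(K) also coincides with the logarithmic capacity of K.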

What Do Our Models Learn?

Aleksander Mądry
Massachusetts Institute of Technology
June 9, 2020
Large-scale vision benchmarks have driven---and often even defined---progress in machine learning. However, these benchmarks are merely proxies for the real-world tasks we actually care about. How well do our benchmarks capture such tasks?