Recently Added

Knot Floer homology and bordered algebras

Peter Ozsváth
Princeton University
July 10, 2020
Knot Floer homology is an invariant for knots in three-space, defined as a Lagrangian Floer homology in a symmetric product. It has the form of a bigraded vector space, encoding topological information about the knot. I will discuss an algebraic approach to computing knot Floer homology, and a corresponding version for links, based on decomposing knot diagrams.

This is joint work with Zoltán Szabó, building on earlier joint work (bordered Heegaard Floer homology) with Robert Lipshitz and Dylan Thurston.
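
As one concrete instance of the topological information the bigraded vector space encodes (a standard property of the invariant, stated here as background rather than as content of the talk), its graded Euler characteristic recovers the Alexander polynomial:

    \[
    \sum_{m,s} (-1)^m \dim \widehat{HFK}_m(K,s)\, t^s = \Delta_K(t),
    \]

where m is the Maslov grading and s the Alexander grading.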

Role of Interaction in Competitive Optimization

Anima Anandkumar
California Institute of Technology
July 9, 2020
Competitive optimization is needed for many ML problems, such as training GANs, robust reinforcement learning, and adversarial learning. Standard approaches to competitive optimization have each agent independently optimize its own objective function using SGD or another gradient-based method. However, these methods suffer from oscillations and instability, since each agent's optimization does not account for the interaction among the players. We introduce competitive gradient descent (CGD), which explicitly incorporates the interaction by solving for the Nash equilibrium of a local game.
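
As a rough illustration (my own sketch, not code from the talk), consider the bilinear zero-sum game f(x, y) = xy, whose Nash equilibrium is the origin: simultaneous gradient descent/ascent spirals outward, while the closed-form CGD update for this game contracts toward the equilibrium.

    import math

    eta = 0.2          # step size
    x, y = 1.0, 1.0    # initial strategies; the Nash equilibrium is (0, 0)

    for _ in range(100):
        # For f(x, y) = x*y: grad_x f = y, grad_y f = x, and the mixed second
        # derivatives D_xy f = D_yx f = 1 carry the player interaction.
        # Solving CGD's local bilinear game for both players gives, in closed form:
        denom = 1.0 + eta ** 2
        dx = -eta * (y + eta * x) / denom  # x descends, anticipating y's response
        dy = eta * (x - eta * y) / denom   # y ascends, anticipating x's response
        x, y = x + dx, y + dy

    print(math.hypot(x, y))  # ~0.2, shrinking toward (0, 0); plain GDA
                             # (x -= eta*y; y += eta*x) grows the norm by
                             # sqrt(1 + eta**2) every step and diverges.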

Machine learning-based design (of proteins, small molecules and beyond)

Jennifer Listgarten
University of California, Berkeley
July 7, 2020
Data-driven design is making headway into a number of application areas, including protein, small-molecule, and materials engineering. The design goal is to construct an object with desired properties, such as a protein that binds to a target more tightly than previously observed. To that end, costly experimental measurements are being replaced with calls to a high-capacity regression model trained on labeled data, which can be leveraged in an in silico search for promising design candidates.
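
As a caricature of that pipeline (my own sketch; the model, features, and search procedure are generic placeholders, not methods from the talk), one fits a regressor to measured designs and screens a large candidate pool in silico:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    # Stand-in for labeled design data: feature vectors of measured designs
    # (e.g., encoded protein sequences) and their measured property values.
    X_train = rng.normal(size=(200, 20))
    y_train = X_train[:, :5].sum(axis=1) + 0.1 * rng.normal(size=200)

    # A high-capacity regression model replaces the costly experimental oracle.
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # In silico search: score a large candidate pool, keep the most promising.
    candidates = rng.normal(size=(10_000, 20))
    scores = model.predict(candidates)
    shortlist = candidates[np.argsort(scores)[-10:]]  # top 10 to test in the lab

    # Caveat: the search tends to drift away from the training distribution,
    # exactly where the regressor is least reliable -- a central difficulty
    # in model-based design.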

Infinite staircases and reflexive polygons

Ana Rita Pires
University of Edinburgh
July 3, 2020
A classic result, due to McDuff and Schlenk, asserts that the function that encodes when a four-dimensional symplectic ellipsoid can be embedded into a four-dimensional ball has a remarkable structure: the function has infinitely many corners, determined by the odd-index Fibonacci numbers, that fit together to form an infinite staircase. The work of McDuff and Schlenk has recently led to considerable interest in understanding when the ellipsoid embedding function for other symplectic 4-manifolds is partly described by an infinite staircase.
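
For reference (standard background in one common normalization, not new content from the talk), the function in question is

    \[
    c(a) = \inf\{\lambda > 0 : E(1,a) \text{ symplectically embeds in } B^4(\lambda)\},
    \qquad a \ge 1,
    \]

and the theorem of McDuff and Schlenk describes c on [1, τ⁴], where τ = (1+√5)/2 is the golden ratio: there the graph is an infinite staircase whose corners occur at ratios of odd-index Fibonacci numbers and accumulate at the point (τ⁴, τ²).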

Distinguishing monotone Lagrangians via holomorphic annuli

Ailsa Keating
University of Cambridge
June 26, 2020
We present techniques for constructing families of compact, monotone (including exact) Lagrangians in certain affine varieties, starting with Brieskorn–Pham hypersurfaces. We will focus on dimensions 2 and 3. In particular, we will explain how to set up well-defined counts of holomorphic annuli for a range of these families. Time permitting, we will give a number of applications.

Instance-Hiding Schemes for Private Distributed Learning

Sanjeev Arora
Princeton University; Distinguished Visiting Professor, School of Mathematics
June 25, 2020
An important problem today is how to allow multiple distributed entities to train a shared neural network on their private data while protecting data privacy. Federated learning is a standard framework for distributed deep learning, and one would like to ensure full privacy in that framework. Proposed methods, such as homomorphic encryption and differential privacy, come with drawbacks such as large computational overhead or a large drop in accuracy.
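
For concreteness, here is a minimal sketch of the federated averaging pattern underlying this setting (a generic toy illustration of the baseline framework, not of the instance-hiding schemes proposed in the talk):

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy setup: five clients, each holding private data for a shared linear model.
    clients = [(rng.normal(size=(50, 10)), rng.normal(size=50)) for _ in range(5)]
    w = np.zeros(10)  # shared global model

    for _ in range(20):  # communication rounds
        local_models = []
        for X, y in clients:
            w_local = w.copy()
            for _ in range(5):  # a few local gradient steps on private data
                grad = X.T @ (X @ w_local - y) / len(y)
                w_local -= 0.05 * grad
            local_models.append(w_local)  # only the model leaves the device
        w = np.mean(local_models, axis=0)  # server averages the client models

    # Shared models and gradients can still leak information about the private
    # data, which is why stronger protections are sought.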

Generalizable Adversarial Robustness to Unforeseen Attacks

Soheil Feizi
University of Maryland
June 23, 2020
Over the last couple of years, considerable progress has been made in enhancing the robustness of models against adversarial attacks. However, two major shortcomings remain: (i) practical defenses are often vulnerable to strong “adaptive” attack algorithms, and (ii) current defenses generalize poorly to “unforeseen” attack threat models (ones not used in training).
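
For context, a typical defense is trained against one fixed threat model, such as an ℓ∞ ball around each input; a minimal sketch of the standard PGD attack under that threat model (a generic illustration with a toy linear classifier, not the adaptive attacks discussed in the talk) looks like this:

    import numpy as np

    def pgd_linf(x, y, grad_loss, eps=0.03, alpha=0.007, steps=40):
        """l_inf-bounded PGD: ascend the loss, then project into the threat model."""
        x_adv = x + np.random.uniform(-eps, eps, size=x.shape)  # random start
        for _ in range(steps):
            x_adv = x_adv + alpha * np.sign(grad_loss(x_adv, y))  # signed-gradient step
            x_adv = np.clip(x_adv, x - eps, x + eps)  # project onto the l_inf ball
            x_adv = np.clip(x_adv, 0.0, 1.0)          # keep a valid input range
        return x_adv

    # Toy example: logistic loss of a linear classifier, log(1 + exp(-y * w @ x)).
    w = np.array([1.0, -2.0, 0.5])
    grad = lambda x, y: -y * w / (1.0 + np.exp(y * (w @ x)))  # d(loss)/d(x)
    x_adv = pgd_linf(np.array([0.2, 0.5, 0.7]), 1.0, grad)

    # A defense tuned only to this l_inf threat model may transfer poorly to
    # "unforeseen" ones (e.g., l_1, spatial, or perceptual attacks) -- point (ii).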

On learning in the presence of biased data and strategic behavior

Avrim Blum
Toyota Technological Institute at Chicago
June 16, 2020
In this talk I will discuss two lines of work involving learning in the presence of biased data and strategic behavior. In the first, we ask whether fairness constraints on learning algorithms can actually improve the accuracy of the classifier produced, when training data is unrepresentative or corrupted due to bias. Typically, fairness constraints are analyzed as a tradeoff with classical objectives such as accuracy. Our results here show there are natural scenarios where they can be a win-win, helping to improve overall accuracy.

The challenges of model-based reinforcement learning and how to overcome them

Csaba Szepesvári
University of Alberta
June 18, 2020
Some believe that truly effective and efficient reinforcement learning algorithms must explicitly construct, and explicitly reason with, models that capture the causal structure of the world. In short, model-based reinforcement learning is not optional. As this is not a new belief, it may be surprising that, empirically, at least as far as the current state of the art is concerned, the majority of the top-performing algorithms are model-free.

Independence of ℓ for Frobenius conjugacy classes attached to abelian varieties

Rong Zhou
Imperial College London
June 18, 2020
Let A be an abelian variety over a number field E ⊂ ℂ and let v be a place of good reduction lying over a prime p. For a prime ℓ ≠ p, a result of Deligne implies that, upon replacing E by a finite extension, the Galois representation on the ℓ-adic Tate module of A factors as ρℓ : Gal(Ē/E) → GA(ℚℓ), where GA is the Mumford–Tate group of Aℂ. For p > 2, we prove that the conjugacy class of ρℓ(Frobv) is defined over ℚ and independent of ℓ. This is joint work with Mark Kisin.
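
In display form, the factorization from Deligne's result reads

    \[
    \rho_\ell \colon \operatorname{Gal}(\overline{E}/E) \longrightarrow G_A(\mathbb{Q}_\ell),
    \]

and the theorem asserts that, for p > 2, the conjugacy class of ρℓ(Frobv) in GA is defined over ℚ and independent of the choice of ℓ.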