The Simplicity Conjecture

Dan Cristofaro-Gardiner
Member, School of Mathematics
April 3, 2020
I will explain recent joint work proving that the group of compactly supported area-preserving homeomorphisms of the two-disc is not a simple group; this answers the "Simplicity Conjecture" in the affirmative. Our proof uses new spectral invariants, defined via periodic Floer homology, which I will introduce; these invariants recover the Calabi invariant of monotone twists.
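For orientation (my addition; normalization conventions vary in the literature): for a compactly supported Hamiltonian diffeomorphism \varphi of the disc generated by a Hamiltonian H_t, the Calabi invariant mentioned above is

\[
\mathrm{Cal}(\varphi) = \int_0^1 \int_{D^2} H_t \, \omega \, dt,
\]

where \omega is the area form; it defines a homomorphism to \mathbb{R} on the group of such diffeomorphisms.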

Density conjecture for horizontal families of lattices in SL(2)

Mikolaj Fraczyk
Member, School of Mathematics
April 2, 2020
Let G be a real semi-simple Lie group with an irreducible unitary representation \pi. The non-temperedness of \pi is measured by the parameter p(\pi), defined as the infimum over p \geq 2 such that \pi has matrix coefficients in L^p(G). Sarnak and Xue conjectured that for any arithmetic lattice \Gamma \subset G and principal congruence subgroup \Gamma(q) \subset \Gamma, the multiplicity of \pi in L^2(G/\Gamma(q)) is at most O(V(q)^{2/p(\pi)+\epsilon}), where V(q) is the covolume of \Gamma(q).
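In display form, writing m(\pi, q) for the multiplicity of \pi in L^2(G/\Gamma(q)), the conjectured bound reads

\[
m(\pi, q) \ll_{\epsilon} V(q)^{\frac{2}{p(\pi)} + \epsilon}.
\]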

Learning Controllable Representations

Richard Zemel
University of Toronto; Member, School of Mathematics
April 2, 2020
As deep learning systems become more prevalent in real-world applications, it is essential to allow users to exert more control over the system. Imposing some structure on the learned representations enables users to manipulate, interpret, and even obfuscate the representations, and may also improve out-of-distribution generalization. In this talk I will discuss recent work that makes some steps towards these goals, aiming to represent the input in a factorized form, with dimensions of the latent space partitioned into task-dependent and task-independent components.
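A minimal sketch of what such a partitioned latent space might look like; the architecture, dimensions, and names (FactorizedEncoder, dep_head, indep_head) are hypothetical illustrations, not the models from the talk.

import torch
import torch.nn as nn

class FactorizedEncoder(nn.Module):
    """Encoder whose latent code is split into a task-dependent part z_dep
    and a task-independent part z_indep (hypothetical dimensions)."""
    def __init__(self, in_dim=784, dep_dim=16, indep_dim=48):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.dep_head = nn.Linear(256, dep_dim)      # task-dependent factors
        self.indep_head = nn.Linear(256, indep_dim)  # task-independent factors

    def forward(self, x):
        h = self.backbone(x)
        return self.dep_head(h), self.indep_head(h)

# A downstream classifier can be fed only z_dep; swapping or zeroing z_indep
# then leaves predictions untouched, which is one way a user could
# "manipulate" or "obfuscate" the representation.
enc = FactorizedEncoder()
z_dep, z_indep = enc(torch.randn(8, 784))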

Some Recent Insights on Transfer Learning

Samory Kpotufe
Columbia University; Member, School of Mathematics
March 31, 2020
A common situation in Machine Learning is one where training data is not fully representative of a target population, due to bias in the sampling mechanism or high costs in sampling the target population; in such situations, we aim to 'transfer' relevant information from the training data (a.k.a. source data) to the target application. How much information is in the source data? How much target data should we collect, if any? These are all practical questions that depend crucially on 'how far' the source domain is from the target.
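One concrete (if crude) way to quantify 'how far' source is from target is a kernel two-sample statistic; this is my illustrative choice, not the notion of distance from the talk, and the plug-in estimator below is biased but short.

import numpy as np

def mmd_rbf(X, Y, sigma=1.0):
    """Plug-in estimate of squared MMD with an RBF kernel (illustrative)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=(200, 5))
target = rng.normal(0.5, 1.0, size=(200, 5))  # shifted target population
print(mmd_rbf(source, target))  # larger value = source and target further apart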

CSPs with Global Modular Constraints: Algorithms and Hardness via Polynomial Representations

Sivakanth Gopi
Microsoft Research
March 30, 2020
A theorist's dream is to show that hard instances/obstructions for an (optimal) algorithm can be used as gadgets to prove tight hardness reductions (which proves optimality of the algorithm). An example of such a result is that of Prasad Raghavendra, who showed that for any constraint satisfaction problem (CSP), there is an SDP which achieves the best possible approximation factor assuming the Unique Games Conjecture (UGC). We show that a similar phenomenon occurs in CSPs with global modular constraints.
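To fix ideas about the setting (a toy of my own construction, not from the talk): a CSP instance with a global modular constraint keeps only assignments whose number of true variables lies in a fixed residue class.

from itertools import product

# Toy Max-2SAT style CSP on 4 boolean variables, with the *global*
# constraint that the number of true variables is congruent to r mod m.
clauses = [(0, 1), (1, 2), (0, 3)]  # each clause: x_i OR x_j
m, r = 3, 1

best = -1
for x in product([0, 1], repeat=4):
    if sum(x) % m != r:
        continue  # global modular constraint filters assignments
    best = max(best, sum(1 for i, j in clauses if x[i] or x[j]))
print(best)  # best number of satisfied clauses among admissible assignments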

Geometry and 5d N=1 QFTs

Lakshya Bhardwaj
Harvard University
March 30, 2020
I will explain that a geometric theory built upon the theory of complex surfaces can be used to understand a wide variety of phenomena in five-dimensional supersymmetric theories, including the following:
Classification of 5d superconformal field theories (SCFTs)
Enhanced flavor symmetries of 5d SCFTs
5d N=1 gauge theory descriptions of 5d and 6d SCFTs
Dualities between 5d N=1 gauge theories
T-dualities between 6d N=(1,0) little string theories

Fragmentation pseudo-metrics and Lagrangian submanifolds

Octav Cornea
Université de Montréal
March 27, 2020
The purpose of the talk is to discuss a class of pseudo-metrics that can be defined on the set of objects of a triangulated category whose morphisms are endowed with a notion of weight. In case the objects are Lagrangian submanifolds (possibly immersed), there are some natural ways to define such pseudo-metrics and, if the class of Lagrangian submanifolds is unobstructed, these pseudo-metrics are non-degenerate and extend in a natural way the Hofer distance.
The talk is based on joint work with P. Biran and with E. Shelukhin.

Solving Random Matrix Models with Positivity

Henry Lin
Princeton University
March 27, 2020
A new approach to solving random matrix models directly in the large N limit is developed. First, a set of numerical values for some low-point correlation functions is guessed. The large N loop equations are then used to generate values of higher-point correlation functions based on this guess. One then tests whether these higher-point functions are consistent with positivity requirements, e.g., tr M^{2k} > 0. If not, the guessed values are systematically ruled out.
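A minimal sketch of this recipe for the quartic single-matrix model V(M) = M^2/2 + g M^4 (my choice of example; the talk's models, truncation K, and tolerances may differ): guess the second moment m_2, propagate it with the planar loop equations, and reject guesses whose moment (Hankel) matrix fails positive semi-definiteness.

import numpy as np

def moments(m2, g, K):
    """Even planar moments m_k = <tr M^k>/N for V(M) = M^2/2 + g*M^4,
    generated from a guessed m_2 via the large-N loop equation
      m_{k+1} + 4g*m_{k+3} = sum_{j=0}^{k-1} m_j * m_{k-1-j}."""
    m = [0.0] * (K + 1)
    m[0], m[2] = 1.0, m2          # odd moments vanish by symmetry
    for k in range(1, K - 2, 2):  # each odd k determines the even moment m_{k+3}
        s = sum(m[j] * m[k - 1 - j] for j in range(k))
        m[k + 3] = (s - m[k + 1]) / (4 * g)
    return m

def positive(m2, g, K=14):
    """Consistency test: the Hankel matrix [m_{i+j}] must be PSD."""
    m = moments(m2, g, K)
    H = np.array([[m[i + j] for j in range(K // 2)] for i in range(K // 2)])
    return np.linalg.eigvalsh(H).min() > -1e-9

# Scan guesses for m_2 at g = 1 and keep those passing positivity.
allowed = [m2 for m2 in np.linspace(0.2, 0.6, 401) if positive(m2, 1.0)]
if allowed:
    print(f"allowed m2 window: [{allowed[0]:.3f}, {allowed[-1]:.3f}]")

As the truncation K grows, the surviving window of m_2 values narrows around the exact planar answer.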

Margins, perceptrons, and deep networks

Matus Telgarsky
University of Illinois
March 26, 2020
This talk surveys the role of margins in the analysis of deep networks. As a concrete highlight, it sketches a perceptron-based analysis establishing that shallow ReLU networks can achieve small test error even when they are quite narrow, sometimes even logarithmic in the sample size and inverse target error. The analysis and bounds depend on a certain nonlinear margin quantity due to Nitanda and Suzuki, and can lead to tight upper and lower sample complexity bounds.

Joint work with Ziwei Ji.
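As background for the perceptron connection (my illustrative addition, not the analysis from the talk): the classical Novikoff bound says the perceptron makes at most (R/\gamma)^2 mistakes on data of norm at most R separated with margin \gamma, as in this sketch.

import numpy as np

rng = np.random.default_rng(1)
w_star = np.array([1.0, -1.0]) / np.sqrt(2)    # unit "teacher" direction
X = rng.normal(size=(500, 2))
X /= np.linalg.norm(X, axis=1, keepdims=True)  # unit-norm examples, so R = 1
margins = X @ w_star
keep = np.abs(margins) > 0.3                   # enforce margin gamma = 0.3
X, y = X[keep], np.sign(margins[keep])

w, mistakes = np.zeros(2), 0
for x, label in zip(X, y):
    if label * (w @ x) <= 0:                   # mistake: update toward the label
        w += label * x
        mistakes += 1
print(mistakes, "<=", (1 / 0.3) ** 2)          # Novikoff bound: (R/gamma)^2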