Geometry and 5d N=1 QFTs

Lakshya Bhardwaj
Harvard University
March 30, 2020
I will explain how a geometric theory built upon the theory of complex surfaces can be used to understand a wide variety of phenomena in five-dimensional supersymmetric theories, including the following:
Classification of 5d superconformal field theories (SCFTs)
Enhanced flavor symmetries of 5d SCFTs
5d N=1 gauge theory descriptions of 5d and 6d SCFTs
Dualities between 5d N=1 gauge theories
T-dualities between 6d N=(1,0) little string theories

Fragmentation pseudo-metrics and Lagrangian submanifolds

Octav Cornea
Université de Montréal
March 27, 2020
The purpose of the talk is to discuss a class of pseudo-metrics that can be defined on the set of objects of a triangulated category whose morphisms are endowed with a notion of weight. When the objects are Lagrangian submanifolds (possibly immersed), there are some natural ways to define such pseudo-metrics, and, if the class of Lagrangian submanifolds is unobstructed, these pseudo-metrics are non-degenerate and extend the Hofer distance in a natural way.
The talk is based on joint work with P. Biran and with E. Shelukhin.

Solving Random Matrix Models with Positivity

Henry Lin
Princeton University
March 27, 2020
A new approach to solving random matrix models directly in the large N limit is developed. First, a set of numerical values for some low-point correlation functions is guessed. The large N loop equations are then used to generate values of higher-point correlation functions based on this guess. One then tests whether these higher-point functions are consistent with positivity requirements, e.g., tr M^{2k} > 0. If not, the guessed values are systematically ruled out.
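The guess-generate-test loop described above can be sketched in a few lines for the simplest setting, a one-matrix model. The potential V(M) = M^2/2 + g M^4/4, the function names, and the truncation depth below are illustrative choices of mine, not details from the talk, and the positivity check here is only the simplest one (positive even moments; the full method also imposes positivity of Hankel matrices of moments).

```python
# Toy sketch of the positivity bootstrap for a one-matrix model with
# potential V(M) = M^2/2 + g M^4/4 (an illustrative choice; the talk's
# models and conventions may differ).  The planar loop equations relate
# the even moments e_n = <(1/N) tr M^{2n}> by
#     e_{n+1} = ( sum_{i=0}^{n-1} e_i e_{n-1-i} - e_n ) / g,   e_0 = 1,
# so a single guessed input e_1 determines all higher moments.

def generate_moments(e1, g, n_max):
    """Generate even planar moments e_0..e_{n_max} from the guess e1."""
    e = [1.0, e1]
    for n in range(1, n_max):
        convolution = sum(e[i] * e[n - 1 - i] for i in range(n))
        e.append((convolution - e[n]) / g)
    return e

def passes_positivity(moments):
    """Minimal positivity check: every moment tr M^{2k} must be positive.
    (The full method also requires Hankel matrices of moments to be PSD.)"""
    return all(m > 0 for m in moments)
```

Scanning the guess e_1 and ruling out values that fail positivity at some depth carves out an allowed window, which shrinks as the truncation depth and the strength of the positivity constraints increase.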

Margins, perceptrons, and deep networks

Matus Telgarsky
University of Illinois
March 26, 2020
This talk surveys the role of margins in the analysis of deep networks. As a concrete highlight, it sketches a perceptron-based analysis establishing that shallow ReLU networks can achieve small test error even when they are quite narrow, sometimes even logarithmic in the sample size and inverse target error. The analysis and bounds depend on a certain nonlinear margin quantity due to Nitanda and Suzuki, and can lead to tight upper and lower sample complexity bounds.

Joint work with Ziwei Ji.
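As background for the margin-based viewpoint, the classical linear case can be demonstrated in a few lines: on linearly separable data with margin gamma and radius R, the perceptron makes at most (R/gamma)^2 mistakes. The dataset and comparator below are my own toy construction; the talk's actual analysis concerns a nonlinear margin quantity for shallow ReLU networks, which this sketch does not reproduce.

```python
# Classical warm-up for the margin viewpoint: on linearly separable data
# with margin gamma = min_i y_i <w*, x_i> / ||w*|| and radius
# R = max_i ||x_i||, the perceptron makes at most (R/gamma)^2 mistakes.
# (The talk lifts this style of argument to shallow ReLU networks via a
# nonlinear margin; this toy only illustrates the linear case.)
import math

# Separable toy data: label = sign of the first coordinate.
# The comparator w* = (1, 0) attains margin gamma = 1 on this data.
data = [((2.0, 1.0), 1), ((1.0, -1.0), 1),
        ((-2.0, 1.0), -1), ((-1.0, -2.0), -1)]

def perceptron(data, max_passes=100):
    """Cycle through the data, updating on each mistake, until a clean pass."""
    w = [0.0, 0.0]
    mistakes = 0
    for _ in range(max_passes):
        clean = True
        for x, y in data:
            if y * (w[0] * x[0] + w[1] * x[1]) <= 0:   # mistake: update
                w = [w[0] + y * x[0], w[1] + y * x[1]]
                mistakes += 1
                clean = False
        if clean:
            break
    return w, mistakes

R = max(math.hypot(*x) for x, _ in data)   # here R = sqrt(5)
gamma = 1.0                                # margin of w* = (1, 0)
```

Running `perceptron(data)` converges after at most (R/gamma)^2 = 5 mistakes to a separator that classifies all four points correctly.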

High dimensional expanders - Part 2

Irit Dinur
Weizmann Institute of Science; Visiting Professor, School of Mathematics
March 24, 2020
In this talk I will describe the notion of "agreement tests" that are motivated by PCPs but stand alone as a combinatorial property-testing question. I will show that high dimensional expanders support agreement tests, thereby derandomizing direct product tests in a very strong way.

How to See Everything in the Entanglement Wedge

Adam Levine
Member, School of Natural Sciences, Institute for Advanced Study
March 20, 2020
We will describe work in progress in which we argue that a generalization of the procedure developed by Gao-Jafferis-Wall can allow one to see the entirety of the entanglement wedge. Gao-Jafferis-Wall demonstrated that one can see excitations behind the horizon by deforming the boundary Hamiltonian using a non-local operator. We will argue in a simple class of examples that deforming the boundary Hamiltonian by a specific modular Hamiltonian can allow one to see (almost) everything in the entanglement wedge.

Sharp Thresholds and Extremal Combinatorics

Dor Minzer
Member, Institute for Advanced Study
March 17, 2020
Consider the p-biased distribution over {0,1}^n, in which each coordinate is independently sampled according to a p-biased bit. A sharp-threshold result studies the behavior of Boolean functions over the hypercube under different p-biased measures, and in particular whether the function experiences a phase transition between two close p's. While the theory of sharp thresholds is well understood for p's that are bounded away from 0 and 1, it is much less so for values of p that are close to 0 or 1.
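The textbook example of a sharp threshold is the Majority function: its p-biased measure jumps from near 0 to near 1 inside a window of width O(1/sqrt(n)) around p = 1/2. The small exact computation below (my own illustration, not from the talk) makes this concrete.

```python
# Sharp-threshold illustration: for Majority on n bits (n odd) under the
# p-biased measure, mu_p[Maj = 1] = Pr[Binomial(n, p) > n/2], which
# transitions from near 0 to near 1 in a window of width O(1/sqrt(n))
# around the critical point p = 1/2.
from math import comb

def p_biased_majority(n, p):
    """Exact value of mu_p[Majority = 1] for odd n."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))
```

For n = 101 the measure is exactly 1/2 at p = 1/2 (by symmetry), already below 0.05 at p = 0.4, and above 0.95 at p = 0.6.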

Feature purification: How adversarial training can perform robust deep learning

Yuanzhi Li
Carnegie Mellon University
March 16, 2020
Why can deep learning models, trained on many machine learning tasks, obtain nearly perfect predictions on unseen data sampled from the same distribution, yet be extremely vulnerable to small perturbations of the input? How can adversarial training improve the robustness of neural networks to such perturbations? In this work, we developed a new principle called "feature purification".
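The vulnerability the abstract starts from is easy to exhibit even for a linear classifier: an input classified correctly with a small margin can be flipped by a tiny worst-case perturbation. The weights, input, and function names below are hypothetical choices of mine for illustration; this is the standard phenomenon, not the talk's feature-purification analysis.

```python
# Toy version of adversarial vulnerability: for a linear score
# f(x) = <w, x>, the worst-case l_inf perturbation of size eps (an
# FGSM-style step) shifts the score by eps * ||w||_1, so any point
# with margin below eps * ||w||_1 can be flipped.

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def fgsm_attack(w, x, eps):
    """Move each coordinate by eps against the sign of its weight,
    which maximally decreases the score under an l_inf budget of eps."""
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

w = [2.0, -1.0, 3.0]     # hypothetical trained weights, ||w||_1 = 6
x = [0.5, 1.2, 0.1]      # clean input: score = 1.0 - 1.2 + 0.3 = 0.1 > 0
```

Here the clean score is +0.1, but an eps = 0.1 perturbation shifts it by -0.6, flipping the prediction while changing no coordinate by more than 0.1.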

Covariant Phase Space with Boundaries

Daniel Harlow
Massachusetts Institute of Technology
March 16, 2020
The Hamiltonian formulation of mechanics has many advantages, but its standard presentation destroys manifest covariance. This can be avoided by using the "covariant phase space formalism" of Iyer and Wald, but until recently this formalism has suffered from several ambiguities related to boundary terms and total derivatives. In this talk I will present a new version of the formalism which incorporates boundary effects from the beginning.
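As a reminder of the structures the talk refines, the standard covariant phase space definitions (my summary of the conventional formalism; the talk's boundary-improved version differs precisely in how boundary terms are handled) are:

```latex
% Varying the Lagrangian density defines the symplectic potential theta:
\delta L = E_a \,\delta\phi^a + d\theta(\phi, \delta\phi)
% The (pre)symplectic current is its antisymmetrized second variation:
\omega(\phi; \delta_1\phi, \delta_2\phi)
  = \delta_1\theta(\phi, \delta_2\phi) - \delta_2\theta(\phi, \delta_1\phi)
% and the (pre)symplectic form on a Cauchy slice Sigma is
\Omega = \int_\Sigma \omega \, .
% The ambiguity theta -> theta + dY (a total derivative) is among those
% that the boundary terms in the improved formalism are meant to fix.
```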

Introduction to high dimensional expanders

Irit Dinur
Weizmann Institute of Science; Visiting Professor, School of Mathematics
March 10, 2020
High dimensional expansion generalizes edge and spectral expansion in graphs to hypergraphs (viewed as higher dimensional simplicial complexes). It is a tool that allows analysis of PCP agreement tests, mixing of Markov chains, and construction of new error correcting codes. My talk will be devoted to proving some nice relations between local and global expansion of these objects.
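As a one-dimensional warm-up for the spectral side of the definition (my own illustration, not material from the talk): a graph is a good spectral expander when the second eigenvalue of its random walk matrix is small, and the complete graph is the extreme case.

```python
# Spectral expansion warm-up: the smaller the second largest eigenvalue
# of a graph's random walk matrix, the better the expander.  The
# complete graph K_n is the extreme case, with second eigenvalue
# -1/(n-1).  (High dimensional expansion asks for analogous spectral
# conditions in all the links of a simplicial complex.)
import numpy as np

def second_eigenvalue(adjacency):
    """Second largest eigenvalue of the degree-normalized walk matrix.
    Assumes a regular graph, so the walk matrix stays symmetric."""
    degrees = adjacency.sum(axis=1)
    walk = adjacency / degrees[:, None]
    eigenvalues = np.linalg.eigvalsh(walk)   # ascending order
    return eigenvalues[-2]

n = 10
K_n = np.ones((n, n)) - np.eye(n)   # adjacency matrix of the complete graph
```

For K_10 this returns -1/9, matching the known spectrum {1, -1/(n-1)} of the complete graph's walk matrix.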