School of Mathematics

Efficient Robot Skill Learning via Grounded Simulation Learning, Imitation Learning from Observation, and Off-Policy Reinforcement Learning

Peter Stone
University of Texas at Austin
July 30, 2020
For autonomous robots to operate in the open, dynamically changing world, they will need to be able to learn a robust set of skills from relatively little experience. This talk begins by introducing Grounded Simulation Learning as a way to bridge the so-called reality gap between simulators and the real world in order to enable transfer learning from simulation to a real robot.
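As a toy illustration of the grounding idea (my own sketch, not the talk's actual algorithm), one can fit a simulator parameter to a handful of real transitions so that planning in the grounded simulator transfers to the real system. The gains, dynamics, and target below are all hypothetical:

```python
import numpy as np

# Toy sim-to-real sketch: real dynamics with an unknown gain, and a
# simulator whose gain parameter we "ground" from real data.
REAL_GAIN = 0.8

def real_step(x, a):
    return x + REAL_GAIN * a

def sim_step(x, a, gain):
    return x + gain * a

# Grounding: least-squares fit of the simulator gain to a few
# observed real transitions (x, a, x').
rng = np.random.default_rng(3)
xs = rng.normal(size=20)
acts = rng.normal(size=20)
nexts = real_step(xs, acts)
gain = float(np.sum(acts * (nexts - xs)) / np.sum(acts * acts))

# Policy improvement in the grounded simulator: choose the action that
# drives the state from x = 0 to a target of 1.0 (exact in this linear toy).
target = 1.0
best_a = target / gain
landed = real_step(0.0, best_a)
```

Because the grounded gain matches the real dynamics, the action computed entirely in simulation lands on the target when executed on the "real" system.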

Generalized Energy-Based Models

Arthur Gretton
University College London
July 28, 2020
I will introduce Generalized Energy-Based Models (GEBMs) for generative modelling. These models combine two trained components: a base distribution (generally an implicit model), which can learn the support of data with low intrinsic dimension in a high dimensional space; and an energy function, to refine the probability mass on the learned support. Both the energy function and base jointly constitute the final model, unlike GANs, which retain only the base distribution (the "generator").
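The base-plus-energy factorization can be sketched in one dimension with self-normalized importance weighting: samples from the base are re-weighted by exp(-E), shifting mass where the energy is low. The Gaussian base and step energy below are my own toy choices, not the models from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical base distribution (the "generator"): a 1-D standard Gaussian.
def base_sample(n):
    return rng.normal(loc=0.0, scale=1.0, size=n)

# Hypothetical energy function: penalizes x < 0, so the refined model
# shifts probability mass toward x > 0.
def energy(x):
    return np.where(x < 0, 4.0, 0.0)

# Self-normalized importance weights exp(-E(x)) implement the
# factorization p(x) proportional to q(x) * exp(-E(x)).
x = base_sample(10_000)
w = np.exp(-energy(x))
w /= w.sum()

base_mean = float(x.mean())          # near 0: the base is symmetric
refined_mean = float(np.sum(w * x))  # positive: energy has refined the mass
```

The base alone is symmetric about zero; the energy-refined model concentrates on the positive half-line, which is the division of labor the abstract describes.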

Pontryagin–Thom for orbifold bordism

John Pardon
Princeton University
July 24, 2020
The classical Pontryagin–Thom isomorphism equates manifold bordism groups with corresponding stable homotopy groups. This construction moreover generalizes to the equivariant context. I will discuss work which establishes a Pontryagin–Thom isomorphism for orbispaces (an orbispace is a "space" which is locally modelled on Y/G for Y a space and G a finite group; examples of orbispaces include orbifolds and moduli spaces of pseudo-holomorphic curves). This involves defining a category of orbispectra and an involution of this category extending Spanier–Whitehead duality.
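For reference, the classical isomorphisms the abstract alludes to identify bordism groups with homotopy groups of Thom spectra, with framed bordism recovering the stable homotopy groups of spheres:

```latex
% Classical Pontryagin–Thom: bordism groups as homotopy groups of Thom spectra.
\Omega_n^{O} \;\cong\; \pi_n(MO), \qquad
\Omega_n^{SO} \;\cong\; \pi_n(MSO), \qquad
\Omega_n^{\mathrm{fr}} \;\cong\; \pi_n^{s} \;=\; \varinjlim_k \pi_{n+k}(S^{k}).
```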

Priors for Semantic Variables

Yoshua Bengio
Université de Montréal
July 23, 2020
Some aspects of the world around us are captured in natural language and refer to high-level semantic variables, which often have a causal role (referring to agents, objects, and actions or intentions). These high-level variables also seem to satisfy very peculiar characteristics which low-level data (like images or sounds) do not share, and it would be good to clarify these characteristics in the form of priors which can guide the design of machine learning systems benefitting from these assumptions.

Graph Nets: The Next Generation

Max Welling
University of Amsterdam
July 21, 2020
In this talk I will introduce our next generation of graph neural networks. GNNs have the property that they are invariant to permutations of the nodes in the graph and to rotations of the graph as a whole. We claim this is unnecessarily restrictive and in this talk we will explore extensions of these GNNs to more flexible equivariant constructions. In particular, Natural Graph Networks for general graphs are globally equivariant under permutations of the nodes but can still be executed through local message passing protocols.
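The permutation property underlying message passing can be checked directly: relabelling the nodes of the input graph relabels the output features in the same way. The single-layer sum-aggregation GNN below is a minimal stand-in of my own, not the Natural Graph Networks from the talk:

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal one-layer message-passing GNN: each node sums its neighbours'
# features and applies a shared linear map W.
def gnn_layer(adj, feats, W):
    return (adj @ feats) @ W

n, d = 5, 3
adj = rng.integers(0, 2, size=(n, n))
adj = np.triu(adj, 1)
adj = adj + adj.T                    # symmetric adjacency, no self-loops
feats = rng.normal(size=(n, d))
W = rng.normal(size=(d, d))

perm = rng.permutation(n)
P = np.eye(n)[perm]                  # permutation matrix

out = gnn_layer(adj, feats, W)
out_perm = gnn_layer(P @ adj @ P.T, P @ feats, W)

# Permutation equivariance: permuting the input nodes permutes the output rows.
equivariant = np.allclose(out_perm, P @ out)
```

The identity holds because (PAPᵀ)(Ph)W = P(AhW): the shared weight matrix never sees the node ordering, which is exactly the restriction the talk proposes to relax with more flexible equivariant constructions.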

Knot Floer homology and bordered algebras

Peter Ozsváth
Princeton University
July 10, 2020
Knot Floer homology is an invariant for knots in three-space, defined as a Lagrangian Floer homology in a symmetric product. It has the form of a bigraded vector space, encoding topological information about the knot. I will discuss an algebraic approach to computing knot Floer homology, and a corresponding version for links, based on decomposing knot diagrams.

This is joint work with Zoltan Szabo, building on earlier joint work (bordered Heegaard Floer homology) with Robert Lipshitz and Dylan Thurston.
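To fix notation for the invariant being computed: knot Floer homology is a bigraded vector space whose graded Euler characteristic recovers the Alexander polynomial, and which detects the Seifert genus:

```latex
\widehat{\mathit{HFK}}(K) \;=\; \bigoplus_{m,a} \widehat{\mathit{HFK}}_m(K,a),
\qquad
\sum_{m,a} (-1)^{m}\, t^{a}\, \dim \widehat{\mathit{HFK}}_m(K,a) \;=\; \Delta_K(t),
\qquad
g(K) \;=\; \max\{\, a : \widehat{\mathit{HFK}}_*(K,a) \neq 0 \,\}.
```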

Role of Interaction in Competitive Optimization

Anima Anandkumar
California Institute of Technology
July 9, 2020
Competitive optimization is needed for many ML problems such as training GANs, robust reinforcement learning, and adversarial learning. Standard approaches to competitive optimization involve each agent independently optimizing their objective functions using SGD or other gradient-based approaches. However, they suffer from oscillations and instability, since the optimization does not account for interaction among the players. We introduce competitive gradient descent (CGD), which explicitly incorporates interaction by solving for the Nash equilibrium of a local game.
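The oscillation problem is visible already in the scalar zero-sum bilinear game f(x, y) = xy (player 1 minimizes, player 2 maximizes). Below, simultaneous gradient descent/ascent spirals outward, while the CGD-style update — which I have specialized by hand to this scalar game, so the closed form is my derivation rather than code from the talk — contracts to the equilibrium at the origin:

```python
ETA = 0.2  # step size (hypothetical choice)

def sim_gd(x, y, steps):
    # Simultaneous gradient descent/ascent on f(x, y) = x*y:
    # each player ignores the other's simultaneous move.
    for _ in range(steps):
        x, y = x - ETA * y, y + ETA * x
    return x, y

def cgd(x, y, steps):
    # CGD-style update for the same game: each player best-responds to the
    # other's anticipated move in a local bilinear game, which for scalar
    # f(x, y) = x*y reduces to the closed form below.
    c = ETA / (1 + ETA**2)
    for _ in range(steps):
        x, y = x - c * (y + ETA * x), y + c * (x - ETA * y)
    return x, y

gd_x, gd_y = sim_gd(1.0, 1.0, 200)
cg_x, cg_y = cgd(1.0, 1.0, 200)
gd_norm = (gd_x**2 + gd_y**2) ** 0.5   # grows: oscillating divergence
cg_norm = (cg_x**2 + cg_y**2) ** 0.5   # shrinks: convergence to (0, 0)
```

Per step, simultaneous GD multiplies the distance to the origin by √(1 + η²) > 1, while the interaction-aware update multiplies it by a factor strictly below 1, which is the stabilizing effect the abstract describes.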

Machine learning-based design (of proteins, small molecules and beyond)

Jennifer Listgarten
University of California, Berkeley
July 7, 2020
Data-driven design is making headway into a number of application areas, including protein, small-molecule, and materials engineering. The design goal is to construct an object with desired properties, such as a protein that binds to a target more tightly than previously observed. To that end, costly experimental measurements are being replaced with calls to a high-capacity regression model trained on labeled data, which can be leveraged in an in silico search for promising design candidates.
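The replace-experiments-with-a-regressor loop can be sketched end to end on synthetic data. Everything below is a toy of my own construction: the "true" linear property stands in for an expensive assay, and plain ridge regression stands in for the high-capacity model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical ground-truth property (e.g. binding strength), available
# in practice only through costly experiments.
theta_true = np.array([1.0, -2.0, 3.0, 0.5])

def true_property(X):
    return X @ theta_true

# Small labeled dataset standing in for experimental measurements.
X_train = rng.uniform(0, 1, size=(50, 4))
y_train = true_property(X_train) + rng.normal(scale=0.01, size=50)

# "High-capacity regression model" kept deliberately simple: ridge
# regression via the regularized normal equations.
lam = 1e-3
w = np.linalg.solve(X_train.T @ X_train + lam * np.eye(4),
                    X_train.T @ y_train)

# In silico search: score a large candidate pool with the cheap surrogate
# instead of running new experiments, and propose the top scorer.
candidates = rng.uniform(0, 1, size=(5000, 4))
best = candidates[np.argmax(candidates @ w)]
```

Because the surrogate is well specified for this toy property, the proposed design scores near the top of the candidate pool under the true property — the failure modes the talk concerns arise precisely when the surrogate is queried far from its training data.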

Infinite staircases and reflexive polygons

Ana Rita Pires
University of Edinburgh
July 3, 2020
A classic result, due to McDuff and Schlenk, asserts that the function that encodes when a four-dimensional symplectic ellipsoid can be embedded into a four-dimensional ball has a remarkable structure: the function has infinitely many corners, determined by the odd-index Fibonacci numbers, that fit together to form an infinite staircase. The work of McDuff and Schlenk has recently led to considerable interest in understanding when the ellipsoid embedding function for other symplectic 4-manifolds is partly described by an infinite staircase.
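In symbols, the McDuff–Schlenk result concerns the ellipsoid embedding function of the ball; on the initial interval its graph is the Fibonacci staircase, accumulating at the fourth power of the golden ratio:

```latex
c(a) \;=\; \inf\{\, \lambda : E(1,a) \hookrightarrow B^4(\lambda) \,\}, \quad a \ge 1.
% On [1, \tau^4], with \tau = (1+\sqrt{5})/2 the golden ratio, c is an infinite
% staircase whose corners are governed by the odd-index Fibonacci numbers
% 1, 2, 5, 13, 34, \dots, accumulating at
\tau^4 \;=\; \frac{7+3\sqrt{5}}{2},
\qquad
c(\tau^4) \;=\; \tau^2 \;=\; \sqrt{\tau^4}.
```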