Statistical Learning Theory for Modern Machine Learning

John Shawe-Taylor
University College London
August 11, 2020
Probably Approximately Correct (PAC) learning has attempted to analyse the generalisation of learning systems within the statistical learning framework. It has been referred to as a ‘worst case’ analysis, but the tools have been extended to analyse cases where benign distributions mean we can still generalise even if worst case bounds suggest we cannot. The talk will cover the PAC-Bayes approach to analysing generalisation that is inspired by Bayesian inference, but leads to a different role for the prior and posterior distributions.
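As a concrete instance of the kind of bound discussed, here is a standard PAC-Bayes inequality (McAllester's form with Maurer's constants, quoted from the literature rather than from the talk). With probability at least 1 - \delta over an i.i.d. sample of size m, simultaneously for all posteriors \rho,

    \[
    \mathbb{E}_{h \sim \rho}[L(h)] \;\le\; \mathbb{E}_{h \sim \rho}[\hat{L}(h)]
    + \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln(2\sqrt{m}/\delta)}{2m}},
    \]

where L and \hat{L} are the true and empirical risks and \pi is a prior fixed before seeing the data. Note the different role of the distributions compared with Bayesian inference: the prior enters only through the KL penalty, and the bound holds for any posterior \rho, not just the Bayesian one.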

A Blueprint of Standardized and Composable Machine Learning

Eric Xing
Carnegie Mellon University
August 6, 2020
In handling a wide range of experiences, from data instances, knowledge, and constraints to rewards, adversaries, and lifelong interplay, across an ever-growing spectrum of tasks, contemporary ML/AI research has produced thousands of models, learning paradigms, and optimization algorithms, not to mention countless approximation heuristics, tuning tricks, and black-box oracles, plus combinations of all of the above.

Nonlinear Independent Component Analysis

Aapo Hyvärinen
University of Helsinki
August 4, 2020
Unsupervised learning, in particular learning general nonlinear representations, is one of the deepest problems in machine learning. Estimating latent quantities in a generative model provides a principled framework and has been used successfully in the linear case, e.g. with independent component analysis (ICA) and sparse coding. However, extending ICA to the nonlinear case has proven extremely difficult: a straightforward extension is unidentifiable, i.e. it is not possible to recover the latent components that actually generated the data.
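To make the contrast concrete (standard formulations, not specific to this talk): linear ICA models the observations as

    \[ x = A s, \]

with mutually independent components s_i, and A is identifiable up to permutation and scaling as long as at most one component is Gaussian. The nonlinear model replaces the mixing matrix by an arbitrary invertible function,

    \[ x = f(s), \]

and independence alone no longer determines the components: one can always construct a different invertible f' and independent s' with x = f'(s'). Identifiability can be restored by adding structure, for example temporal dependencies or an auxiliary variable on which the components depend.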

Efficient Robot Skill Learning via Grounded Simulation Learning, Imitation Learning from Observation, and Off-Policy Reinforcement Learning

Peter Stone
University of Texas at Austin
July 30, 2020
For autonomous robots to operate in the open, dynamically changing world, they will need to be able to learn a robust set of skills from relatively little experience. This talk begins by introducing Grounded Simulation Learning as a way to bridge the so-called reality gap between simulators and the real world in order to enable transfer learning from simulation to a real robot.
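The alternating structure of Grounded Simulation Learning can be sketched as follows; the helper callables and the loop granularity are illustrative assumptions, not details from the talk.

    def grounded_simulation_learning(policy, simulator, collect_real,
                                     ground, improve, n_rounds=5):
        """Alternate between grounding the simulator on real robot data
        and improving the policy in the (cheap) grounded simulator."""
        for _ in range(n_rounds):
            # Roll out the current policy on the physical robot.
            real_data = collect_real(policy)
            # Grounding step: adjust the simulator so that simulated
            # rollouts of this policy better match the real trajectories.
            simulator = ground(simulator, policy, real_data)
            # Learn an improved policy entirely inside the grounded simulator.
            policy = improve(simulator, policy)
        return policy

The point of the loop is that scarce real-robot experience is spent narrowing the reality gap rather than on policy search itself, which is what makes learning from relatively little experience feasible.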

Generalized Energy-Based Models

Arthur Gretton
University College London
July 28, 2020
I will introduce Generalized Energy-Based Models (GEBM) for generative modelling. These models combine two trained components: a base distribution (generally an implicit model), which can learn the support of data with low intrinsic dimension in a high dimensional space; and an energy function, to refine the probability mass on the learned support. Both the energy function and base jointly constitute the final model, unlike GANs, which retain only the base distribution (the "generator").
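Schematically (my paraphrase; the sign convention for the energy may differ from the talk), the combined model is an exponential tilting of the base,

    \[ p_{\theta,E}(x) \;\propto\; b_\theta(x)\, e^{E(x)}, \]

so it is supported exactly where the base distribution b_\theta places mass: the energy E can redistribute probability on the learned low-dimensional support, but cannot extend it.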

Pontryagin–Thom for orbifold bordism

John Pardon
Princeton University
July 24, 2020
The classical Pontryagin–Thom isomorphism equates manifold bordism groups with corresponding stable homotopy groups. This construction moreover generalizes to the equivariant context. I will discuss work which establishes a Pontryagin–Thom isomorphism for orbispaces (an orbispace is a "space" which is locally modelled on Y/G for Y a space and G a finite group; examples of orbispaces include orbifolds and moduli spaces of pseudo-holomorphic curves). This involves defining a category of orbispectra and an involution of this category extending Spanier–Whitehead duality.
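For orientation, the classical statement being generalized is (standard background, not part of the new work): unoriented bordism groups satisfy

    \[ \Omega_n^{O} \;\cong\; \pi_n(MO) \;=\; \operatorname*{colim}_k \, \pi_{n+k}\big(MO(k)\big), \]

where MO(k) is the Thom space of the universal bundle over BO(k); in the framed case the right-hand side becomes the stable homotopy groups of spheres, \Omega_n^{fr} \cong \pi_n^s.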

Many Body Scars as a Group Invariant Sector of Hilbert Space

Kiryl Pakrouski
Princeton University
July 24, 2020
We present a class of Hamiltonians H for which a sector of the Hilbert space invariant under a Lie group G, which is not a symmetry of H, possesses the essential properties of many-body scar states. These include the absence of thermalization and the non-decaying “revivals” of special initial states in time evolution. Some of the scar states found in earlier work may be viewed as special cases of our construction.
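Schematically, and as I understand the construction (details are in the paper), the Hamiltonians have the form

    \[ H \;=\; H_0 \;+\; \sum_A O_A\, T_A, \]

where the T_A are generators of the Lie group G and the O_A are largely arbitrary operators. Every state |\psi\rangle in the G-invariant sector satisfies T_A|\psi\rangle = 0, so that sector evolves under H_0 alone and decouples from the thermalizing bulk; choosing H_0 to act simply on the sector (e.g. with equally spaced eigenvalues) yields the periodic revivals.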

Priors for Semantic Variables

Yoshua Bengio
Université de Montréal
July 23, 2020
Some aspects of the world around us are captured in natural language and refer to high-level semantic variables, which often have a causal role (referring to agents, objects, and actions or intentions). These high-level variables also seem to satisfy very peculiar characteristics which low-level data (like images or sounds) do not share, and it would be good to clarify these characteristics in the form of priors that can guide the design of machine learning systems benefiting from these assumptions.

Graph Nets: The Next Generation

Max Welling
University of Amsterdam
July 21, 2020
In this talk I will introduce our next generation of graph neural networks. GNNs have the property that they are invariant to permutations of the nodes in the graph and to rotations of the graph as a whole. We claim this is unnecessarily restrictive and in this talk we will explore extensions of these GNNs to more flexible equivariant constructions. In particular, Natural Graph Networks for general graphs are globally equivariant under permutations of the nodes but can still be executed through local message passing protocols.
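As a baseline for the terms used here, a minimal permutation-equivariant message-passing layer can be written in a few lines of numpy; this is an illustrative sketch of the generic construction such networks extend, not the Natural Graph Network layer itself.

    import numpy as np

    def message_passing_layer(H, A, W_self, W_neigh):
        """One message-passing step on a graph.
        H: (n, d) node features; A: (n, n) adjacency matrix;
        W_self, W_neigh: (d, k) weight matrices shared by all nodes.
        Equivariance: for any permutation matrix P,
        message_passing_layer(P @ H, P @ A @ P.T, W_self, W_neigh)
        equals P @ message_passing_layer(H, A, W_self, W_neigh)."""
        messages = A @ H @ W_neigh        # sum features over neighbours
        return np.tanh(H @ W_self + messages)

Sharing the same weights across all nodes is what enforces equivariance; Natural Graph Networks relax this by letting the message functions depend on local graph structure while preserving global permutation equivariance.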