From Speech AI to Finance AI and Back

Li Deng
August 18, 2020
This talk will first briefly review how deep learning has disrupted the speech recognition and language processing industries since 2009. Connections will then be drawn between the techniques (deep learning or otherwise) for modeling speech and language and those for modeling financial markets. Similarities and differences between the two fields will be explored. In particular, three technical challenges unique to financial investment are addressed: an extremely low signal-to-noise ratio, extremely strong nonstationarity (with an adversarial nature), and heterogeneous big data.

Enhanced Corrections to the Page Curve near Holographic Entanglement Transitions

Xi Dong
University of California, Santa Barbara
August 17, 2020
I will present enhanced corrections to the entanglement entropy of a subsystem in holographic states. These corrections appear near phase transitions in the entanglement entropy due to competing extremal surfaces, and they are holographic avatars of similar corrections previously found in chaotic energy eigenstates. I will first show explicitly how to find these corrections in chaotic eigenstates by summing over contributions of all bulk saddle point solutions, including those that break the replica symmetry, and by making use of fixed-area states.
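For context, the leading-order holographic entanglement entropy is computed by the Ryu–Takayanagi/HRT prescription, and the transitions in question occur where two extremal surfaces exchange dominance (schematic notation below is ours, not the speaker's):

```latex
S(A) \;=\; \min_{\gamma_A} \frac{\operatorname{Area}(\gamma_A)}{4 G_N}
```

Near a transition, two candidate surfaces $\gamma_1, \gamma_2$ have nearly equal areas, and the naive minimum is smoothed out; the enhanced corrections discussed in the talk govern this smoothing.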

Latent State Recovery in Reinforcement Learning

John Langford
Microsoft Research
August 13, 2020
There are three core orthogonal problems in reinforcement learning: (1) crediting actions, (2) generalizing across rich observations, and (3) exploring to discover the information necessary for learning. Good solutions to pairs of these problems are fairly well known at this point, but solutions for all three are just now being discovered. I’ll discuss several such results and dive into details on a few of them.

Statistical Learning Theory for Modern Machine Learning

John Shawe-Taylor
University College London
August 11, 2020
Probably Approximately Correct (PAC) learning has attempted to analyse the generalisation of learning systems within the statistical learning framework. It has been referred to as a ‘worst case’ analysis, but the tools have been extended to analyse cases where benign distributions mean we can still generalise even if worst case bounds suggest we cannot. The talk will cover the PAC-Bayes approach to analysing generalisation that is inspired by Bayesian inference, but leads to a different role for the prior and posterior distributions.
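For orientation, a typical PAC-Bayes bound (shown here in a common McAllester-style form; exact constants vary across the literature) relates a posterior $Q$ over hypotheses to a data-independent prior $P$ via a KL term:

```latex
\mathbb{E}_{h\sim Q}\!\left[R(h)\right]
\;\le\;
\mathbb{E}_{h\sim Q}\!\left[\hat{R}_S(h)\right]
\;+\;
\sqrt{\frac{\mathrm{KL}(Q\,\|\,P) + \ln\frac{2\sqrt{m}}{\delta}}{2m}}
```

with probability at least $1-\delta$ over an i.i.d. sample $S$ of size $m$, where $R$ and $\hat{R}_S$ are the true and empirical risks. The prior $P$ must be fixed before seeing the data while $Q$ may depend on it, which is the "different role" for the prior and posterior that the abstract mentions.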

A Blueprint of Standardized and Composable Machine Learning

Eric Xing
Carnegie Mellon University
August 6, 2020
To handle a wide range of experiences, ranging from data instances, knowledge, and constraints to rewards, adversaries, and lifelong interplay across an ever-growing spectrum of tasks, contemporary ML/AI research has produced thousands of models, learning paradigms, and optimization algorithms, not to mention countless approximation heuristics, tuning tricks, and black-box oracles, plus combinations of all of the above.

Nonlinear Independent Component Analysis

Aapo Hyvärinen
University of Helsinki
August 4, 2020
Unsupervised learning, in particular learning general nonlinear representations, is one of the deepest problems in machine learning. Estimating latent quantities in a generative model provides a principled framework, and has been successfully used in the linear case, e.g. with independent component analysis (ICA) and sparse coding. However, extending ICA to the nonlinear case has proven to be extremely difficult: a straightforward extension is unidentifiable, i.e. it is not possible to recover those latent components that actually generated the data.
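Schematically, the generative model in question is

```latex
\mathbf{x} = f(\mathbf{s}), \qquad p(\mathbf{s}) = \prod_{i=1}^{n} p_i(s_i)
```

with mutually independent components $s_i$. In the linear case $f(\mathbf{s}) = A\mathbf{s}$, the components are identifiable up to permutation and scaling (given non-Gaussianity); for a general nonlinear $f$, infinitely many pairs $(f, \mathbf{s})$ yield the same distribution of $\mathbf{x}$, which is the unidentifiability the abstract refers to.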

Efficient Robot Skill Learning via Grounded Simulation Learning, Imitation Learning from Observation, and Off-Policy Reinforcement Learning

Peter Stone
University of Texas at Austin
July 30, 2020
For autonomous robots to operate in the open, dynamically changing world, they will need to be able to learn a robust set of skills from relatively little experience. This talk begins by introducing Grounded Simulation Learning as a way to bridge the so-called reality gap between simulators and the real world in order to enable transfer learning from simulation to a real robot.

Generalized Energy-Based Models

Arthur Gretton
University College London
July 28, 2020
I will introduce Generalized Energy-Based Models (GEBMs) for generative modelling. These models combine two trained components: a base distribution (generally an implicit model), which can learn the support of data with low intrinsic dimension in a high dimensional space; and an energy function, to refine the probability mass on the learned support. Both the energy function and base jointly constitute the final model, unlike GANs, which retain only the base distribution (the "generator").
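Schematically (the notation here is ours, not necessarily the paper's), if $q$ denotes the base (generator) distribution and $E$ the learned energy, the GEBM reweights the base on its own support:

```latex
p(x) \;\propto\; q(x)\, e^{-E(x)}, \qquad x \in \operatorname{supp}(q)
```

This makes clear why both components matter at sampling time: the generator alone fixes where mass can live, while the energy refines how mass is distributed there.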

Pontryagin–Thom for Orbifold Bordism

John Pardon
Princeton University
July 24, 2020
The classical Pontryagin–Thom isomorphism equates manifold bordism groups with corresponding stable homotopy groups. This construction moreover generalizes to the equivariant context. I will discuss work which establishes a Pontryagin–Thom isomorphism for orbispaces (an orbispace is a "space" which is locally modelled on Y/G for Y a space and G a finite group; examples of orbispaces include orbifolds and moduli spaces of pseudo-holomorphic curves). This involves defining a category of orbispectra and an involution of this category extending Spanier–Whitehead duality.
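For reference, the classical isomorphism in its simplest (framed) form identifies bordism with stable homotopy:

```latex
\Omega_n^{\mathrm{fr}} \;\cong\; \pi_n^{s} \;=\; \varinjlim_{k}\, \pi_{n+k}(S^k)
```

with analogous statements for other tangential structures via Thom spectra (e.g. unoriented bordism $\Omega_n^{O} \cong \pi_n(MO)$). The talk's construction extends this dictionary from manifolds to orbispaces.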