Modularity, Attention and Credit Assignment: Efficient information dispatching in neural computations

Anirudh Goyal
April 16, 2020
Physical processes in the world often have a modular structure, with complexity emerging through combinations of simpler subsystems. Machine learning seeks to uncover and use regularities in the physical world. Although these regularities manifest themselves as statistical dependencies, they are ultimately due to dynamic processes governed by physics. These processes are often independent and interact only sparsely. Despite this, most machine learning models employ the opposite inductive bias, i.e., that all processes interact.

Tradeoffs between Robustness and Accuracy

Percy Liang
April 16, 2020
Standard machine learning produces models that are highly accurate on average but that degrade dramatically when the test distribution deviates from the training distribution. While one can train robust models, this often comes at the expense of standard accuracy (on the training distribution). We study this tradeoff in two settings, adversarial examples and minority groups, creating simple examples which highlight generalization issues as a major source of this tradeoff.

Steps towards more human-like learning in machines

Josh Tenenbaum
April 16, 2020
There are several broad insights we can draw from computational models of human cognition in order to build more human-like forms of machine learning. (1) The brain has a great deal of built-in structure, yet still a tremendous need and potential for learning. Instead of seeing built-in structure and learning as in tension, we should be thinking about how to learn effectively with more and richer forms of structure. (2) The most powerful forms of human knowledge are symbolic and often causal and probabilistic.

Local-global compatibility in the crystalline case

Ana Caraiani
Imperial College
April 16, 2020
Let F be a CM field. Scholze constructed Galois representations associated to classes in the cohomology of locally symmetric spaces for GL_n/F with p-torsion coefficients. These Galois representations are expected to satisfy local-global compatibility at primes above p. Even the precise formulation of this property is subtle in general, and uses Kisin’s potentially semistable deformation rings. However, this property is crucial for proving modularity lifting theorems. I will discuss joint work with J.

Interpretability for Everyone

Been Kim
April 16, 2020
In this talk, I would like to share some of my reflections on the progress made in the field of interpretable machine learning. We will reflect on where we are going as a field, and on what we need to be aware of to make progress. With that perspective, I will then discuss some of my work on 1) sanity checking popular methods and 2) developing more layperson-friendly interpretability methods. I will also share some open theoretical questions that may help us move forward.

Do Simpler Models Exist and How Can We Find Them?

Cynthia Rudin
April 16, 2020
While the trend in machine learning has tended towards more complex hypothesis spaces, it is not clear that this extra complexity is always necessary or helpful for many domains. In particular, models and their predictions are often made easier to understand by adding interpretability constraints. These constraints shrink the hypothesis space; that is, they make the model simpler. Statistical learning theory suggests that generalization may be improved as a result as well. However, adding extra constraints can make optimization (exponentially) harder.

Towards Robust Artificial Intelligence

Pushmeet Kohli
April 15, 2020
Deep learning has led to rapid progress in the field of machine learning and artificial intelligence, yielding dramatically improved solutions to many challenging problems such as image understanding, speech recognition, and control systems. Despite these remarkable successes, researchers have observed some intriguing and troubling aspects of the behaviour of these models. A case in point is the presence of adversarial examples, which make learning-based systems fail in unexpected ways.

A snapshot of few-shot classification

Richard Zemel
April 15, 2020
Few-shot classification, the task of adapting a classifier to unseen classes given a small labeled dataset, is an important step on the path toward human-like machine learning. I will present some of the key advances in this area, and will then focus on the fundamental issue of overfitting in the few-shot scenario. Bayesian methods are well-suited to tackling this issue because they allow practitioners to specify prior beliefs and update those beliefs in light of observed data.
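To make the task concrete, here is a minimal sketch of one common few-shot baseline: nearest-class-mean (prototype-based) classification, where each unseen class is represented by the mean of its few labeled support examples. This is an illustrative toy example with hypothetical data, not the specific method presented in the talk.

```python
# Prototype-based few-shot classification sketch (nearest class mean).
# All data below is hypothetical toy data for illustration.

def prototype(vectors):
    """Mean of a list of equal-length feature vectors (the class prototype)."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(query, prototypes):
    """Assign the query to the class whose prototype is nearest (squared Euclidean)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda c: dist2(query, prototypes[c]))

# Tiny support set: two unseen classes with only two labeled examples each.
support = {
    "cat": [[0.9, 0.1], [1.1, 0.0]],
    "dog": [[0.0, 1.0], [0.2, 0.8]],
}
protos = {c: prototype(vs) for c, vs in support.items()}
print(classify([1.0, 0.2], protos))  # -> cat
```

With so few support examples per class, the prototype estimates are noisy, which is exactly the overfitting concern the abstract raises; a Bayesian treatment would place a prior over class statistics and update it with the support set rather than relying on the raw sample mean.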

Iterative Random Forests (iRF) with applications to genomics and precision medicine

Bin Yu
April 15, 2020
Genomics has revolutionized biology, enabling the interrogation of whole transcriptomes, genome-wide binding sites for proteins, and many other molecular processes. However, individual genomic assays measure elements that interact in vivo as components of larger molecular machines. Understanding how these high-order interactions drive gene expression presents a substantial statistical challenge.