Theoretical Machine Learning

Generalizable Adversarial Robustness to Unforeseen Attacks

Soheil Feizi
University of Maryland
June 23, 2020
Over the last couple of years, substantial progress has been made in enhancing the robustness of models against adversarial attacks. However, two major shortcomings remain: (i) practical defenses are often vulnerable to strong “adaptive” attack algorithms, and (ii) current defenses generalize poorly to “unforeseen” attack threat models (those not used in training).
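For concreteness, the sketch below shows a standard projected gradient descent (PGD) attack under an l-infinity constraint; the model, perturbation budget, and step size are illustrative assumptions rather than the specific attacks or defenses studied in the talk.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Illustrative l_inf PGD attack: maximize the classification loss
    within an eps-ball around the clean input x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Take an ascent step, then project back onto the eps-ball and valid pixel range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()
```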

Latent Stochastic Differential Equations for Irregularly-Sampled Time Series

David Duvenaud
University of Toronto
April 30, 2020
Much real-world data is sampled at irregular intervals, but most time series models require regularly-sampled data. Continuous-time models address this problem, but until now only deterministic (ODE) models or linear-Gaussian models were efficiently trainable with millions of parameters. We construct a scalable algorithm for computing gradients of samples from stochastic differential equations (SDEs), and for gradient-based stochastic variational inference in function space, all with the use of adaptive black-box SDE solvers.
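To make the object concrete, here is a naive sketch that simulates an SDE with a fixed-step Euler-Maruyama scheme and backpropagates through the whole sample path; the method in the talk instead obtains gradients scalably with adaptive black-box solvers, and the learnable Ornstein-Uhlenbeck drift below is purely an assumption for illustration.

```python
import torch

def euler_maruyama(drift, diffusion, z0, ts):
    """Simulate dz = f(z, t) dt + g(z, t) dW with a fixed-step Euler-Maruyama scheme.
    Every step is an ordinary tensor operation, so gradients with respect to
    parameters of `drift` and `diffusion` flow through the whole sample path."""
    z, path = z0, [z0]
    for t0, t1 in zip(ts[:-1], ts[1:]):
        dt = t1 - t0
        dW = torch.randn_like(z) * dt.sqrt()
        z = z + drift(z, t0) * dt + diffusion(z, t0) * dW
        path.append(z)
    return torch.stack(path)

# Illustrative usage: a learnable Ornstein-Uhlenbeck drift (an assumption for this sketch).
theta = torch.tensor(0.5, requires_grad=True)
ts = torch.linspace(0.0, 1.0, 50)
path = euler_maruyama(lambda z, t: -theta * z,
                      lambda z, t: 0.2 * torch.ones_like(z),
                      torch.ones(1), ts)
path[-1].sum().backward()   # gradient of an SDE sample with respect to theta
```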

Deep Generative models and Inverse Problems

Alexandros Dimakis
University of Texas at Austin
April 23, 2020
Modern deep generative models such as GANs, VAEs, and invertible flows are achieving impressive results in modeling high-dimensional distributions, especially of images. We will show how they can be used to solve inverse problems by generalizing compressed sensing beyond sparsity. We will present the general framework, new results, and open problems in this space.
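A minimal sketch of the basic idea, in the spirit of compressed sensing with a generative prior, is given below: recover a signal from linear measurements by searching the latent space of a pretrained generator. The generator `G`, measurement matrix `A`, and optimizer settings are illustrative assumptions, not the specific framework presented in the talk.

```python
import torch

def recover_with_generative_prior(G, A, y, latent_dim, steps=500, lr=0.05):
    """Illustrative recovery from linear measurements y = A x + noise by finding
    a latent code z of a pretrained generator G minimizing ||A G(z) - y||^2."""
    z = torch.zeros(latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((A @ G(z).flatten() - y) ** 2).sum()
        loss.backward()
        opt.step()
    return G(z).detach()
```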

Preference Modeling with Context-Dependent Salient Features

Laura Balzano
University of Michigan; Member, School of Mathematics
February 27, 2020
This talk considers the preference modeling problem and addresses the fact that pairwise comparison data often reflects irrational choice, e.g., intransitivity. Our key observation is that when two items are compared in isolation from other items, the comparison may be based on only a salient subset of their features. Formalizing this idea, I will introduce our proposed “salient feature preference model” and discuss sample complexity results for learning the parameters of our model and the underlying ranking with maximum likelihood estimation.
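As one hedged reading of this idea, the sketch below scores a pairwise comparison using only a subset of features and plugs the score into a logistic (Bradley-Terry-style) likelihood; the salience rule (largest feature differences) and the exact likelihood are illustrative assumptions and may differ from the model in the paper.

```python
import numpy as np

def salient_feature_logit(w, x_i, x_j, k):
    """Score a comparison of items with feature vectors x_i, x_j using only the
    k coordinates where they differ most (an assumed salience rule)."""
    diff = x_i - x_j
    salient = np.argsort(-np.abs(diff))[:k]
    return w[salient] @ diff[salient]          # log-odds that item i beats item j

def pair_log_likelihood(w, x_i, x_j, y, k):
    """Log-likelihood of outcome y (1 if item i wins, 0 otherwise) under a logistic
    link; summing over comparisons and maximizing in w gives the MLE referred to above."""
    logit = salient_feature_logit(w, x_i, x_j, k)
    return y * logit - np.log1p(np.exp(logit))
```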

Compositional inductive biases in human function learning

Samuel J. Gershman
Harvard University
January 14, 2020
This talk presents evidence that humans learn complex functions by harnessing compositionality: complex structure is decomposed into simpler building blocks. I formalize this idea in the framework of Bayesian nonparametric regression using a grammar over Gaussian process kernels, and compare it with other structure learning approaches. People consistently chose compositional (over non-compositional) extrapolations and interpolations of functions.
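The building-block idea can be illustrated with one level of a kernel grammar: combine base Gaussian process kernels by addition and multiplication and score each composition by marginal likelihood. The particular base kernels, search depth, and scoring below are assumptions for the sketch, not the grammar used in the talk.

```python
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, DotProduct, ExpSineSquared

# One level of an illustrative kernel grammar: base kernels combined by + and *.
base_kernels = [RBF(), DotProduct(), ExpSineSquared()]
compositions = base_kernels \
    + [k1 + k2 for k1 in base_kernels for k2 in base_kernels] \
    + [k1 * k2 for k1 in base_kernels for k2 in base_kernels]

def best_composition(X, y):
    """Fit a GP with each candidate kernel and keep the composition with the
    highest log marginal likelihood (a stand-in for a full grammar search)."""
    fits = [GaussianProcessRegressor(kernel=k, normalize_y=True).fit(X, y)
            for k in compositions]
    return max(fits, key=lambda gp: gp.log_marginal_likelihood_value_)
```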

How will we do mathematics in 2030?

Michael R. Douglas
Simons Center for Geometry and Physics, Stony Brook
December 17, 2019
We make the case that over the coming decade, computer-assisted reasoning will become far more widely used in the mathematical sciences. This includes interactive and automatic theorem verification, symbolic algebra, and emerging technologies such as formal knowledge repositories, semantic search, and intelligent textbooks.
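As a small illustration of the interactive theorem verification mentioned above, here is a toy statement checked by the Lean proof assistant; it is only a sketch of what machine-verified mathematics looks like, not an example from the talk.

```lean
-- A toy machine-checked statement in Lean 4: commutativity of addition
-- on the natural numbers, verified by the proof assistant's kernel.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```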

Nonconvex Minimax Optimization

Chi Jin
Princeton University; Member, School of Mathematics
November 20, 2019
Minimax optimization, especially in its general nonconvex formulation, has found extensive applications in modern machine learning, in settings such as generative adversarial networks (GANs) and adversarial training. It brings a series of unique challenges beyond those already present in nonconvex minimization. This talk will cover a set of new phenomena, open problems, and recent results in this emerging field.
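One such phenomenon can be seen with the simplest algorithm for min_x max_y f(x, y): simultaneous gradient descent-ascent, sketched below. Even on a toy bilinear objective it spirals away from the equilibrium, which hints at why the minimax setting is harder than pure minimization; the step sizes and example objective are illustrative assumptions, not results from the talk.

```python
import torch

def gradient_descent_ascent(f, x, y, lr=0.05, steps=200):
    """Simultaneous gradient descent-ascent for min_x max_y f(x, y).
    On nonconvex-nonconcave (or even bilinear) problems this simple
    dynamic can cycle or diverge instead of converging."""
    for _ in range(steps):
        gx, gy = torch.autograd.grad(f(x, y), (x, y))
        with torch.no_grad():
            x -= lr * gx   # descend in x
            y += lr * gy   # ascend in y
    return x, y

# Toy bilinear example f(x, y) = x * y, where simultaneous GDA spirals away from (0, 0).
x = torch.tensor(1.0, requires_grad=True)
y = torch.tensor(1.0, requires_grad=True)
x, y = gradient_descent_ascent(lambda x, y: x * y, x, y)
```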