Computer Science and Discrete Mathematics (CSDM)


Recent advances in high dimensional robust statistics

Daniel Kane
University of California, San Diego
December 11, 2017
It is classically understood how to learn the parameters of a Gaussian from independent samples, even in high dimensions. However, standard estimators such as the sample mean are extremely fragile to noise. In particular, a single corrupted sample can distort the sample mean arbitrarily. More generally, we would like to estimate the parameters of a distribution even when a small fraction of the samples are corrupted, potentially adversarially.
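As a small illustration of the fragility described above (a toy baseline, not the estimators from the talk), here is a numpy sketch: one adversarial sample moves the sample mean arbitrarily far, while the coordinate-wise median barely moves.

```python
import numpy as np

rng = np.random.default_rng(0)

# n samples from a standard Gaussian in d dimensions (true mean is 0).
n, d = 1000, 10
samples = rng.standard_normal((n, d))

# Adversarially corrupt a single sample by placing it far away.
corrupted = samples.copy()
corrupted[0] = 1e6 * np.ones(d)

# One outlier drags the sample mean arbitrarily far from the true mean ...
mean_est = corrupted.mean(axis=0)

# ... while the coordinate-wise median stays close to it.
median_est = np.median(corrupted, axis=0)

print(np.linalg.norm(mean_est))    # large
print(np.linalg.norm(median_est))  # near 0
```

The coordinate-wise median is only a baseline: with an $\varepsilon$-fraction of corruptions its error can grow like $\varepsilon\sqrt{d}$ with the dimension, and improving on that dimension dependence is exactly what the recent high-dimensional robust estimators address.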

A practical guide to deep learning

Richard Zemel
University of Toronto; Visitor, School of Mathematics
November 21, 2017
Neural networks have been around for many decades. An important question is what has led to their recent surge in performance and popularity. I will start with an introduction to deep neural networks, covering the terminology and standard approaches to constructing networks. I will focus on the two primary and most successful forms of networks: deep convolutional nets, originally developed for vision problems, and recurrent networks, for speech and language tasks.
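To fix terminology, here is a hypothetical numpy sketch (not code from the talk) of the two primitive operations these architectures are built from: a convolutional filter sliding over an image, and one step of a vanilla recurrent cell updating a hidden state.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 'valid' 2-D convolution (cross-correlation, as used in deep nets):
    slide the kernel over the image and take a dot product at each position."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def rnn_step(h, x, Wh, Wx, b):
    """One step of a vanilla recurrent network: the new hidden state is a
    nonlinear function of the previous state h and the current input x."""
    return np.tanh(Wh @ h + Wx @ x + b)
```

A convolutional net stacks many such filters (with learned kernels) interleaved with nonlinearities and pooling; a recurrent net applies the same `rnn_step` weights at every position of a sequence.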

Learning models: connections between boosting, hard-core distributions, dense models, GAN, and regularity II

Russell Impagliazzo
University of California, San Diego
November 14, 2017

A theme that cuts across many domains of computer science and mathematics is to find simple representations of complex mathematical objects such as graphs, functions, or distributions on data. These representations need to capture how the object interacts with a class of tests, and to approximately determine the outcome of these tests.
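Boosting is the most algorithmic instance of this theme: it builds a simple representation of a complex labeling function as a short weighted vote over a class of weak tests, reweighting toward the examples the current representation gets wrong. A minimal AdaBoost-style sketch, assuming weak learners are threshold "stumps" returning $\pm 1$ (an illustration, not the constructions from the talk):

```python
import numpy as np

def adaboost(X, y, stumps, rounds=10):
    """Tiny AdaBoost: combine weak tests into a weighted-majority model.
    X: inputs; y: labels in {-1, +1}; stumps: candidate tests X -> {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)   # distribution over examples
    model = []                # list of (weight, stump) pairs
    for _ in range(rounds):
        # Pick the stump with smallest weighted error under the current weights.
        errs = [np.sum(w * (h(X) != y)) for h in stumps]
        i = int(np.argmin(errs))
        err = max(errs[i], 1e-12)
        if err >= 0.5:        # no test beats random guessing: stop
            break
        alpha = 0.5 * np.log((1 - err) / err)
        model.append((alpha, stumps[i]))
        # Reweight: misclassified examples gain mass, so later rounds focus on
        # the "hard" part of the distribution.
        w *= np.exp(-alpha * y * stumps[i](X))
        w /= w.sum()
    return model

def predict(model, X):
    """Evaluate the simple representation: a weighted vote of the chosen tests."""
    return np.sign(sum(a * h(X) for a, h in model))
```

The reweighting step is where the connections in the title live: the weights concentrated on persistently hard examples are, in spirit, a hard-core distribution for the class of tests.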

Learning models: connections between boosting, hard-core distributions, dense models, GAN, and regularity I

Russell Impagliazzo
University of California, San Diego
November 13, 2017

A theme that cuts across many domains of computer science and mathematics is to find simple representations of complex mathematical objects such as graphs, functions, or distributions on data. These representations need to capture how the object interacts with a class of tests, and to approximately determine the outcome of these tests.

Language edit distance, $(\min,+)$-matrix multiplication & beyond

Barna Saha
University of Massachusetts, Amherst
November 6, 2017

The language edit distance is a significant generalization of two basic problems in computer science: parsing and string edit distance computation. Given any context-free grammar, it asks for the minimum number of insertions, deletions, and substitutions required to convert a given input string into a valid member of the language. In 1972, Aho and Peterson gave a dynamic programming algorithm that solves this problem in time cubic in the string length. Despite the problem's vast number of applications, there had been no improvement over this cubic running time in over forty years.
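For orientation, the string edit distance special case has the classic quadratic dynamic program below (the Aho–Peterson algorithm instead extends a CYK-style parser over all grammar derivations, which is where the cubic cost comes from, but the table-filling flavor is the same):

```python
def edit_distance(s, t):
    """Classic DP for string edit distance (insertions, deletions,
    substitutions). Language edit distance generalizes this: the target is
    any string generated by a context-free grammar, not one fixed string."""
    m, n = len(s), len(t)
    # dp[i][j] = edit distance between s[:i] and t[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                      # delete all of s[:i]
    for j in range(n + 1):
        dp[0][j] = j                      # insert all of t[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # delete s[i-1]
                           dp[i][j - 1] + 1,        # insert t[j-1]
                           dp[i - 1][j - 1] + cost) # match / substitute
    return dp[m][n]
```

For example, `edit_distance("kitten", "sitting")` is 3 (two substitutions and one insertion).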