Computer Science and Discrete Mathematics (CSDM)
A practical guide to deep learning

Richard Zemel
University of Toronto; Visitor, School of Mathematics
November 21, 2017

Neural networks have been around for many decades. An important question is what has led to their recent surge in performance and popularity. I will start with an introduction to deep neural networks, covering the terminology and standard approaches to constructing networks. I will focus on the two primary, very successful forms of networks: deep convolutional nets, as originally developed for vision problems; and recurrent networks, for speech and language tasks.

Learning models: connections between boosting, hard-core distributions, dense models, GAN, and regularity II

Russell Impagliazzo
University of California, San Diego
November 14, 2017

A theme that cuts across many domains of computer science and mathematics is to find simple representations of complex mathematical objects such as graphs, functions, or distributions on data. These representations need to capture how the object interacts with a class of tests, and to approximately determine the outcome of these tests.

Learning models: connections between boosting, hard-core distributions, dense models, GAN, and regularity I

Russell Impagliazzo
University of California, San Diego
November 13, 2017

A theme that cuts across many domains of computer science and mathematics is to find simple representations of complex mathematical objects such as graphs, functions, or distributions on data. These representations need to capture how the object interacts with a class of tests, and to approximately determine the outcome of these tests.

Language edit distance, $(\min,+)$-matrix multiplication & beyond

Barna Saha
University of Massachusetts, Amherst
November 6, 2017

The language edit distance is a significant generalization of two basic problems in computer science: parsing and string edit distance computation. Given any context-free grammar, it asks for the minimum number of insertions, deletions, and substitutions required to convert a given input string into a valid member of the language. In 1972, Aho and Peterson gave a dynamic programming algorithm that solves this problem in time cubic in the string length. Despite the problem's vast number of applications, for over four decades there was no improvement over this running time.
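As a point of reference, the string edit distance that this problem generalizes admits the classic quadratic dynamic program; language edit distance replaces the single target string with membership in a context-free language, and the Aho–Peterson algorithm extends a similar table over the grammar's nonterminals, giving the cubic bound. A minimal sketch of the special case (the function name `edit_distance` is illustrative, not from the talk):

```python
def edit_distance(s, t):
    """Minimum number of insertions, deletions, and substitutions turning s into t."""
    m, n = len(s), len(t)
    # dp[i][j] = edit distance between the prefixes s[:i] and t[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                  # delete all of s[:i]
    for j in range(n + 1):
        dp[0][j] = j                  # insert all of t[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if s[i - 1] == t[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # delete s[i-1]
                           dp[i][j - 1] + 1,        # insert t[j-1]
                           dp[i - 1][j - 1] + sub)  # match or substitute
    return dp[m][n]
```

For example, `edit_distance("kitten", "sitting")` returns 3 (two substitutions and one insertion).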

Fooling intersections of low-weight halfspaces

Rocco Servedio
Columbia University
October 30, 2017

A weight-$t$ halfspace is a Boolean function $f(x)=\mathrm{sign}(w_1 x_1 + \cdots + w_n x_n - \theta)$ where each $w_i$ is an integer in $\{-t,\dots,t\}$. We give an explicit pseudorandom generator that $\delta$-fools any intersection of $k$ weight-$t$ halfspaces with seed length $\mathrm{poly}(\log n, \log k, t, 1/\delta)$. In particular, our result gives an explicit PRG that fools any intersection of any $\mathrm{quasipoly}(n)$ number of halfspaces of any $\mathrm{polylog}(n)$ weight to any $1/\mathrm{polylog}(n)$ accuracy using seed length $\mathrm{polylog}(n)$.
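To make the objects concrete, here is a minimal Python sketch of evaluating a weight-$t$ halfspace and an intersection of several of them on a Boolean input; the names `halfspace` and `intersection` are illustrative only, and nothing here reflects the paper's PRG construction (the tie $w \cdot x = \theta$ is resolved as accepting, one common convention):

```python
def halfspace(w, theta, x):
    """Evaluate sign(w.x - theta) as a Boolean; w holds integer weights in {-t,...,t}."""
    return sum(wi * xi for wi, xi in zip(w, x)) - theta >= 0

def intersection(halfspaces, x):
    """AND of several halfspaces: x is accepted only if every halfspace accepts it."""
    return all(halfspace(w, theta, x) for w, theta in halfspaces)

# A weight-1 example on {0,1}^3: accept iff x1 + x2 - x3 >= 1 AND x1 >= 1.
hs = [([1, 1, -1], 1), ([1, 0, 0], 1)]
```

With this example, `intersection(hs, [1, 1, 0])` is `True` while `intersection(hs, [0, 1, 0])` is `False` (the second halfspace rejects). A PRG $\delta$-fools such a function $f$ if the acceptance probability of $f$ under the generator's output differs from its acceptance probability under a uniformly random input by at most $\delta$.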