Computer Science and Discrete Mathematics (CSDM)

Quantifying tradeoffs between fairness and accuracy in online learning

Aaron Roth
University of Pennsylvania
January 30, 2017
In this talk, I will discuss our recent efforts to formalize a particular notion of “fairness” in online decision-making problems, and to study its cost in terms of the achievable learning rate of the algorithm. Our focus for most of the talk will be the “contextual bandit” problem, which models the following scenario: every day, applicants from different populations submit loan applications to a lender, who must select a subset of them to receive loans.
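
As a purely illustrative sketch (not the algorithm from the talk, which adds a fairness constraint on top of this setting), the snippet below shows the standard contextual-bandit loop the abstract refers to: each round a context arrives, the learner chooses one arm, and only that arm's reward is observed. The epsilon-greedy rule, the synthetic linear reward model, and all parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_arms, T, eps = 5, 3, 2000, 0.1      # context dim, populations, rounds, exploration rate
theta = rng.normal(size=(n_arms, d))     # hidden per-population reward vectors (synthetic)

# Per-arm ridge-regression state for a simple epsilon-greedy linear bandit.
A = [np.eye(d) for _ in range(n_arms)]   # regularized Gram matrices
b = [np.zeros(d) for _ in range(n_arms)] # response vectors

total = 0.0
for t in range(T):
    x = rng.normal(size=d)               # today's applicant context
    if rng.random() < eps:               # explore a random population
        k = int(rng.integers(n_arms))
    else:                                # exploit current reward estimates
        k = int(np.argmax([np.linalg.solve(A[j], b[j]) @ x for j in range(n_arms)]))
    r = theta[k] @ x + rng.normal(scale=0.1)   # observed repayment outcome, chosen arm only
    A[k] += np.outer(x, x)
    b[k] += r * x
    total += r
print(f"average reward over {T} rounds: {total / T:.3f}")
```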

Active learning with "simple" membership queries

Shachar Lovett
University of California, San Diego
January 23, 2017
An active learning algorithm for a classification problem has access to many unlabelled samples. The algorithm asks for the labels of a small number of carefully chosen samples, so that it can leverage this information to correctly label most of the unlabelled samples. It is motivated by many real-world applications where it is much easier and cheaper to obtain unlabelled data than labelled data. The main question is: can active learning algorithms outperform classic passive learning algorithms?
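
To make the active-versus-passive question concrete, here is a toy sketch (not from the talk) of the textbook 1-D threshold example, where label queries chosen by binary search need exponentially fewer labels than a passive learner reading random labels. The pool, the oracle, and all parameters are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
threshold = rng.random()                 # hidden 1-D threshold concept (synthetic)
pool = np.sort(rng.random(10_000))       # large pool of unlabelled samples

def label(x):                            # the label-query oracle
    return x >= threshold

# Active strategy: binary search for the decision boundary, O(log n) queries.
lo, hi, queries = 0, len(pool) - 1, 0
while lo < hi:
    mid = (lo + hi) // 2
    queries += 1
    if label(pool[mid]):
        hi = mid
    else:
        lo = mid + 1
print(f"{queries} queries locate the boundary near {pool[lo]:.4f} "
      f"(true threshold {threshold:.4f})")
# A passive learner labelling random samples needs on the order of 1/epsilon
# labels for the same accuracy, versus log(1/epsilon) here.
```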

On gradient complexity of measures on the discrete cube

Ronen Eldan
Weizmann Institute of Science
December 12, 2016
The motivating question for this talk is: what does a sparse Erdős–Rényi random graph, conditioned to have twice as many triangles as expected, typically look like? Motivated by this question, Chatterjee and Dembo introduced in 2014 a framework for obtaining Large Deviation Principles (LDPs) for nonlinear functions of Bernoulli random variables (this followed earlier work of Chatterjee and Varadhan, which used graph limit theory to answer this question in the dense regime).
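
As a quick numerical illustration of the conditioning event (not the talk's analysis), one can sample a sparse Erdős–Rényi graph and compare its triangle count to the expected value $\binom{n}{3}p^3$. The parameters below are arbitrary.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(2)
n, p = 300, 0.05                         # sparse Erdos-Renyi parameters (arbitrary)
A = np.triu((rng.random((n, n)) < p).astype(float), 1)
A = A + A.T                              # symmetric adjacency, no self-loops

triangles = np.trace(A @ A @ A) / 6      # each triangle is counted 6 times on the diagonal
expected = comb(n, 3) * p**3
print(f"observed {triangles:.0f} triangles, expected {expected:.1f} "
      f"(ratio {triangles / expected:.2f})")
# The large-deviation question: how unlikely is ratio >= 2, and what does the
# graph conditioned on that event typically look like?
```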

Approximate constraint satisfaction requires sub-exponential size linear programs

Pravesh Kothari
Member, School of Mathematics
December 6, 2016
We show that for constraint satisfaction problems (CSPs), sub-exponential size linear programming relaxations are as powerful as $n^{\Omega(1)}$ rounds of the Sherali-Adams linear programming hierarchy. As a corollary, we obtain sub-exponential size lower bounds for linear programming relaxations that beat random guessing for many CSPs such as MAX-CUT and MAX-3SAT.
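
For context, the "random guessing" baseline the lower bound refers to is easy to exhibit: a uniformly random assignment satisfies each 3-SAT clause with probability $7/8$, since a clause fails only when all three of its literals are false. The sketch below checks this numerically on a synthetic instance; it is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(3)
n_vars, n_clauses = 100, 2000            # synthetic MAX-3SAT instance
clauses = np.array([rng.choice(n_vars, 3, replace=False) for _ in range(n_clauses)])
signs = rng.choice([-1, 1], size=(n_clauses, 3))   # +1 positive literal, -1 negated

def frac_satisfied(assignment):          # assignment is a vector in {-1, +1}^n
    literals = signs * assignment[clauses]          # entry is +1 iff that literal is true
    return np.mean((literals == 1).any(axis=1))    # clause true iff some literal is

# A uniformly random assignment satisfies each clause with probability 7/8.
trials = [frac_satisfied(rng.choice([-1, 1], size=n_vars)) for _ in range(100)]
print(f"random guessing satisfies {np.mean(trials):.4f} of clauses (7/8 = {7/8:.4f})")
```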

On the number of ordinary lines determined by sets in complex space

Shubhangi Saraf
Rutgers University
December 5, 2016
Consider a set of $n$ points in $\mathbb R^d$. The classical theorem of Sylvester-Gallai says that if the points are not all collinear, then there must be a line through exactly two of the points. Let us call such a line an "ordinary line". In a recent result, Green and Tao were able to give optimal linear lower bounds (roughly $n/2$) on the number of ordinary lines determined by $n$ non-collinear points in $\mathbb R^d$. In this talk we will consider the analog over the complex numbers.
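
As a small illustration over the rationals (the talk itself concerns the complex analog), the brute-force sketch below counts the ordinary lines of a planar integer point set using exact arithmetic; the function name and the test set are hypothetical.

```python
from itertools import combinations
from collections import defaultdict
from fractions import Fraction

def ordinary_lines(points):
    """Count lines through exactly two of the given integer points (exact arithmetic)."""
    lines = defaultdict(set)
    for (x1, y1), (x2, y2) in combinations(points, 2):
        a, b = y2 - y1, x1 - x2               # normal vector of the line a*x + b*y = c
        c = a * x1 + b * y1
        if a != 0:                            # normalize so coincident lines share a key
            key = (1, Fraction(b, a), Fraction(c, a))
        else:                                 # horizontal line: b != 0 since points differ
            key = (0, 1, Fraction(c, b))
        lines[key].update([(x1, y1), (x2, y2)])
    return sum(1 for pts in lines.values() if len(pts) == 2)

# Three collinear points plus two others: Sylvester-Gallai guarantees >= 1 ordinary line.
print(ordinary_lines([(0, 0), (1, 0), (2, 0), (0, 1), (1, 2)]))
```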

Stochastic block models and probabilistic reductions

Emmanuel Abbe
Princeton University
November 28, 2016
The stochastic block model (SBM) is a random graph model with planted clusters. It has become popular as a model for unsupervised learning problems and inhomogeneous random graphs, and as a testbed for studying statistical versus computational tradeoffs. This talk gives an overview of the recent developments that establish the thresholds for SBMs, the algorithms that achieve those thresholds, and the techniques (genie reduction, graph splitting, nonbacktracking propagation) that are likely to apply beyond SBMs.
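
As an illustrative sketch (not the talk's algorithms), the snippet below samples a two-community SBM and recovers the planted partition from the second eigenvector of the adjacency matrix. Near the detection threshold this vanilla spectral method breaks down, which is where nonbacktracking methods come in; the parameters here are arbitrary and chosen well above threshold.

```python
import numpy as np

rng = np.random.default_rng(4)
n, a, b = 1000, 12.0, 3.0                # two balanced communities; within/across degrees a, b
labels = np.repeat([1, -1], n // 2)
P = np.where(np.equal.outer(labels, labels), a / n, b / n)
A = np.triu((rng.random((n, n)) < P).astype(float), 1)
A = A + A.T                              # symmetric adjacency matrix

# Vanilla spectral sketch: the second eigenvector of A correlates with the communities.
vals, vecs = np.linalg.eigh(A)           # eigenvalues in ascending order
guess = np.sign(vecs[:, -2])
agree = np.mean(guess == labels)
print(f"overlap with planted partition: {max(agree, 1 - agree):.3f}")
# In this constant-degree scaling, detection is possible iff (a - b)^2 > 2 (a + b);
# near that threshold the adjacency spectrum fails and nonbacktracking propagation is needed.
```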