## Sketching and embedding are equivalent for norms

Alex Andoni

Columbia University

January 31, 2017

Aaron Roth

University of Pennsylvania

January 30, 2017

In this talk, I will discuss our recent efforts to formalize a particular notion of "fairness" in online decision making problems, and to study its cost in terms of the achievable learning rate of the algorithm. Our focus for most of the talk will be on the "contextual bandit" problem, which models the following scenario. Every day, applicants from different populations submit loan applications to a lender, who must select a subset of them to give loans to.

Shachar Lovett

University of California, San Diego

January 24, 2017

Shachar Lovett

University of California, San Diego

January 23, 2017

An active learning algorithm for a classification problem has access to many unlabelled samples. The algorithm asks for the labels of a small number of carefully chosen samples, so that it can leverage this information to correctly label most of the remaining unlabelled samples. It is motivated by many real-world applications, where obtaining unlabelled data is much easier and cheaper than obtaining labelled data. The main question is: can active learning algorithms outperform classic passive learning algorithms?
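A classical toy illustration of this question (my own sketch, not taken from the talk) is learning a one-dimensional threshold classifier: if labels are 0 to the left of an unknown threshold and 1 to the right, binary search over the sorted unlabelled pool recovers the threshold with O(log n) label queries, whereas a passive learner needs on the order of n random labels for the same resolution.

```python
def learn_threshold(points, oracle):
    """Pool-based active learning for a 1-D threshold classifier.

    Assumes labels are 0 strictly left of an unknown threshold and 1 at
    or right of it, and that at least one point in the pool is labelled 1.
    Binary search over the sorted pool finds the smallest point labelled 1
    using O(log n) calls to the labelling oracle.
    """
    pts = sorted(points)
    lo, hi = 0, len(pts) - 1
    queries = 0
    while lo < hi:
        mid = (lo + hi) // 2
        queries += 1
        if oracle(pts[mid]) == 1:
            hi = mid           # threshold is at or left of pts[mid]
        else:
            lo = mid + 1       # threshold is strictly right of pts[mid]
    return pts[lo], queries    # smallest point the oracle labels 1
```

For a pool of 100 points the sketch issues at most 7 label queries, in contrast to the roughly 100 labels a passive learner would consume; this exponential gap is the simplest instance of the speed-up the talk asks about.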

Jordan Ellenberg

University of Wisconsin

January 17, 2017

Pravesh Kothari

Member, School of Mathematics

December 13, 2016

Ronen Eldan

Weizmann Institute of Science

December 12, 2016

The motivating question for this talk is: what does a sparse Erdős–Rényi random graph, conditioned to have twice as many triangles as expected, typically look like? In 2014, motivated by questions of this type, Chatterjee and Dembo introduced a framework for obtaining Large Deviation Principles (LDPs) for nonlinear functions of Bernoulli random variables (this followed an earlier work of Chatterjee and Varadhan, which used graph limit theory to answer this question in the dense regime).

Pravesh Kothari

Member, School of Mathematics

December 6, 2016

We show that for constraint satisfaction problems (CSPs), sub-exponential size linear programming relaxations are as powerful as $n^{\Omega(1)}$-rounds of the Sherali-Adams linear programming hierarchy. As a corollary, we obtain sub-exponential size lower bounds for linear programming relaxations that beat random guessing for many CSPs such as MAX-CUT and MAX-3SAT.

Shubhangi Saraf

Rutgers University

December 5, 2016

Consider a set of $n$ points in $\mathbb R^d$. The classical theorem of Sylvester-Gallai says that, if the points are not all collinear, then there must be a line through exactly two of the points. Let us call such a line an "ordinary line". In a recent result, Green and Tao were able to give optimal linear lower bounds (roughly $n/2$) on the number of ordinary lines determined by $n$ non-collinear points in $\mathbb R^d$. In this talk we will consider the analog over the complex numbers.
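To make the definition concrete, here is a small brute-force sketch (my own illustration, not part of the talk) that counts ordinary lines for points with integer coordinates in the plane: each line is stored in a normalized form, and a line is ordinary exactly when it arises from a single pair of points.

```python
from itertools import combinations
from math import gcd

def count_ordinary_lines(points):
    """Count lines through exactly two of the given integer points in the plane.

    For each pair of points, build the line a*x + b*y = c through them and
    normalize (a, b, c) so that equal lines get the same key. A line through
    k of the points is generated by C(k, 2) pairs, so it is "ordinary"
    (contains exactly two points) iff its key appears exactly once.
    """
    pair_counts = {}
    for (x1, y1), (x2, y2) in combinations(points, 2):
        a, b = y2 - y1, x1 - x2
        c = a * x1 + b * y1
        g = gcd(gcd(abs(a), abs(b)), abs(c)) or 1
        a, b, c = a // g, b // g, c // g
        if (a, b, c) < (-a, -b, -c):       # fix an overall sign
            a, b, c = -a, -b, -c
        pair_counts[(a, b, c)] = pair_counts.get((a, b, c), 0) + 1
    return sum(1 for v in pair_counts.values() if v == 1)
```

For example, the four non-collinear points (0,0), (1,0), (2,0), (0,1) determine three ordinary lines (each through (0,1) and one point of the collinear triple), consistent with the Sylvester-Gallai guarantee of at least one.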

Orit Raz

Member, School of Mathematics

November 29, 2016

In this talk I will review some of the classical (and fundamental) results in the theory of graph rigidity.