Computer Science and Discrete Mathematics (CSDM)


On gradient complexity of measures on the discrete cube

Ronen Eldan
Weizmann Institute of Science
December 12, 2016
The motivating question for this talk is: what does a sparse Erdős–Rényi random graph, conditioned to have twice as many triangles as expected, typically look like? Motivated by this question, Chatterjee and Dembo introduced in 2014 a framework for obtaining large deviation principles (LDPs) for nonlinear functions of Bernoulli random variables (this followed earlier work of Chatterjee–Varadhan, which used graph limit theory to answer the question in the dense regime).
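The quantity being conditioned on can be made concrete with a small simulation sketch (a toy illustration, not from the talk): sample $G(n,p)$ and compare the triangle count to its expectation $\binom{n}{3}p^3$.

```python
import itertools
import random

def sample_gnp(n, p, rng):
    """Adjacency sets of an Erdos-Renyi random graph G(n, p)."""
    adj = [set() for _ in range(n)]
    for u, v in itertools.combinations(range(n), 2):
        if rng.random() < p:
            adj[u].add(v)
            adj[v].add(u)
    return adj

def count_triangles(adj):
    """Number of triangles, each counted once via u < v < w."""
    n = len(adj)
    return sum(1 for u, v, w in itertools.combinations(range(n), 3)
               if v in adj[u] and w in adj[u] and w in adj[v])

rng = random.Random(0)
n, p = 60, 0.1
expected = (n * (n - 1) * (n - 2) / 6) * p ** 3   # C(n,3) * p^3
observed = count_triangles(sample_gnp(n, p, rng))
```

The large-deviation question asks how the graph rearranges itself on the rare event that `observed` is about twice `expected`.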

Approximate constraint satisfaction requires sub-exponential size linear programs

Pravesh Kothari
Member, School of Mathematics
December 6, 2016
We show that for constraint satisfaction problems (CSPs), sub-exponential size linear programming relaxations are as powerful as $n^{\Omega(1)}$ rounds of the Sherali–Adams linear programming hierarchy. As a corollary, we obtain sub-exponential size lower bounds for linear programming relaxations that beat random guessing for many CSPs, such as MAX-CUT and MAX-3SAT.

On the number of ordinary lines determined by sets in complex space

Shubhangi Saraf
Rutgers University
December 5, 2016
Consider a set of $n$ points in $\mathbb R^d$. The classical Sylvester–Gallai theorem says that if the points are not all collinear, then there must be a line through exactly two of the points. Let us call such a line an "ordinary line". In a recent result, Green and Tao gave an optimal linear lower bound (roughly $n/2$) on the number of ordinary lines determined by $n$ non-collinear points in $\mathbb R^d$. In this talk we will consider the analogue over the complex numbers.
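For small point sets, the ordinary-line count can be checked by brute force (a toy illustration with exact integer arithmetic in the plane; not from the talk):

```python
import itertools

def ordinary_lines(points):
    """Count lines through exactly two of the given planar integer points."""
    lines = set()
    for (i, p), (j, q) in itertools.combinations(enumerate(points), 2):
        # All points collinear with p and q (cross-product test, exact for ints).
        members = frozenset(
            k for k, r in enumerate(points)
            if (q[0] - p[0]) * (r[1] - p[1]) == (q[1] - p[1]) * (r[0] - p[0]))
        lines.add(members)
    return sum(1 for line in lines if len(line) == 2)

# Four points, no three collinear: every one of the 6 lines is ordinary.
pts = [(0, 0), (4, 0), (0, 4), (1, 1)]
```

Here `ordinary_lines(pts)` returns 6, matching the fact that any four points in general position determine six ordinary lines.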

Stochastic block models and probabilistic reductions

Emmanuel Abbe
Princeton University
November 28, 2016
The stochastic block model (SBM) is a random graph model with planted clusters. It is popular as a model for unsupervised learning problems and inhomogeneous random graphs, and for studying statistical versus computational tradeoffs. This talk gives an overview of the recent developments that establish the thresholds for SBMs, the algorithms that achieve these thresholds, and the techniques (genie reduction, graph splitting, nonbacktracking propagation) that are likely to apply beyond SBMs.
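The planted-cluster structure is easy to write down; here is a minimal sketch of a symmetric two-community SBM (parameter values are illustrative, not from the talk):

```python
import itertools
import random

def sample_sbm(labels, p, q, rng):
    """Symmetric stochastic block model: edge probability p inside a
    community, q across communities (planted clusters when p > q)."""
    edges = []
    for u, v in itertools.combinations(range(len(labels)), 2):
        prob = p if labels[u] == labels[v] else q
        if rng.random() < prob:
            edges.append((u, v))
    return edges

rng = random.Random(1)
labels = [0] * 50 + [1] * 50          # the planted communities
edges = sample_sbm(labels, 0.2, 0.02, rng)
```

Recovery asks for the labels given only `edges`; the thresholds discussed in the talk say how far apart `p` and `q` must be for this to be possible statistically or efficiently.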

Theory of accelerated methods

Zeyuan Allen-Zhu
Member, School of Mathematics
November 22, 2016

In this talk I will show how to derive the fastest coordinate descent method [1] and the fastest stochastic gradient descent method [2], both from the linear-coupling framework [3]. I will relate them to linear system solving, the conjugate gradient method, and Chebyshev approximation theory, and raise several open questions at the end. No prior knowledge of first-order methods is required.
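As background (a generic sketch, not the linear-coupling derivation from the talk), Nesterov's classical accelerated gradient method for an $L$-smooth convex objective looks like this:

```python
def accelerated_gd(grad, x0, L, steps):
    """Nesterov's accelerated gradient method for an L-smooth convex f.

    Classical momentum form; the linear-coupling view instead combines a
    gradient step with a mirror-descent step to obtain the same 1/k^2 rate.
    """
    x, y, t = list(x0), list(x0), 1.0
    for _ in range(steps):
        x_new = [yi - gi / L for yi, gi in zip(y, grad(y))]   # gradient step
        t_new = (1 + (1 + 4 * t * t) ** 0.5) / 2              # momentum schedule
        y = [xn + (t - 1) / t_new * (xn - xo)                 # extrapolation
             for xn, xo in zip(x_new, x)]
        x, t = x_new, t_new
    return x

# Toy instance: f(x) = 0.5 * (x1^2 + 10 * x2^2), so L = 10.
A = [1.0, 10.0]
grad = lambda x: [a * xi for a, xi in zip(A, x)]
x_min = accelerated_gd(grad, [1.0, 1.0], L=10.0, steps=200)
```

After 200 steps the iterate is close to the minimizer at the origin, consistent with the $O(L\|x_0 - x^*\|^2 / k^2)$ guarantee.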

On the effect of randomness on planted 3-coloring models

Uri Feige
Weizmann Institute of Science
November 21, 2016
The random planted 3-coloring model generates a 3-colorable graph $G$ by first generating a random host graph $H$ of average degree $d$, and then planting in it a random 3-coloring (by giving each vertex a random color and dropping the monochromatic edges). For a sufficiently large constant $c$, Alon and Kahale [SICOMP 1997] presented a spectral algorithm that finds (with high probability) the planted 3-coloring of such graphs whenever $d > c$.
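The generative model just described can be sketched in a few lines (a toy illustration; the parameter values are arbitrary, not from the talk):

```python
import itertools
import random

def planted_3_coloring(n, d, rng):
    """Random planted 3-coloring model: host graph H = G(n, d/n), each
    vertex gets a uniform color in {0, 1, 2}, and monochromatic edges are
    dropped, so the planted coloring is proper by construction."""
    colors = [rng.randrange(3) for _ in range(n)]
    p = d / n
    edges = [(u, v) for u, v in itertools.combinations(range(n), 2)
             if rng.random() < p and colors[u] != colors[v]]
    return edges, colors

rng = random.Random(2)
edges, colors = planted_3_coloring(300, 10, rng)
```

The algorithmic task is to recover `colors` (up to permuting the three color classes) given only `edges`; the Alon–Kahale algorithm does this spectrally.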

Non-malleable extractors for constant depth circuits, and affine functions

Eshan Chattopadhyay
Member, School of Mathematics
November 15, 2016
Seeded and seedless non-malleable extractors are non-trivial generalizations of the more commonly studied seeded and seedless extractors. The original motivation for constructing such non-malleable extractors comes from applications in cryptography (privacy amplification and tamper-resilient cryptography). Interestingly, explicit constructions of non-malleable extractors have also led to many new connections and progress in pseudorandomness.

The mathematics of natural algorithms

Bernard Chazelle
Princeton University
November 14, 2016
I will review some of the recent techniques we've used in our study of natural algorithms. These include Dirichlet series for matrix products, mean-field approximations in opinion dynamics, graph sequence grammars, and tools for renormalizing network-based dynamical systems. If time permits, I will also discuss anti-mixing techniques for self-sustaining iterated learning. The talk will be self-contained and non-technical.