School of Mathematics

Nonlinear dimensionality reduction for faster kernel methods in machine learning.

Christopher Musco
Massachusetts Institute of Technology
February 12, 2018

The Random Fourier Features (RFF) method (Rahimi, Recht, NIPS 2007) is one of the most practically successful techniques for accelerating computationally expensive nonlinear kernel learning methods. By quickly computing a low-rank approximation for any shift-invariant kernel matrix, RFF can serve as a preprocessing step to generically accelerate algorithms for kernel ridge regression, kernel clustering, kernel SVMs, and other benchmark data analysis tools.
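The idea behind RFF can be sketched in a few lines: for a shift-invariant kernel, sample random frequencies from its Fourier transform and map each point to cosine features whose inner products approximate the kernel. The sketch below (illustrative only, not code from the talk; function names are my own) does this for the unit-bandwidth Gaussian kernel, giving the low-rank factorization $K \approx Z Z^\top$.

```python
import numpy as np

def rff_features(X, D, rng):
    """Map X (n x d) to D random Fourier features so that
    z(x) . z(y) approximates the Gaussian kernel
    k(x, y) = exp(-||x - y||^2 / 2)."""
    n, d = X.shape
    # For the unit-bandwidth Gaussian kernel, the Fourier transform is
    # a standard normal, so frequencies are sampled from N(0, I).
    W = rng.standard_normal((d, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))
Z = rff_features(X, D=4000, rng=rng)

# Compare the exact kernel matrix with its rank-D approximation Z Z^T.
K_exact = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
K_approx = Z @ Z.T
err = np.abs(K_exact - K_approx).max()
```

Downstream methods (ridge regression, SVMs, clustering) can then operate on the explicit feature matrix `Z` instead of the full kernel matrix, which is the source of the speedup.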
 

Outlier-Robust Estimation via Sum-of-Squares

Pravesh Kothari
February 6, 2018

We develop efficient algorithms for estimating low-degree moments of unknown distributions in the presence of adversarial outliers. In many cases, the guarantees of our algorithms improve significantly over the best previous ones, obtained in recent works. We also show that the guarantees of our algorithms match information-theoretic lower bounds for the class of distributions we consider. These better guarantees allow us to give improved algorithms for independent component analysis and learning mixtures of Gaussians in the presence of outliers.
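To see why robustness is needed at all, consider the simplest moment, the mean. The toy sketch below (my own illustration; the talk's algorithms are based on sum-of-squares relaxations, which are far beyond a short snippet) shows how an $\epsilon$-fraction of adversarial points can drag the empirical mean arbitrarily, while even a naive trimmed mean resists this particular corruption.

```python
import numpy as np

rng = np.random.default_rng(1)
n, eps = 1000, 0.05
inliers = rng.standard_normal(n)           # true mean 0
outliers = np.full(int(eps * n), 100.0)    # adversary places mass far away
sample = np.concatenate([inliers, outliers])

# The empirical mean (first moment) is pulled far from 0 by the outliers.
naive_mean = sample.mean()

def trimmed_mean(x, eps):
    """Drop the eps-tails on each side before averaging (a classical
    heuristic, not the SoS-based estimator from the talk)."""
    lo, hi = np.quantile(x, [eps, 1.0 - eps])
    return x[(x >= lo) & (x <= hi)].mean()

robust_mean = trimmed_mean(sample, eps)
```

For higher moments and high-dimensional distributions, such simple trimming breaks down, which is where the sum-of-squares machinery comes in.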
 

Concentration inequalities for linear cocycles and their applications to problems in dynamics and mathematical physics

Silvius Klein
Pontifical Catholic University of Rio de Janeiro (PUC-Rio), Brazil
January 31, 2018

Given a measure-preserving dynamical system, a real-valued observable determines a random process (by composing the observable with the iterates of the transformation). An important topic in ergodic theory is the study of the statistical properties of the corresponding sum process.
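The construction can be made concrete with a toy example (mine, not from the talk): the irrational rotation $T(x) = (x + \alpha) \bmod 1$ preserves Lebesgue measure on $[0,1)$, and composing an observable $f$ with the iterates $T^k$ yields the process $f(x), f(Tx), f(T^2x), \dots$ By Birkhoff's ergodic theorem, the time averages of this process converge to the space average $\int_0^1 f$.

```python
import math

def birkhoff_average(f, x0, alpha, n):
    """Average of f along the first n iterates of the rotation
    T(x) = (x + alpha) mod 1, starting from x0."""
    x, total = x0, 0.0
    for _ in range(n):
        total += f(x)
        x = (x + alpha) % 1.0
    return total / n

# Observable with space average 1/2; alpha irrational (golden ratio).
f = lambda x: math.sin(2.0 * math.pi * x) ** 2
avg = birkhoff_average(f, x0=0.1, alpha=(math.sqrt(5) - 1) / 2, n=100000)
```

Concentration inequalities for such sum processes quantify how fast, and with what probability, these time averages stay close to the space average.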

Explicit, Epsilon-Balanced Codes Close to the Gilbert-Varshamov Bound

Amnon Ta-Shma
Tel Aviv University
January 30, 2018

I will show an explicit construction of a binary error-correcting code with relative distance $\frac{1-\epsilon}{2}$ and relative rate $\epsilon^{2+o(1)}$. This comes close to the Gilbert-Varshamov bound, which shows that such codes with rate $\epsilon^2$ exist, and the LP lower bound, which shows that rate $\epsilon^2 \log \frac{1}{\epsilon}$ is necessary. Previous explicit constructions had rate about $\epsilon^3$, and this is the first explicit construction to get this close to the Gilbert-Varshamov bound.
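A quick numeric comparison makes the gap between these rates tangible. The snippet below (my own illustration; the $o(1)$ slack is a nominal choice, and the LP bound is taken in its standard form $O(\epsilon^2 \log \frac{1}{\epsilon})$) evaluates the four rates at a small bias $\epsilon$ and checks their ordering.

```python
import math

eps = 0.01
old_rate = eps ** 3                        # previous explicit constructions
new_rate = eps ** 2.1                      # eps^{2+o(1)}; o(1) = 0.1 is ad hoc
gv_rate  = eps ** 2                        # existential Gilbert-Varshamov rate
lp_rate  = eps ** 2 * math.log(1.0 / eps)  # LP (MRRW-type) upper limit on rate

# The new construction sits between the old explicit rate and the GV bound.
```

At $\epsilon = 0.01$ the new rate $\epsilon^{2.1}$ is within a small factor of the GV rate $\epsilon^2$, while the old explicit rate $\epsilon^3$ is two orders of magnitude smaller.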

This talk will have two parts, on Monday and Tuesday.

Explicit, Epsilon-Balanced Codes Close to the Gilbert-Varshamov Bound

Amnon Ta-Shma
Tel Aviv University
January 29, 2018

I will show an explicit construction of a binary error-correcting code with relative distance $\frac{1-\epsilon}{2}$ and relative rate $\epsilon^{2+o(1)}$. This comes close to the Gilbert-Varshamov bound, which shows that such codes with rate $\epsilon^2$ exist, and the LP lower bound, which shows that rate $\epsilon^2 \log \frac{1}{\epsilon}$ is necessary. Previous explicit constructions had rate about $\epsilon^3$, and this is the first explicit construction to get this close to the Gilbert-Varshamov bound.

This talk will have two parts, on Monday and Tuesday.