Nonlinear dimensionality reduction for faster kernel methods in machine learning.

The Random Fourier Features (RFF) method (Rahimi and Recht, NIPS 2007) is one of the most practically successful techniques for accelerating computationally expensive nonlinear kernel learning methods. By quickly computing a low-rank approximation to the kernel matrix of any shift-invariant kernel, RFF can serve as a preprocessing step that generically accelerates algorithms for kernel ridge regression, kernel clustering, kernel SVMs, and other standard data analysis tools.
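As a point of reference, the following is a minimal NumPy sketch of classical RFF for the Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2)); it illustrates the general construction rather than any code from the work discussed in the talk, and the function name, sigma, and num_features are placeholders. Frequencies are drawn from the kernel's Fourier transform, and paired cosine/sine features form a low-rank factor Z with Z Z^T approximating the kernel matrix.

    import numpy as np

    def rff_features(X, num_features, sigma=1.0, seed=None):
        """Classical RFF for the Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).

        Returns Z of shape (n, 2 * num_features) with Z @ Z.T approximating
        the n x n kernel matrix.
        """
        rng = np.random.default_rng(seed)
        n, d = X.shape
        # Sample frequencies from the kernel's Fourier transform: N(0, I / sigma^2).
        W = rng.normal(scale=1.0 / sigma, size=(d, num_features))
        XW = X @ W
        # Paired cos/sin features give an unbiased, real-valued kernel estimate.
        return np.hstack([np.cos(XW), np.sin(XW)]) / np.sqrt(num_features)

    # Tiny sanity check: compare Z Z^T to the exact kernel on random data.
    X = np.random.default_rng(0).normal(size=(200, 5))
    Z = rff_features(X, num_features=2000, sigma=1.0, seed=1)
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-D2 / 2.0)
    print(np.abs(Z @ Z.T - K).max())  # entrywise error shrinks as num_features grows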

In this talk, I will connect the RFF method to leverage score sampling, a central technique in recent advances on fast randomized methods in numerical linear algebra. This connection yields a better theoretical understanding of RFF and, moreover, a concrete approach for improving the method. By explicitly bounding appropriately defined “Fourier leverage scores” of the kernel function, we obtain a simple modified RFF method that yields both theoretical and empirical improvements in learning applications.
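To make the sampling perspective concrete, here is a generic importance-sampling sketch of the mechanism such a modification exploits: draw frequencies from a heavier-tailed proposal density q instead of the kernel's Fourier transform p, then reweight each feature by sqrt(p(w) / q(w)) so that the kernel estimate stays unbiased while important frequencies are sampled more often. The wider-Gaussian proposal and the proposal_scale parameter below are illustrative stand-ins, not the Fourier-leverage-score distribution developed in the talk.

    import numpy as np

    def reweighted_rff(X, num_features, sigma=1.0, proposal_scale=2.0, seed=None):
        """Importance-sampled Fourier features (generic sketch, not the talk's exact method)."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        s_p = 1.0 / sigma             # std of p = N(0, I / sigma^2), the kernel's Fourier transform
        s_q = proposal_scale / sigma  # std of the (hypothetical) heavier-tailed proposal q
        W = rng.normal(scale=s_q, size=(d, num_features))
        # log p(w) - log q(w) for zero-mean isotropic Gaussians.
        sq_norms = (W ** 2).sum(axis=0)
        log_ratio = d * np.log(s_q / s_p) + 0.5 * sq_norms * (1.0 / s_q**2 - 1.0 / s_p**2)
        weights = np.exp(0.5 * log_ratio)  # sqrt(p(w) / q(w)), applied per frequency
        XW = X @ W
        return np.hstack([np.cos(XW) * weights, np.sin(XW) * weights]) / np.sqrt(num_features)

Because E_q[(p(w)/q(w)) cos(w^T (x - y))] = E_p[cos(w^T (x - y))] = k(x, y), the reweighted features remain an unbiased kernel approximation; choosing the proposal well, which is what leverage score bounds aim to enable, is what reduces the number of features needed.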

Our bounds rely on constructing low-energy approximations for sparse Fourier functions. While these bounds lead to an improved algorithm, we believe they are far from optimal. I will discuss potential improvements through connections with work on noisy interpolation of sparse Fourier signals and low-degree polynomials.

This talk is based on joint work with Haim Avron, Michael Kapralov, Cameron Musco, Ameya Velingker, and Amir Zandieh.

Date

Speakers

Christopher Musco

Affiliation

Massachusetts Institute of Technology