We prove a general time-space lower bound that applies to a large class of learning problems: for every problem in the class, any learning algorithm requires either a memory of quadratic size or an exponential number of samples. As a special case, this gives a new proof of the time-space lower bound for parity learning [R16].
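For concreteness, here is the special case in the notation standard for this problem (not taken from the abstract): in parity learning, an unknown $x \in \{0,1\}^n$ is chosen uniformly at random and the learner receives a stream of samples
\[
(a_t, b_t), \qquad a_t \in \{0,1\}^n \text{ uniform}, \qquad b_t = \langle a_t, x \rangle \bmod 2,
\]
and the lower bound of [R16] states that any algorithm that learns $x$ must use either $\Omega(n^2)$ bits of memory or $2^{\Omega(n)}$ samples.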
In a recent preprint with Sug Woo Shin (https://arxiv.org/abs/1609.04223), I construct Galois representations corresponding to cohomological cuspidal automorphic representations of general symplectic groups over totally real number fields, under the local hypothesis that there is a Steinberg component. In this talk I will explain some parts of this construction that involve the eigenvariety.
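As a rough illustration of the shape of the result (notation ours; see the preprint for the precise statement and hypotheses): for such a representation $\pi$ of $\mathrm{GSp}_{2n}(\mathbb{A}_F)$, with $F$ totally real, one constructs a Galois representation
\[
\rho_{\pi} \colon \mathrm{Gal}(\overline{F}/F) \longrightarrow \mathrm{GSpin}_{2n+1}(\overline{\mathbb{Q}}_{\ell})
\]
valued in the dual group $\mathrm{GSpin}_{2n+1}$ of $\mathrm{GSp}_{2n}$, compatible with the Satake parameters of $\pi$ at unramified places.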
In recent joint work with Durcik, Kovac, and Skreb, we proved variational estimates that provide sharp quantitative norm convergence results for bilinear ergodic averages with respect to two commuting transformations. The proof uses the so-called twisted technology developed in recent years for estimating bi-parameter paraproducts. Another application of the technique is to cancellation results for simplex Hilbert transforms.
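For concreteness (notation not from the abstract): given a measure space $(X,\mu)$ with commuting measure-preserving transformations $S$ and $T$, the bilinear ergodic averages in question are
\[
M_N(f,g)(x) = \frac{1}{N} \sum_{n=1}^{N} f(S^n x)\, g(T^n x),
\]
and a variational estimate bounds, quantitatively, the fluctuations of the sequence $(M_N(f,g))_{N \ge 1}$ in norm, which in particular implies its norm convergence as $N \to \infty$.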