The Information Bottleneck (IB) is an information-theoretic framework for optimal representation learning. It stems from the problem of finding minimal sufficient statistics in supervised learning, but has insightful implications for Deep Learning. In particular, it is the only theory that gives concrete predictions about the different representations in each layer and their potential computational benefit. I will review the theory and its new version, the dual Information Bottleneck, and relate them to the variational Information Bottleneck, which is gaining practical popularity.
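As a brief sketch of the standard IB objective this abstract refers to (not spelled out in the abstract itself): given an input $X$, a label $Y$, and a compressed representation $T$, the IB principle seeks a stochastic encoder $p(t\mid x)$ minimizing

```latex
\min_{p(t \mid x)} \; I(X;T) - \beta\, I(T;Y)
```

where $I(\cdot\,;\cdot)$ is mutual information and the Lagrange multiplier $\beta$ trades off compression of $X$ against preservation of information about $Y$.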
Symplectic geometry, and its close relative contact geometry, are geometries closely tied to complex geometry, smooth topology, and mathematical physics. The h-principle is a general method for constructing smooth geometric objects satisfying various underdetermined conditions. In the symplectic context, h-principles typically yield constructions of surprising exotica, as well as methods for detecting the basic flexible objects. We survey a number of results from the previous decade.
In this talk, I'll discuss the role that Lie algebras play in algebraic topology and motivate the development of a "homotopy coherent" version of the theory. I'll also explain an "equation-free" formulation of the classical theory of Lie algebras, which emerges as a concrete byproduct.