The Information Bottleneck (IB) is an information-theoretic framework for optimal representation learning. It stems from the problem of finding minimal sufficient statistics in supervised learning, but has insightful implications for Deep Learning. In particular, it is the only theory that makes concrete predictions about the representations formed in each layer and their potential computational benefit. I will review the theory and its new version, the dual Information Bottleneck, which is related to the variational Information Bottleneck, a method that is gaining practical popularity.
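The abstract does not state the IB objective itself; as background, a sketch of the standard formulation (due to Tishby, Pereira, and Bialek, and presumably the one underlying the talk) trades off compression of the input X against preservation of information about the target Y through a representation T:

```latex
% Information Bottleneck Lagrangian: choose a stochastic encoder p(t|x)
% that minimizes I(X;T) (compression of the input) while retaining
% I(T;Y) (information relevant for prediction).
\min_{p(t \mid x)} \; \mathcal{L}_{\mathrm{IB}} \;=\; I(X;T) \;-\; \beta \, I(T;Y)
```

Here the Lagrange multiplier β ≥ 0 sets the trade-off: small β favors maximal compression, large β favors predictive accuracy. The notation is the standard one and may differ from that used in the talk.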
Suppose L is a link with n components such that the rank of Kh(L;Z/2) is 2^n. We show that L can be obtained by disjoint unions and connected sums of Hopf links and unknots. This result gives a positive answer to a question asked by Batson-Seed, and generalizes the unlink detection theorem of Khovanov homology due to Hedden-Ni and Batson-Seed. The proof relies on a new excision formula for the singular instanton Floer homology introduced by Kronheimer and Mrowka.
This is joint work with Yi Xie.
Who writes global history? How and for whom? And why now?
Bring your questions and thoughts, and join the conversation.
Professor of History, Free University of Berlin
in conversation with