This talk discusses three aspects of deep learning from a statistical perspective: interpolation, optimality, and sparsity. The first attempts to interpret the double descent phenomenon by precisely characterizing a U-shaped curve within the “over-fitting regime,” while the second focuses on the statistical optimality of neural network classification in a student-teacher framework. The talk concludes by proposing sparsity-induced training of neural networks with statistical guarantees.
In this talk, I'll discuss the role that Lie algebras play in algebraic topology and motivate the development of a "homotopy coherent" version of the theory. I'll also explain an "equation-free" formulation of the classical theory of Lie algebras, which emerges as a concrete byproduct.