## Tightening information-theoretic generalization bounds with data-dependent estimates with an application to SGLD

Daniel Roy

University of Toronto

October 15, 2019

Anima Anandkumar

Caltech

October 15, 2019

Amir Asadi, Dimitris Kalimeris

October 15, 2019

Ben Rossman

University of Toronto

October 14, 2019

Boaz Nadler

Weizmann Institute of Science; Member, School of Mathematics

October 14, 2019

Rich Schwartz

Brown University

October 14, 2019

I'll show a graphical user interface I wrote that explores the problem of inscribing rectangles in Jordan loops. The motivation behind this is the notorious Square Peg Conjecture of Toeplitz, from 1911.

I did not manage to solve this problem, but I did get the result that all but at most 4 points of any Jordan loop are vertices of inscribed rectangles. I will sketch a proof of this result, mostly through visual demos, and I will also explain two other theorems about inscribed rectangles which at least bear a resemblance to theorems in symplectic geometry.

Yash Jhaveri

Member, School of Mathematics

October 14, 2019

In the optimal transport problem, it is well known that the geometry of the target domain plays a crucial role in the regularity of the optimal transport. In the quadratic cost case, for instance, Caffarelli showed that having a convex target domain is essential in guaranteeing the optimal transport's continuity. In this talk, we shall explore, quantitatively, how important convexity is in producing continuous optimal transports.

Lior Alon

Technion

October 11, 2019

Yu Cheng

University of Illinois at Chicago

October 9, 2019

Most people interact with machine learning systems on a daily basis. Such interactions often happen in strategic environments where people have incentives to manipulate the learning algorithms. As machine learning plays a more prominent role in our society, it is important to understand whether existing algorithms are vulnerable to adversarial attacks and, if so, design new algorithms that are robust in these strategic environments.

Boaz Nadler

Weizmann Institute of Science; Member, School of Mathematics

October 8, 2019

In various applications, one is given the advice or predictions of several classifiers of unknown reliability, over multiple questions or queries. This scenario is different from standard supervised learning, where classifier accuracy can be assessed from available labeled training or validation data, and it raises several questions: given only the predictions of several classifiers of unknown accuracies, over a large set of unlabeled test data, is it possible to

a) reliably rank them, and