School of Mathematics

Inscribing Rectangles in Jordan Loops

Rich Schwartz
Brown University
October 14, 2019

I'll demonstrate a graphical user interface I wrote that explores the problem of inscribing rectangles in Jordan loops. The motivation behind this is the notorious Square Peg Conjecture of Toeplitz, from 1911, which asks whether every Jordan loop has an inscribed square.

I did not manage to solve this problem, but I did get the result that all but at most 4 points of any Jordan loop are vertices of inscribed rectangles. I will sketch a proof of this result, mostly through visual demos, and I will also explain two other theorems about inscribed rectangles which at least bear a resemblance to theorems in symplectic geometry.

On the (in)stability of the identity map in optimal transportation

Yash Jhaveri
Member, School of Mathematics
October 14, 2019

In the optimal transport problem, it is well known that the geometry of the target domain plays a crucial role in the regularity of the optimal transport. In the quadratic cost case, for instance, Caffarelli showed that convexity of the target domain is essential in guaranteeing the optimal transport’s continuity. In this talk, we shall explore, quantitatively, how important convexity is in producing continuous optimal transports.
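As a brief reminder of the quadratic-cost setting alluded to above (standard notation, not taken from the abstract): by Brenier's theorem the optimal map is the gradient of a convex potential, and the transport condition becomes a Monge–Ampère equation, whose regularity theory is where convexity of the target enters.

```latex
% Optimal transport of a density f on X to a density g on Y with
% quadratic cost c(x,y) = |x-y|^2/2: Brenier's theorem gives the
% optimal map as T = \nabla u for a convex potential u, and
% pushing f forward to g yields a Monge--Ampère equation:
T = \nabla u, \qquad
\det D^2 u = \frac{f}{g \circ \nabla u} \quad \text{in } X,
\qquad \nabla u(X) = Y .
```

Caffarelli's regularity theory shows that, under suitable bounds on $f$ and $g$, convexity of the target $Y$ yields $u \in C^{1,\alpha}_{\mathrm{loc}}$, and hence continuity of $T = \nabla u$; without convexity of $Y$, the optimal map can be discontinuous.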

Designing Fast and Robust Learning Algorithms

Yu Cheng
University of Illinois at Chicago
October 9, 2019

Most people interact with machine learning systems on a daily basis. Such interactions often happen in strategic environments where people have incentives to manipulate the learning algorithms. As machine learning plays a more prominent role in our society, it is important to understand whether existing algorithms are vulnerable to adversarial attacks and, if so, to design new algorithms that are robust in these strategic environments.


Unsupervised Ensemble Learning

Boaz Nadler
Weizmann Institute of Science; Member, School of Mathematics
October 8, 2019

In various applications, one is given the advice or predictions of several classifiers of unknown reliability over multiple questions or queries. This scenario differs from standard supervised learning, where classifier accuracy can be assessed from available labeled training or validation data, and it raises several questions: given only the predictions of several classifiers of unknown accuracies over a large set of unlabeled test data, is it possible to

a) reliably rank them, and