I'll show a graphical user interface I wrote which explores the problem of inscribing rectangles in Jordan loops. The motivation behind this is the notorious Square Peg Conjecture of Toeplitz, from 1911.
I did not manage to solve this problem, but I did prove that all but at most 4 points of any Jordan loop are vertices of inscribed rectangles. I will sketch a proof of this result, mostly through visual demos, and I will also explain two other theorems about inscribed rectangles which at least bear a resemblance to theorems in symplectic geometry.
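The computational search behind such a GUI rests on an elementary characterization: four points form a rectangle exactly when the two diagonals bisect each other and have equal length. The sketch below (not the speaker's actual code; the function name and tolerance are illustrative) shows this test, which could be run over quadruples of sample points on a Jordan loop.

```python
def is_rectangle(p, q, r, s, tol=1e-9):
    """Check whether p, q, r, s (in this order) are corners of a rectangle.

    Points are (x, y) tuples; p-r and q-s are taken as the diagonals.
    """
    # The diagonals must bisect each other (share a midpoint)...
    mid_pr = ((p[0] + r[0]) / 2, (p[1] + r[1]) / 2)
    mid_qs = ((q[0] + s[0]) / 2, (q[1] + s[1]) / 2)
    if abs(mid_pr[0] - mid_qs[0]) > tol or abs(mid_pr[1] - mid_qs[1]) > tol:
        return False
    # ...and must have equal (squared) length.
    d_pr = (p[0] - r[0]) ** 2 + (p[1] - r[1]) ** 2
    d_qs = (q[0] - s[0]) ** 2 + (q[1] - s[1]) ** 2
    return abs(d_pr - d_qs) <= tol

# A unit square, so (0,0)-(1,1) and (1,0)-(0,1) are the diagonals:
print(is_rectangle((0, 0), (1, 0), (1, 1), (0, 1)))  # True
# A non-rectangular quadrilateral:
print(is_rectangle((0, 0), (2, 0), (1, 1), (0, 1)))  # False
```

In a real search one would sample the loop finely and look for near-rectangles up to a tolerance, which is where the `tol` parameter earns its keep.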
In the optimal transport problem, it is well-known that the geometry of the target domain plays a crucial role in the regularity of the optimal transport map. In the quadratic cost case, for instance, Caffarelli showed that convexity of the target domain is essential for guaranteeing the continuity of the optimal transport map. In this talk, we shall explore, quantitatively, how important convexity is in producing continuous optimal transport maps.
The Story of Trigonometry: Revolutions in the Heavens, and on Earth
This talk is about qualitative properties of the underlying scheme of Rapoport-Zink formal moduli spaces of p-divisible groups, resp. of Shtukas. We single out those cases where the dimension of this underlying scheme is zero, resp. those where it is the maximal possible. The model case for the first alternative is the Lubin-Tate moduli space, and the model case for the second is the Drinfeld moduli space. We exhibit a complete list in both cases.
Most people interact with machine learning systems on a daily basis. Such interactions often happen in strategic environments where people have incentives to manipulate the learning algorithms. As machine learning plays a more prominent role in our society, it is important to understand whether existing algorithms are vulnerable to adversarial attacks and, if so, to design new algorithms that are robust in these strategic environments.
In various applications, one is given the advice or predictions of several classifiers of unknown reliability, over multiple questions or queries. This scenario differs from standard supervised learning, where classifier accuracy can be assessed from available labeled training or validation data, and it raises several questions: given only the predictions of several classifiers of unknown accuracies, over a large set of unlabeled test data, is it possible to
a) reliably rank them, and