Most people interact with machine learning systems on a daily basis. Such interactions often happen in strategic environments where people have incentives to manipulate the learning algorithms. As machine learning plays a more prominent role in our society, it is important to understand whether existing algorithms are vulnerable to adversarial attacks and, if so, to design new algorithms that remain robust in these strategic environments.
In this talk, I will focus on two lines of my work on designing fast and provably robust learning algorithms: making non-convex approaches to matrix completion robust against semi-random adversaries, and designing robust high-dimensional statistical estimators that can be computed almost as efficiently as their non-robust counterparts. Most of the talk is based on joint work with Ilias Diakonikolas and Rong Ge.