In this talk, I will present our recent results on tackling these issues. I will first discuss the generalizability of a class of provable defenses based on randomized smoothing to various Lp and non-Lp attack models. Then, I will present adversarial attacks and defenses for a novel “perceptual” adversarial threat model. Remarkably, the defense against the perceptual threat model generalizes well to many types of unforeseen Lp and non-Lp adversarial attacks.
This talk is based on joint work with Alex Levine, Sahil Singla, Cassidy Laidlaw, Aounon Kumar, and Tom Goldstein.