Generalizable Adversarial Robustness to Unforeseen Attacks


Soheil Feizi
University of Maryland
June 23, 2020
In the last couple of years, substantial progress has been made in enhancing the robustness of models against adversarial attacks. However, two major shortcomings remain: (i) practical defenses are often vulnerable to strong "adaptive" attack algorithms, and (ii) current defenses generalize poorly to "unforeseen" attack threat models (those not used during training).

In this talk, I will present our recent results tackling these issues. I will first discuss the generalizability of a class of provable defenses based on randomized smoothing to various Lp and non-Lp attack models. Then, I will present adversarial attacks and defenses for a novel "perceptual" adversarial threat model. Remarkably, the defense against the perceptual threat model generalizes well to many types of unforeseen Lp and non-Lp adversarial attacks.
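To give a flavor of randomized smoothing, here is a minimal, illustrative sketch: a smoothed classifier returns the class that a base classifier predicts most often under Gaussian perturbations of the input. The `base_classifier` below is a toy stand-in (not the models discussed in the talk), and `sigma` and `n_samples` are assumed illustrative values.

```python
import numpy as np

def base_classifier(x):
    # Toy binary classifier: class 1 if the coordinate sum is positive.
    return int(np.sum(x) > 0)

def smoothed_predict(x, sigma=0.5, n_samples=1000, rng=None):
    # Smoothed prediction: majority vote of the base classifier
    # over Gaussian-perturbed copies of the input x.
    rng = np.random.default_rng(0) if rng is None else rng
    noise = rng.normal(scale=sigma, size=(n_samples, x.shape[0]))
    votes = np.bincount([base_classifier(x + eps) for eps in noise],
                        minlength=2)
    return int(np.argmax(votes))

x = np.array([2.0, 1.0, 0.5])  # well inside the positive-sum region
print(smoothed_predict(x))     # majority vote is stable here: 1
```

The key point is that the smoothed classifier's prediction is stable under small input perturbations, which is what makes certified robustness guarantees (e.g., Lp certificates derived from the vote margins) possible; the full certification machinery is omitted here.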

This talk is based on joint work with Alex Levine, Sahil Singla, Cassidy Laidlaw, Aounon Kumar, and Tom Goldstein.