Calibrated Surrogate Losses for Adversarially Robust Classification

Han Bao, Clay Scott, Masashi Sugiyama
Proceedings of Thirty Third Conference on Learning Theory, PMLR 125:408-451, 2020.

Abstract

Adversarially robust classification seeks a classifier that is insensitive to adversarial perturbations of test patterns. This problem is often formulated via a minimax objective, where the target loss is the worst-case value of the 0-1 loss subject to a bound on the size of perturbation. Recent work has proposed convex surrogates for the adversarial 0-1 loss, in an effort to make optimization more tractable. In this work, we consider the question of which surrogate losses are \emph{calibrated} with respect to the adversarial 0-1 loss, meaning that minimization of the former implies minimization of the latter. We show that no convex surrogate loss is calibrated with respect to the adversarial 0-1 loss when restricted to the class of linear models. We further introduce a class of nonconvex losses and offer necessary and sufficient conditions for losses in this class to be calibrated.
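The two central objects in the abstract can be sketched in symbols (illustrative notation under generic assumptions, not necessarily the paper's exact formalization): for a score function $f$, label $y \in \{-1, +1\}$, and perturbation budget $\epsilon$, the adversarial 0-1 loss takes the worst case over a norm ball, and calibration of a surrogate $\ell$ over a model class $\mathcal{F}$ means that driving the surrogate risk to its optimum also drives the adversarial 0-1 risk to its optimum.

```latex
% Adversarial 0-1 loss: worst-case misclassification over an epsilon-ball
\ell_{01}^{\epsilon}(f; x, y) \;=\; \sup_{\|\delta\| \le \epsilon} \mathbf{1}\!\left[\, y\, f(x + \delta) \le 0 \,\right]

% Calibration of a surrogate loss \ell with respect to \ell_{01}^{\epsilon}
% over a model class \mathcal{F} (e.g., linear models):
\mathbb{E}\!\left[\ell(f_n)\right] \longrightarrow \inf_{f \in \mathcal{F}} \mathbb{E}\!\left[\ell(f)\right]
\;\;\Longrightarrow\;\;
\mathbb{E}\!\left[\ell_{01}^{\epsilon}(f_n)\right] \longrightarrow \inf_{f \in \mathcal{F}} \mathbb{E}\!\left[\ell_{01}^{\epsilon}(f)\right]
```

Under this reading, the paper's negative result says no convex $\ell$ satisfies the implication above when $\mathcal{F}$ is the class of linear models.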

Cite this Paper


BibTeX
@InProceedings{pmlr-v125-bao20a,
  title     = {Calibrated Surrogate Losses for Adversarially Robust Classification},
  author    = {Bao, Han and Scott, Clay and Sugiyama, Masashi},
  pages     = {408--451},
  year      = {2020},
  editor    = {Jacob Abernethy and Shivani Agarwal},
  volume    = {125},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v125/bao20a/bao20a.pdf},
  url       = {http://proceedings.mlr.press/v125/bao20a.html},
  abstract  = {Adversarially robust classification seeks a classifier that is insensitive to adversarial perturbations of test patterns. This problem is often formulated via a minimax objective, where the target loss is the worst-case value of the 0-1 loss subject to a bound on the size of perturbation. Recent work has proposed convex surrogates for the adversarial 0-1 loss, in an effort to make optimization more tractable. In this work, we consider the question of which surrogate losses are \emph{calibrated} with respect to the adversarial 0-1 loss, meaning that minimization of the former implies minimization of the latter. We show that no convex surrogate loss is calibrated with respect to the adversarial 0-1 loss when restricted to the class of linear models. We further introduce a class of nonconvex losses and offer necessary and sufficient conditions for losses in this class to be calibrated.}
}
Endnote
%0 Conference Paper
%T Calibrated Surrogate Losses for Adversarially Robust Classification
%A Han Bao
%A Clay Scott
%A Masashi Sugiyama
%B Proceedings of Thirty Third Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2020
%E Jacob Abernethy
%E Shivani Agarwal
%F pmlr-v125-bao20a
%I PMLR
%J Proceedings of Machine Learning Research
%P 408--451
%U http://proceedings.mlr.press
%V 125
%W PMLR
%X Adversarially robust classification seeks a classifier that is insensitive to adversarial perturbations of test patterns. This problem is often formulated via a minimax objective, where the target loss is the worst-case value of the 0-1 loss subject to a bound on the size of perturbation. Recent work has proposed convex surrogates for the adversarial 0-1 loss, in an effort to make optimization more tractable. In this work, we consider the question of which surrogate losses are \emph{calibrated} with respect to the adversarial 0-1 loss, meaning that minimization of the former implies minimization of the latter. We show that no convex surrogate loss is calibrated with respect to the adversarial 0-1 loss when restricted to the class of linear models. We further introduce a class of nonconvex losses and offer necessary and sufficient conditions for losses in this class to be calibrated.
APA
Bao, H., Scott, C., & Sugiyama, M. (2020). Calibrated Surrogate Losses for Adversarially Robust Classification. Proceedings of Thirty Third Conference on Learning Theory, in PMLR 125:408-451.