Understanding and Mitigating the Tradeoff between Robustness and Accuracy

Aditi Raghunathan, Sang Michael Xie, Fanny Yang, John Duchi, Percy Liang
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:7909-7919, 2020.

Abstract

Adversarial training augments the training set with perturbations to improve the robust error (over worst-case perturbations), but it often leads to an increase in the standard error (on unperturbed test inputs). Previous explanations for this tradeoff rely on the assumption that no predictor in the hypothesis class has low standard and robust error. In this work, we precisely characterize the effect of augmentation on the standard error in linear regression when the optimal linear predictor has zero standard and robust error. In particular, we show that the standard error could increase even when the augmented perturbations have noiseless observations from the optimal linear predictor. We then prove that the recently proposed robust self-training (RST) estimator improves robust error without sacrificing standard error for noiseless linear regression. Empirically, for neural networks, we find that RST with different adversarial training methods improves both standard and robust error for random and adversarial rotations and adversarial l_infty perturbations in CIFAR-10.
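To make the abstract's linear-regression setting concrete, the following is a minimal NumPy sketch, not the authors' code: it uses minimum-norm least squares as the estimator, constructs perturbations orthogonal to the optimal predictor theta_star so that augmented labels stay noiseless (the consistent, noiseless perturbations the abstract describes), and forms an RST-style estimator by pseudo-labeling extra unlabeled inputs with the standard model's predictions. The sample sizes and helper names (fit, perturb) are assumptions of this illustration; the sketch sets up the three estimators but does not reproduce the paper's analysis of when augmentation hurts.

import numpy as np

rng = np.random.default_rng(0)
d, n, m = 40, 8, 200   # dimension > labeled samples: overparameterized regime

# Optimal linear predictor; labels are noiseless, so it has zero standard error.
theta_star = rng.normal(size=d)
theta_star /= np.linalg.norm(theta_star)

X = rng.normal(size=(n, d))
y = X @ theta_star

def fit(A, b):
    # Minimum-norm least-squares solution (the interpolant when A @ theta = b is consistent).
    return np.linalg.pinv(A) @ b

def perturb(x):
    # Illustrative label-preserving perturbation: move orthogonally to theta_star,
    # so the perturbed point's noiseless label equals the original label.
    delta = rng.normal(size=x.shape[-1])
    delta -= (delta @ theta_star) * theta_star
    return x + 0.5 * delta

# 1) Standard estimator: fit the labeled data alone.
theta_std = fit(X, y)

# 2) Augmented estimator: also fit perturbed copies of the training points, with
#    their unchanged, noiseless labels. The paper shows this can increase standard
#    error for certain perturbation geometries, despite the labels being exact.
X_aug = np.vstack([X, np.array([perturb(x) for x in X])])
theta_aug = fit(X_aug, np.concatenate([y, y]))

# 3) RST-style estimator: pseudo-label unlabeled inputs with theta_std and
#    require their perturbed copies to match the same pseudo-labels.
U = rng.normal(size=(m, d))
pseudo = U @ theta_std
X_rst = np.vstack([X, U, np.array([perturb(u) for u in U])])
theta_rst = fit(X_rst, np.concatenate([y, pseudo, pseudo]))

def standard_error(theta):
    # For test inputs x ~ N(0, I), E[(x^T (theta - theta_star))^2] = ||theta - theta_star||^2.
    return float(np.sum((theta - theta_star) ** 2))

for name, th in [("standard", theta_std), ("augmented", theta_aug), ("RST", theta_rst)]:
    print(f"{name:>9}: standard error = {standard_error(th):.4f}")

Intuitively, the pseudo-label constraints in step 3 tie the fitted predictor to theta_std on a large set of inputs, which is the mechanism RST uses to avoid sacrificing standard error, while the perturbed copies push the predictor toward robustness.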

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-raghunathan20a,
  title     = {Understanding and Mitigating the Tradeoff between Robustness and Accuracy},
  author    = {Raghunathan, Aditi and Xie, Sang Michael and Yang, Fanny and Duchi, John and Liang, Percy},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {7909--7919},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/raghunathan20a/raghunathan20a.pdf},
  url       = {https://proceedings.mlr.press/v119/raghunathan20a.html},
  abstract  = {Adversarial training augments the training set with perturbations to improve the robust error (over worst-case perturbations), but it often leads to an increase in the standard error (on unperturbed test inputs). Previous explanations for this tradeoff rely on the assumption that no predictor in the hypothesis class has low standard and robust error. In this work, we precisely characterize the effect of augmentation on the standard error in linear regression when the optimal linear predictor has zero standard and robust error. In particular, we show that the standard error could increase even when the augmented perturbations have noiseless observations from the optimal linear predictor. We then prove that the recently proposed robust self-training (RST) estimator improves robust error without sacrificing standard error for noiseless linear regression. Empirically, for neural networks, we find that RST with different adversarial training methods improves both standard and robust error for random and adversarial rotations and adversarial l_infty perturbations in CIFAR-10.}
}
Endnote
%0 Conference Paper
%T Understanding and Mitigating the Tradeoff between Robustness and Accuracy
%A Aditi Raghunathan
%A Sang Michael Xie
%A Fanny Yang
%A John Duchi
%A Percy Liang
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-raghunathan20a
%I PMLR
%P 7909--7919
%U https://proceedings.mlr.press/v119/raghunathan20a.html
%V 119
%X Adversarial training augments the training set with perturbations to improve the robust error (over worst-case perturbations), but it often leads to an increase in the standard error (on unperturbed test inputs). Previous explanations for this tradeoff rely on the assumption that no predictor in the hypothesis class has low standard and robust error. In this work, we precisely characterize the effect of augmentation on the standard error in linear regression when the optimal linear predictor has zero standard and robust error. In particular, we show that the standard error could increase even when the augmented perturbations have noiseless observations from the optimal linear predictor. We then prove that the recently proposed robust self-training (RST) estimator improves robust error without sacrificing standard error for noiseless linear regression. Empirically, for neural networks, we find that RST with different adversarial training methods improves both standard and robust error for random and adversarial rotations and adversarial l_infty perturbations in CIFAR-10.
APA
Raghunathan, A., Xie, S.M., Yang, F., Duchi, J. & Liang, P. (2020). Understanding and Mitigating the Tradeoff between Robustness and Accuracy. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:7909-7919. Available from https://proceedings.mlr.press/v119/raghunathan20a.html.
