One-vs-the-Rest Loss to Focus on Important Samples in Adversarial Training

Sekitoshi Kanai, Shin’Ya Yamaguchi, Masanori Yamada, Hiroshi Takahashi, Kentaro Ohno, Yasutoshi Ida
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:15669-15695, 2023.

Abstract

This paper proposes a new loss function for adversarial training. Since adversarial training faces difficulties such as the need for high model capacity, focusing on important data points by weighting the cross-entropy loss has attracted much attention. However, these importance-aware methods are vulnerable to sophisticated attacks such as Auto-Attack. This paper experimentally reveals that the cause of their vulnerability is their small margins between the logit for the true label and the other logits. Since neural networks classify data points based on the logits, logit margins should be large enough to prevent attacks from flipping the largest logit. Compared with the cross-entropy loss, importance-aware methods do not increase the logit margins of important samples but instead decrease those of less-important samples. To increase the logit margins of important samples, we propose the switching one-vs-the-rest loss (SOVR), which switches from cross-entropy to one-vs-the-rest loss for important samples that have small logit margins. We prove that, for a simple problem, the one-vs-the-rest loss yields logit margins twice as large as those of the weighted cross-entropy loss. We experimentally confirm that, unlike existing methods, SOVR increases the logit margins of important samples and achieves better robustness against Auto-Attack than importance-aware methods.
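The core quantities in the abstract can be sketched in a few lines: the logit margin (true-label logit minus the largest other logit) and a switching rule that applies a one-vs-the-rest loss when that margin is small. This is a minimal illustrative sketch, not the paper's exact formulation: the sum-of-binary-logistic form of the OVR loss and the `threshold` switching criterion are assumptions for illustration; see the paper for the precise definitions.

```python
import numpy as np

def logit_margin(logits, y):
    """Margin between the true-label logit and the largest other logit."""
    z = np.asarray(logits, dtype=float)
    others = np.delete(z, y)
    return z[y] - others.max()

def cross_entropy(logits, y):
    """Standard softmax cross-entropy, numerically stabilized."""
    z = np.asarray(logits, dtype=float)
    z = z - z.max()
    return -(z[y] - np.log(np.exp(z).sum()))

def ovr_loss(logits, y):
    """One-vs-the-rest loss as a sum of binary logistic losses
    (an assumed standard form, not necessarily the paper's exact one)."""
    z = np.asarray(logits, dtype=float)
    signs = -np.ones_like(z)
    signs[y] = 1.0  # true class treated as the positive class
    return np.sum(np.log1p(np.exp(-signs * z)))

def sovr_loss(logits, y, threshold=1.0):
    """Switch to OVR for 'important' samples with a small logit margin.
    The margin threshold is a hypothetical hyperparameter here."""
    if logit_margin(logits, y) < threshold:
        return ovr_loss(logits, y)
    return cross_entropy(logits, y)
```

For a sample with logits `[1.0, 0.9, 0.0]` and true label `0`, the margin is `0.1`, so the sketch applies the OVR loss; with logits `[5.0, 0.0, 0.0]` the margin is `5.0` and the ordinary cross-entropy is used.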

Cite this Paper

BibTeX
@InProceedings{pmlr-v202-kanai23a,
  title     = {One-vs-the-Rest Loss to Focus on Important Samples in Adversarial Training},
  author    = {Kanai, Sekitoshi and Yamaguchi, Shin'Ya and Yamada, Masanori and Takahashi, Hiroshi and Ohno, Kentaro and Ida, Yasutoshi},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {15669--15695},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/kanai23a/kanai23a.pdf},
  url       = {https://proceedings.mlr.press/v202/kanai23a.html},
  abstract  = {This paper proposes a new loss function for adversarial training. Since adversarial training has difficulties, e.g., necessity of high model capacity, focusing on important data points by weighting cross-entropy loss has attracted much attention. However, they are vulnerable to sophisticated attacks, e.g., Auto-Attack. This paper experimentally reveals that the cause of their vulnerability is their small margins between logits for the true label and the other labels. Since neural networks classify the data points based on the logits, logit margins should be large enough to avoid flipping the largest logit by the attacks. Importance-aware methods do not increase logit margins of important samples but decrease those of less-important samples compared with cross-entropy loss. To increase logit margins of important samples, we propose switching one-vs-the-rest loss (SOVR), which switches from cross-entropy to one-vs-the-rest loss for important samples that have small logit margins. We prove that one-vs-the-rest loss increases logit margins two times larger than the weighted cross-entropy loss for a simple problem. We experimentally confirm that SOVR increases logit margins of important samples unlike existing methods and achieves better robustness against Auto-Attack than importance-aware methods.}
}
Endnote
%0 Conference Paper
%T One-vs-the-Rest Loss to Focus on Important Samples in Adversarial Training
%A Sekitoshi Kanai
%A Shin’Ya Yamaguchi
%A Masanori Yamada
%A Hiroshi Takahashi
%A Kentaro Ohno
%A Yasutoshi Ida
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-kanai23a
%I PMLR
%P 15669--15695
%U https://proceedings.mlr.press/v202/kanai23a.html
%V 202
%X This paper proposes a new loss function for adversarial training. Since adversarial training has difficulties, e.g., necessity of high model capacity, focusing on important data points by weighting cross-entropy loss has attracted much attention. However, they are vulnerable to sophisticated attacks, e.g., Auto-Attack. This paper experimentally reveals that the cause of their vulnerability is their small margins between logits for the true label and the other labels. Since neural networks classify the data points based on the logits, logit margins should be large enough to avoid flipping the largest logit by the attacks. Importance-aware methods do not increase logit margins of important samples but decrease those of less-important samples compared with cross-entropy loss. To increase logit margins of important samples, we propose switching one-vs-the-rest loss (SOVR), which switches from cross-entropy to one-vs-the-rest loss for important samples that have small logit margins. We prove that one-vs-the-rest loss increases logit margins two times larger than the weighted cross-entropy loss for a simple problem. We experimentally confirm that SOVR increases logit margins of important samples unlike existing methods and achieves better robustness against Auto-Attack than importance-aware methods.
APA
Kanai, S., Yamaguchi, S., Yamada, M., Takahashi, H., Ohno, K. & Ida, Y. (2023). One-vs-the-Rest Loss to Focus on Important Samples in Adversarial Training. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:15669-15695. Available from https://proceedings.mlr.press/v202/kanai23a.html.