Quadratic Upper Bound for Boosting Robustness

Euijin You, Hyang-Won Lee
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:72656-72676, 2025.

Abstract

Fast adversarial training (FAT) aims to enhance the robustness of models against adversarial attacks with reduced training time; however, FAT often suffers from compromised robustness due to insufficient exploration of the adversarial space. In this paper, we develop a loss function to mitigate the problem of degraded robustness under FAT. Specifically, we derive a quadratic upper bound (QUB) on the adversarial training (AT) loss function and propose to utilize the bound with existing FAT methods. Our experimental results show that applying the QUB loss to existing methods yields significant improvements in robustness. Furthermore, using various metrics, we demonstrate that this improvement is likely to result from the smoothed loss landscape of the resulting model.
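The core idea behind a quadratic upper bound on a loss can be illustrated with a minimal numeric sketch. This is an assumption-laden toy, not the paper's exact formulation: it supposes the per-example loss is smooth in the model's logits, so the loss at adversarial logits is bounded by a second-order expansion around the clean logits. Softmax cross-entropy is used here because its gradient is 1/2-Lipschitz in the logits (the Hessian diag(p) - pp^T has spectral norm at most 1/2), so the bound holds with K = 1/2. All function names below are illustrative.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

def ce_loss(z, y):
    # cross-entropy of logits z against true class index y
    return -np.log(softmax(z)[y])

def ce_grad(z, y):
    # gradient of cross-entropy w.r.t. the logits: softmax(z) - one_hot(y)
    g = softmax(z)
    g[y] -= 1.0
    return g

def qub_loss(z_clean, z_adv, y, K=0.5):
    # quadratic upper bound on the loss at the adversarial logits,
    # expanded around the clean logits; valid because ce_loss is K-smooth
    g = ce_grad(z_clean, y)
    d = z_adv - z_clean
    return ce_loss(z_clean, y) + g @ d + 0.5 * K * (d @ d)

rng = np.random.default_rng(0)
z = rng.normal(size=10)                       # "clean" logits
z_adv = z + rng.normal(scale=0.5, size=10)    # "adversarial" logits
y = 3
assert qub_loss(z, z_adv, y) >= ce_loss(z_adv, y)  # the bound holds
```

Training on such an upper bound rather than the raw adversarial loss is what makes the surrogate usable with cheap single-step attacks: the quadratic term penalizes large logit displacement, which is consistent with the smoother loss landscape the abstract reports.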

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-you25a,
  title     = {Quadratic Upper Bound for Boosting Robustness},
  author    = {You, Euijin and Lee, Hyang-Won},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {72656--72676},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/you25a/you25a.pdf},
  url       = {https://proceedings.mlr.press/v267/you25a.html},
  abstract  = {Fast adversarial training (FAT) aims to enhance the robustness of models against adversarial attacks with reduced training time, however, FAT often suffers from compromised robustness due to insufficient exploration of adversarial space. In this paper, we develop a loss function to mitigate the problem of degraded robustness under FAT. Specifically, we derive a quadratic upper bound (QUB) on the adversarial training (AT) loss function and propose to utilize the bound with existing FAT methods. Our experimental results show that applying QUB loss to the existing methods yields significant improvement of robustness. Furthermore, using various metrics, we demonstrate that this improvement is likely to result from the smoothened loss landscape of the resulting model.}
}
Endnote
%0 Conference Paper
%T Quadratic Upper Bound for Boosting Robustness
%A Euijin You
%A Hyang-Won Lee
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-you25a
%I PMLR
%P 72656--72676
%U https://proceedings.mlr.press/v267/you25a.html
%V 267
%X Fast adversarial training (FAT) aims to enhance the robustness of models against adversarial attacks with reduced training time, however, FAT often suffers from compromised robustness due to insufficient exploration of adversarial space. In this paper, we develop a loss function to mitigate the problem of degraded robustness under FAT. Specifically, we derive a quadratic upper bound (QUB) on the adversarial training (AT) loss function and propose to utilize the bound with existing FAT methods. Our experimental results show that applying QUB loss to the existing methods yields significant improvement of robustness. Furthermore, using various metrics, we demonstrate that this improvement is likely to result from the smoothened loss landscape of the resulting model.
APA
You, E., & Lee, H. (2025). Quadratic Upper Bound for Boosting Robustness. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:72656-72676. Available from https://proceedings.mlr.press/v267/you25a.html.

Related Material