To be Robust or to be Fair: Towards Fairness in Adversarial Training

Han Xu, Xiaorui Liu, Yaxin Li, Anil Jain, Jiliang Tang
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:11492-11501, 2021.

Abstract

Adversarial training algorithms have been shown to reliably improve machine learning models' robustness against adversarial examples. However, we find that adversarial training algorithms tend to introduce a severe disparity in accuracy and robustness between different groups of data. For instance, a PGD adversarially trained ResNet18 model on CIFAR-10 achieves 93% clean accuracy and 67% PGD l_infty-8 adversarial accuracy on the class "automobile" but only 65% and 17% on the class "cat". This phenomenon occurs even on balanced datasets and does not arise in naturally trained models that use only clean samples. In this work, we empirically and theoretically show that this phenomenon can generally happen under adversarial training algorithms which minimize DNN models' robust errors. Motivated by these findings, we propose a Fair-Robust-Learning (FRL) framework to mitigate this unfairness problem during adversarial defense, and experimental results validate the effectiveness of FRL.
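
To make the measurement behind these numbers concrete, below is a minimal sketch (not the authors' released code) of how one might compute per-class clean and PGD-robust accuracy for an adversarially trained CIFAR-10 classifier. The names model and test_loader, and the PGD hyper-parameters (eps = 8/255, step size 2/255, 10 steps), are illustrative assumptions.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # l_infty PGD: ascend the loss, projecting back into the eps-ball each step.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def per_class_accuracy(model, loader, num_classes=10, device="cuda"):
    # Returns (clean accuracy, robust accuracy) per class, exposing the
    # per-class disparity discussed in the abstract.
    model.eval()
    clean_correct = torch.zeros(num_classes)
    robust_correct = torch.zeros(num_classes)
    total = torch.zeros(num_classes)
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        with torch.no_grad():
            clean_pred = model(x).argmax(1)
        x_adv = pgd_attack(model, x, y)  # gradients are needed for the attack
        with torch.no_grad():
            robust_pred = model(x_adv).argmax(1)
        for c in range(num_classes):
            mask = y == c
            total[c] += mask.sum().item()
            clean_correct[c] += (clean_pred[mask] == c).sum().item()
            robust_correct[c] += (robust_pred[mask] == c).sum().item()
    return clean_correct / total, robust_correct / total

# Usage with assumed objects:
# clean_acc, robust_acc = per_class_accuracy(model, test_loader)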

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-xu21b,
  title     = {To be Robust or to be Fair: Towards Fairness in Adversarial Training},
  author    = {Xu, Han and Liu, Xiaorui and Li, Yaxin and Jain, Anil and Tang, Jiliang},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {11492--11501},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/xu21b/xu21b.pdf},
  url       = {https://proceedings.mlr.press/v139/xu21b.html},
  abstract  = {Adversarial training algorithms have been proved to be reliable to improve machine learning models' robustness against adversarial examples. However, we find that adversarial training algorithms tend to introduce severe disparity of accuracy and robustness between different groups of data. For instance, PGD adversarially trained ResNet18 model on CIFAR-10 has 93% clean accuracy and 67% PGD l_infty-8 adversarial accuracy on the class ``automobile'' but only 65% and 17% on class ``cat''. This phenomenon happens in balanced datasets and does not exist in naturally trained models when only using clean samples. In this work, we empirically and theoretically show that this phenomenon can generally happen under adversarial training algorithms which minimize DNN models' robust errors. Motivated by these findings, we propose a Fair-Robust-Learning (FRL) framework to mitigate this unfairness problem when doing adversarial defenses and experimental results validate the effectiveness of FRL.}
}
Endnote
%0 Conference Paper
%T To be Robust or to be Fair: Towards Fairness in Adversarial Training
%A Han Xu
%A Xiaorui Liu
%A Yaxin Li
%A Anil Jain
%A Jiliang Tang
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-xu21b
%I PMLR
%P 11492--11501
%U https://proceedings.mlr.press/v139/xu21b.html
%V 139
%X Adversarial training algorithms have been proved to be reliable to improve machine learning models' robustness against adversarial examples. However, we find that adversarial training algorithms tend to introduce severe disparity of accuracy and robustness between different groups of data. For instance, PGD adversarially trained ResNet18 model on CIFAR-10 has 93% clean accuracy and 67% PGD l_infty-8 adversarial accuracy on the class "automobile" but only 65% and 17% on class "cat". This phenomenon happens in balanced datasets and does not exist in naturally trained models when only using clean samples. In this work, we empirically and theoretically show that this phenomenon can generally happen under adversarial training algorithms which minimize DNN models' robust errors. Motivated by these findings, we propose a Fair-Robust-Learning (FRL) framework to mitigate this unfairness problem when doing adversarial defenses and experimental results validate the effectiveness of FRL.
APA
Xu, H., Liu, X., Li, Y., Jain, A. & Tang, J. (2021). To be Robust or to be Fair: Towards Fairness in Adversarial Training. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:11492-11501. Available from https://proceedings.mlr.press/v139/xu21b.html.
