Building Robust Ensembles via Margin Boosting

Dinghuai Zhang, Hongyang Zhang, Aaron Courville, Yoshua Bengio, Pradeep Ravikumar, Arun Sai Suggala
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:26669-26692, 2022.

Abstract

In the context of adversarial robustness, a single model usually does not have enough power to defend against all possible adversarial attacks and, as a result, has sub-optimal robustness. Consequently, an emerging line of work has focused on learning an ensemble of neural networks to defend against adversarial attacks. In this work, we take a principled approach to building robust ensembles. We view this problem from the perspective of margin boosting and develop an algorithm for learning an ensemble with maximum margin. Through extensive empirical evaluation on benchmark datasets, we show that our algorithm outperforms not only existing ensembling techniques but also large models trained in an end-to-end fashion. An important byproduct of our work is a margin-maximizing cross-entropy (MCE) loss, a better alternative to the standard cross-entropy (CE) loss. Empirically, we show that replacing the CE loss in state-of-the-art adversarial training techniques with our MCE loss leads to significant performance improvements.
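The abstract does not spell out the MCE loss, but its description points at the standard multiclass margin h_y(x) - max_{y' != y} h_{y'}(x). As a rough illustration only: standard cross-entropy can be rewritten as log(1 + sum_{y' != y} exp(h_{y'}(x) - h_y(x))), and replacing the sum over incorrect classes with a max yields a loss that depends only on the margin. The PyTorch sketch below implements that margin-focused variant; the name mce_loss and this exact form are assumptions for illustration, not the paper's definition.

import torch
import torch.nn.functional as F

def mce_loss(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    # Hypothetical margin-focused cross-entropy: log(1 + exp(-margin)),
    # where margin = h_y(x) - max_{y' != y} h_{y'}(x). Illustrative only;
    # not necessarily the paper's exact MCE definition.
    batch = torch.arange(logits.size(0), device=logits.device)
    true_logit = logits[batch, targets]              # h_y(x)
    masked = logits.clone()
    masked[batch, targets] = float("-inf")           # hide the true class
    top_wrong = masked.max(dim=1).values             # max_{y' != y} h_{y'}(x)
    # softplus(z) = log(1 + exp(z)) gives a numerically stable evaluation.
    return F.softplus(top_wrong - true_logit).mean()

# Drop-in usage in place of F.cross_entropy:
logits = torch.randn(32, 10)                         # a batch of 10-class logits
targets = torch.randint(0, 10, (32,))
loss = mce_loss(logits, targets)

Note that minimizing softplus(-margin) pushes the true-class logit above the largest incorrect logit, which is one concrete sense in which such a loss "maximizes margin" relative to plain CE.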

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-zhang22aj,
  title     = {Building Robust Ensembles via Margin Boosting},
  author    = {Zhang, Dinghuai and Zhang, Hongyang and Courville, Aaron and Bengio, Yoshua and Ravikumar, Pradeep and Suggala, Arun Sai},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {26669--26692},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/zhang22aj/zhang22aj.pdf},
  url       = {https://proceedings.mlr.press/v162/zhang22aj.html}
}
Endnote
%0 Conference Paper
%T Building Robust Ensembles via Margin Boosting
%A Dinghuai Zhang
%A Hongyang Zhang
%A Aaron Courville
%A Yoshua Bengio
%A Pradeep Ravikumar
%A Arun Sai Suggala
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-zhang22aj
%I PMLR
%P 26669--26692
%U https://proceedings.mlr.press/v162/zhang22aj.html
%V 162
APA
Zhang, D., Zhang, H., Courville, A., Bengio, Y., Ravikumar, P., & Suggala, A. S. (2022). Building Robust Ensembles via Margin Boosting. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:26669-26692. Available from https://proceedings.mlr.press/v162/zhang22aj.html.