Provably Cost-Sensitive Adversarial Defense via Randomized Smoothing

Yuan Xin, Dingfan Chen, Michael Backes, Xiao Zhang
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:68825-68846, 2025.

Abstract

As machine learning models are deployed in critical applications, robustness against adversarial perturbations is crucial. While numerous defensive algorithms have been proposed to counter such attacks, they typically assume that all adversarial transformations are equally important, an assumption that rarely aligns with real-world applications. To address this, we study the problem of robust learning against adversarial perturbations under cost-sensitive scenarios, where the potential harm of different types of misclassifications is encoded in a cost matrix. Our solution introduces a provably robust learning algorithm to certify and optimize for cost-sensitive robustness, building on the scalable certification framework of randomized smoothing. Specifically, we formalize the definition of cost-sensitive certified radius and propose our novel adaptation of the standard certification algorithm to generate tight robustness certificates tailored to any cost matrix. In addition, we design a robust training method that improves certified cost-sensitive robustness without compromising model accuracy. Extensive experiments on benchmark datasets, including challenging ones unsolvable by existing methods, demonstrate the effectiveness of our certification algorithm and training method across various cost-sensitive scenarios.
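To make the abstract's core idea concrete, the sketch below shows how a randomized-smoothing certificate can be specialized to a cost matrix: rather than certifying that the smoothed prediction never changes, it certifies that no costly misclassification (a class c with cost_matrix[y, c] > 0) can occur within the certified radius. This is a simplified illustration built on the 1/sigma-Lipschitz property of Gaussian-smoothed class probabilities, with a crude Hoeffding confidence bound in place of the tighter intervals used in practice; it is not the paper's exact certification algorithm, and all function names and the toy classifier are hypothetical.

# Illustrative sketch (not the paper's algorithm): cost-sensitive certification
# via randomized smoothing. All names here are hypothetical.
import numpy as np
from scipy.stats import norm

def sample_counts(f, x, sigma, n, num_classes, rng):
    """Tally base-classifier predictions over n Gaussian-perturbed copies of x."""
    counts = np.zeros(num_classes, dtype=int)
    for _ in range(n):
        counts[f(x + rng.normal(scale=sigma, size=x.shape))] += 1
    return counts

def certify_cost_sensitive(f, x, y, cost_matrix, sigma, n=2000, alpha=0.001,
                           num_classes=10, rng=None):
    """Radius within which the smoothed classifier provably avoids every costly
    class c (those with cost_matrix[y, c] > 0), under simplifying assumptions.

    Uses the 1/sigma-Lipschitz property of Phi^{-1}(P[f(x + noise) = c]): if
    Phi^{-1}(p_y) - Phi^{-1}(p_costly) > 2 * ||delta|| / sigma, then class y
    still dominates every costly class at x + delta. Confidence handling is a
    crude two-sided Hoeffding bound, not a tight Clopper-Pearson interval."""
    rng = rng or np.random.default_rng(0)
    counts = sample_counts(f, x, sigma, n, num_classes, rng)
    costly = cost_matrix[y] > 0
    eps = np.sqrt(np.log(2.0 / alpha) / (2 * n))     # Hoeffding half-width
    p_y_lb = counts[y] / n - eps                     # lower bound on P[pred = y]
    p_costly_ub = counts[costly].sum() / n + eps     # upper bound on costly mass
    p_costly_ub = min(p_costly_ub, 1.0 - 1e-12)
    if p_y_lb <= p_costly_ub:
        return 0.0                                   # abstain: nothing certified
    return 0.5 * sigma * (norm.ppf(p_y_lb) - norm.ppf(p_costly_ub))

# Toy usage: 3-class linear classifier on 2-D inputs; only the 0 -> 2 error is costly.
W = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
f = lambda x: int(np.argmax(W @ x))
C = np.zeros((3, 3))
C[0, 2] = 1.0
R = certify_cost_sensitive(f, np.array([2.0, 0.5]), y=0, cost_matrix=C,
                           sigma=0.5, num_classes=3)
print(f"cost-sensitive certified radius: {R:.3f}")

Note how a sparse cost matrix can only enlarge the certified radius relative to the standard certificate: the smoothed classifier needs to dominate only the costly classes rather than every competing class, which is the intuition behind a cost-sensitive certified radius.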

Cite this Paper

BibTeX
@InProceedings{pmlr-v267-xin25a,
  title     = {Provably Cost-Sensitive Adversarial Defense via Randomized Smoothing},
  author    = {Xin, Yuan and Chen, Dingfan and Backes, Michael and Zhang, Xiao},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {68825--68846},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/xin25a/xin25a.pdf},
  url       = {https://proceedings.mlr.press/v267/xin25a.html},
  abstract  = {As machine learning models are deployed in critical applications, robustness against adversarial perturbations is crucial. While numerous defensive algorithms have been proposed to counter such attacks, they typically assume that all adversarial transformations are equally important, an assumption that rarely aligns with real-world applications. To address this, we study the problem of robust learning against adversarial perturbations under cost-sensitive scenarios, where the potential harm of different types of misclassifications is encoded in a cost matrix. Our solution introduces a provably robust learning algorithm to certify and optimize for cost-sensitive robustness, building on the scalable certification framework of randomized smoothing. Specifically, we formalize the definition of cost-sensitive certified radius and propose our novel adaptation of the standard certification algorithm to generate tight robustness certificates tailored to any cost matrix. In addition, we design a robust training method that improves certified cost-sensitive robustness without compromising model accuracy. Extensive experiments on benchmark datasets, including challenging ones unsolvable by existing methods, demonstrate the effectiveness of our certification algorithm and training method across various cost-sensitive scenarios.}
}
Endnote
%0 Conference Paper
%T Provably Cost-Sensitive Adversarial Defense via Randomized Smoothing
%A Yuan Xin
%A Dingfan Chen
%A Michael Backes
%A Xiao Zhang
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-xin25a
%I PMLR
%P 68825--68846
%U https://proceedings.mlr.press/v267/xin25a.html
%V 267
%X As machine learning models are deployed in critical applications, robustness against adversarial perturbations is crucial. While numerous defensive algorithms have been proposed to counter such attacks, they typically assume that all adversarial transformations are equally important, an assumption that rarely aligns with real-world applications. To address this, we study the problem of robust learning against adversarial perturbations under cost-sensitive scenarios, where the potential harm of different types of misclassifications is encoded in a cost matrix. Our solution introduces a provably robust learning algorithm to certify and optimize for cost-sensitive robustness, building on the scalable certification framework of randomized smoothing. Specifically, we formalize the definition of cost-sensitive certified radius and propose our novel adaptation of the standard certification algorithm to generate tight robustness certificates tailored to any cost matrix. In addition, we design a robust training method that improves certified cost-sensitive robustness without compromising model accuracy. Extensive experiments on benchmark datasets, including challenging ones unsolvable by existing methods, demonstrate the effectiveness of our certification algorithm and training method across various cost-sensitive scenarios.
APA
Xin, Y., Chen, D., Backes, M. & Zhang, X. (2025). Provably Cost-Sensitive Adversarial Defense via Randomized Smoothing. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:68825-68846. Available from https://proceedings.mlr.press/v267/xin25a.html.
