Improving l1-Certified Robustness via Randomized Smoothing by Leveraging Box Constraints

Vaclav Voracek, Matthias Hein
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:35198-35222, 2023.

Abstract

Randomized smoothing is a popular method to certify the robustness of image classifiers to adversarial input perturbations. It is the only certification technique that scales directly to higher-dimensional datasets such as ImageNet. However, current techniques cannot exploit the fact that any adversarial example has to lie in the image space, that is, $[0,1]^d$; otherwise, it can be trivially detected. To address this suboptimality, we derive new certification formulae which lead to significant improvements in certified $\ell_1$-robustness without the need to adapt the classifiers or change the smoothing distribution. The code is released at https://github.com/vvoracek/L1-smoothing
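For context, the baseline that the paper improves on is the standard uniform-noise $\ell_1$ certificate for randomized smoothing: smoothing with uniform noise on $[-\lambda,\lambda]^d$ certifies an $\ell_1$ radius of $2\lambda(p - 1/2)$, where $p > 1/2$ is a high-confidence lower bound on the probability of the majority class under noise (Yang et al., 2020). The sketch below illustrates this standard Monte Carlo certification pipeline, not the paper's improved box-constrained certificate; `base_classifier` and the default sample count and confidence level are hypothetical placeholders.

```python
# Minimal sketch of Monte Carlo certification for l1 randomized smoothing
# with uniform noise U[-lam, lam]^d. The radius formula 2*lam*(p - 1/2) is
# the prior standard certificate, NOT the box-constrained one derived in
# this paper. `base_classifier` is an assumed callable: batch -> int labels.
import numpy as np
from scipy.stats import beta


def lower_confidence_bound(successes: int, trials: int, alpha: float) -> float:
    """One-sided (1 - alpha) Clopper-Pearson lower bound on a binomial proportion."""
    if successes == 0:
        return 0.0
    return beta.ppf(alpha, successes, trials - successes + 1)


def certify_l1(base_classifier, x: np.ndarray, lam: float,
               n: int = 10_000, alpha: float = 0.001):
    """Certify an l1 radius around x for the uniform-noise smoothed classifier.

    Returns (predicted_class, radius); radius 0.0 means certification failed.
    """
    # Sample n noisy copies of x with uniform smoothing noise on [-lam, lam]^d.
    noise = np.random.uniform(-lam, lam, size=(n,) + x.shape)
    preds = base_classifier(x[None] + noise)

    # Majority class under noise, and a lower bound on its probability that
    # holds with confidence 1 - alpha.
    top = int(np.bincount(preds).argmax())
    p_low = lower_confidence_bound(int((preds == top).sum()), n, alpha)

    if p_low <= 0.5:
        return top, 0.0  # abstain: cannot certify any radius
    return top, 2.0 * lam * (p_low - 0.5)
```

The paper's contribution is, in effect, a tighter replacement for the final radius computation: by using the constraint that adversarial examples must lie in $[0,1]^d$, the certified radius can be enlarged without retraining the classifier or changing the smoothing distribution.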

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-voracek23a,
  title     = {Improving l1-Certified Robustness via Randomized Smoothing by Leveraging Box Constraints},
  author    = {Voracek, Vaclav and Hein, Matthias},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {35198--35222},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/voracek23a/voracek23a.pdf},
  url       = {https://proceedings.mlr.press/v202/voracek23a.html},
  abstract  = {Randomized smoothing is a popular method to certify robustness of image classifiers to adversarial input perturbations. It is the only certification technique which scales directly to datasets of higher dimension such as ImageNet. However, current techniques are not able to utilize the fact that any adversarial example has to lie in the image space, that is $[0,1]^d$; otherwise, one can trivially detect it. To address this suboptimality, we derive new certification formulae which lead to significant improvements in the certified $\ell_1$-robustness without the need of adapting the classifiers or change of smoothing distributions. The code is released at https://github.com/vvoracek/L1-smoothing}
}
Endnote
%0 Conference Paper
%T Improving l1-Certified Robustness via Randomized Smoothing by Leveraging Box Constraints
%A Vaclav Voracek
%A Matthias Hein
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-voracek23a
%I PMLR
%P 35198--35222
%U https://proceedings.mlr.press/v202/voracek23a.html
%V 202
%X Randomized smoothing is a popular method to certify robustness of image classifiers to adversarial input perturbations. It is the only certification technique which scales directly to datasets of higher dimension such as ImageNet. However, current techniques are not able to utilize the fact that any adversarial example has to lie in the image space, that is $[0,1]^d$; otherwise, one can trivially detect it. To address this suboptimality, we derive new certification formulae which lead to significant improvements in the certified $\ell_1$-robustness without the need of adapting the classifiers or change of smoothing distributions. The code is released at https://github.com/vvoracek/L1-smoothing
APA
Voracek, V. & Hein, M. (2023). Improving l1-Certified Robustness via Randomized Smoothing by Leveraging Box Constraints. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:35198-35222. Available from https://proceedings.mlr.press/v202/voracek23a.html.
