GSmooth: Certified Robustness against Semantic Transformations via Generalized Randomized Smoothing

Zhongkai Hao, Chengyang Ying, Yinpeng Dong, Hang Su, Jian Song, Jun Zhu
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:8465-8483, 2022.

Abstract

Certified defenses such as randomized smoothing have shown promise towards building reliable machine learning systems against $\ell_p$ norm bounded attacks. However, existing methods are insufficient or unable to provably defend against semantic transformations, especially those without closed-form expressions (such as defocus blur and pixelate), which are more common in practice and often unrestricted. To fill this gap, we propose generalized randomized smoothing (GSmooth), a unified theoretical framework for certifying robustness against general semantic transformations via a novel dimension augmentation strategy. Under the GSmooth framework, we present a scalable algorithm that uses a surrogate image-to-image network to approximate the complex transformation. The surrogate model provides a powerful tool for studying the properties of semantic transformations and certifying robustness. Experimental results on several datasets demonstrate the effectiveness of our approach for robustness certification against multiple kinds of semantic transformations and corruptions, which is not achievable by the alternative baselines.
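GSmooth generalizes classic randomized smoothing. As background, the sketch below shows the standard Gaussian-smoothing certificate (Cohen et al., 2019) that the paper builds on: sample noisy copies of the input, take the majority vote of the base classifier, and derive an $\ell_2$ certified radius $\sigma \Phi^{-1}(p_A)$ from the top-class probability. This is not the GSmooth algorithm itself (which handles non-$\ell_p$ semantic transformations via a surrogate network); the function name and toy classifier are our own illustrative choices.

```python
import numpy as np
from scipy.stats import norm


def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=1000, seed=None):
    """Monte Carlo estimate of the smoothed classifier
    g(x) = argmax_c  P_{eps ~ N(0, sigma^2 I)} [ base_classifier(x + eps) = c ].
    Returns the majority-vote class and the certified l2 radius
    sigma * Phi^{-1}(p_A) from Cohen et al. (2019).
    """
    rng = np.random.default_rng(seed)
    counts = {}
    for _ in range(n_samples):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)
        label = base_classifier(noisy)
        counts[label] = counts.get(label, 0) + 1
    top = max(counts, key=counts.get)
    # Crude point estimate of p_A, clipped away from 1 to keep the radius
    # finite; a real certificate uses a high-confidence lower bound
    # (e.g. Clopper-Pearson) on p_A instead.
    p_a = min(counts[top] / n_samples, 1.0 - 1.0 / n_samples)
    radius = sigma * norm.ppf(p_a) if p_a > 0.5 else 0.0
    return top, radius


# Toy usage: a base classifier that thresholds the mean of the input.
f = lambda z: int(z.mean() > 0)
label, radius = smoothed_predict(f, np.full(4, 5.0), sigma=0.25, n_samples=200, seed=0)
```

For an input far from the decision boundary, essentially every noisy sample keeps its label, so the smoothed prediction is stable and the certified radius is positive. GSmooth's contribution is extending this style of certificate beyond additive $\ell_p$ noise to semantic transformations such as blur and pixelation.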

Cite this Paper
BibTeX
@InProceedings{pmlr-v162-hao22c,
  title     = {{GS}mooth: Certified Robustness against Semantic Transformations via Generalized Randomized Smoothing},
  author    = {Hao, Zhongkai and Ying, Chengyang and Dong, Yinpeng and Su, Hang and Song, Jian and Zhu, Jun},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {8465--8483},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/hao22c/hao22c.pdf},
  url       = {https://proceedings.mlr.press/v162/hao22c.html},
  abstract  = {Certified defenses such as randomized smoothing have shown promise towards building reliable machine learning systems against $\ell_p$ norm bounded attacks. However, existing methods are insufficient or unable to provably defend against semantic transformations, especially those without closed-form expressions (such as defocus blur and pixelate), which are more common in practice and often unrestricted. To fill up this gap, we propose generalized randomized smoothing (GSmooth), a unified theoretical framework for certifying robustness against general semantic transformations via a novel dimension augmentation strategy. Under the GSmooth framework, we present a scalable algorithm that uses a surrogate image-to-image network to approximate the complex transformation. The surrogate model provides a powerful tool for studying the properties of semantic transformations and certifying robustness. Experimental results on several datasets demonstrate the effectiveness of our approach for robustness certification against multiple kinds of semantic transformations and corruptions, which is not achievable by the alternative baselines.}
}
Endnote
%0 Conference Paper
%T GSmooth: Certified Robustness against Semantic Transformations via Generalized Randomized Smoothing
%A Zhongkai Hao
%A Chengyang Ying
%A Yinpeng Dong
%A Hang Su
%A Jian Song
%A Jun Zhu
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-hao22c
%I PMLR
%P 8465--8483
%U https://proceedings.mlr.press/v162/hao22c.html
%V 162
%X Certified defenses such as randomized smoothing have shown promise towards building reliable machine learning systems against $\ell_p$ norm bounded attacks. However, existing methods are insufficient or unable to provably defend against semantic transformations, especially those without closed-form expressions (such as defocus blur and pixelate), which are more common in practice and often unrestricted. To fill up this gap, we propose generalized randomized smoothing (GSmooth), a unified theoretical framework for certifying robustness against general semantic transformations via a novel dimension augmentation strategy. Under the GSmooth framework, we present a scalable algorithm that uses a surrogate image-to-image network to approximate the complex transformation. The surrogate model provides a powerful tool for studying the properties of semantic transformations and certifying robustness. Experimental results on several datasets demonstrate the effectiveness of our approach for robustness certification against multiple kinds of semantic transformations and corruptions, which is not achievable by the alternative baselines.
APA
Hao, Z., Ying, C., Dong, Y., Su, H., Song, J. & Zhu, J. (2022). GSmooth: Certified Robustness against Semantic Transformations via Generalized Randomized Smoothing. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:8465-8483. Available from https://proceedings.mlr.press/v162/hao22c.html.