Hidden Cost of Randomized Smoothing

Jeet Mohapatra, Ching-Yun Ko, Lily Weng, Pin-Yu Chen, Sijia Liu, Luca Daniel
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:4033-4041, 2021.

Abstract

The fragility of modern machine learning models has drawn considerable attention from both academia and the public. While much effort has gone into either crafting adversarial attacks as a way to measure the robustness of neural networks or devising worst-case analytical robustness verification with guarantees, few methods enjoy both scalability and robustness guarantees at the same time. As an alternative to these attempts, randomized smoothing adopts a different prediction rule that enables statistical robustness arguments which easily scale to large networks. In this paper, however, we point out the side effects of current randomized smoothing workflows. Specifically, we articulate and prove two major points: 1) the decision boundaries of smoothed classifiers shrink, resulting in disparities in class-wise accuracy; 2) applying noise augmentation in the training process does not necessarily resolve the shrinking issue, due to inconsistent learning objectives.
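To make the "different prediction rule" concrete: a smoothed classifier g predicts the class that the base classifier f is most likely to return under Gaussian input perturbations, g(x) = argmax_c P(f(x + ε) = c) with ε ~ N(0, σ²I). The sketch below is a minimal Monte Carlo approximation of this rule (not the paper's code); the toy base classifier `f` and all parameter values are illustrative assumptions.

```python
import numpy as np

def smoothed_predict(f, x, sigma=0.25, n_samples=1000, seed=0):
    """Majority-vote approximation of the smoothed classifier
    g(x) = argmax_c P(f(x + eps) = c), with eps ~ N(0, sigma^2 I)."""
    rng = np.random.default_rng(seed)
    # Draw Gaussian perturbations around x and collect the base model's votes.
    noise = rng.normal(0.0, sigma, size=(n_samples,) + x.shape)
    votes = np.array([f(x + eps) for eps in noise])
    classes, counts = np.unique(votes, return_counts=True)
    return classes[np.argmax(counts)]

# Toy base classifier: class 1 iff the first coordinate is positive.
f = lambda z: int(z[0] > 0)
x = np.array([0.5, -1.0])
print(smoothed_predict(f, x))
```

Note that a point near the boundary of a small or thin class region can lose its majority vote under this rule even when f classifies it correctly, which is the intuition behind the boundary-shrinking effect the paper proves.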

Cite this Paper


BibTeX
@InProceedings{pmlr-v130-mohapatra21a,
  title     = {Hidden Cost of Randomized Smoothing},
  author    = {Mohapatra, Jeet and Ko, Ching-Yun and Weng, Lily and Chen, Pin-Yu and Liu, Sijia and Daniel, Luca},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages     = {4033--4041},
  year      = {2021},
  editor    = {Banerjee, Arindam and Fukumizu, Kenji},
  volume    = {130},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v130/mohapatra21a/mohapatra21a.pdf},
  url       = {https://proceedings.mlr.press/v130/mohapatra21a.html},
  abstract  = {The fragility of modern machine learning models has drawn a considerable amount of attention from both academia and the public. While immense interests were in either crafting adversarial attacks as a way to measure the robustness of neural networks or devising worst-case analytical robustness verification with guarantees, few methods could enjoy both scalability and robustness guarantees at the same time. As an alternative to these attempts, randomized smoothing adopts a different prediction rule that enables statistical robustness arguments which easily scale to large networks. However, in this paper, we point out the side effects of current randomized smoothing workflows. Specifically, we articulate and prove two major points: 1) the decision boundaries of smoothed classifiers will shrink, resulting in disparity in class-wise accuracy; 2) applying noise augmentation in the training process does not necessarily resolve the shrinking issue due to the inconsistent learning objectives.}
}
Endnote
%0 Conference Paper
%T Hidden Cost of Randomized Smoothing
%A Jeet Mohapatra
%A Ching-Yun Ko
%A Lily Weng
%A Pin-Yu Chen
%A Sijia Liu
%A Luca Daniel
%B Proceedings of The 24th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2021
%E Arindam Banerjee
%E Kenji Fukumizu
%F pmlr-v130-mohapatra21a
%I PMLR
%P 4033--4041
%U https://proceedings.mlr.press/v130/mohapatra21a.html
%V 130
%X The fragility of modern machine learning models has drawn a considerable amount of attention from both academia and the public. While immense interests were in either crafting adversarial attacks as a way to measure the robustness of neural networks or devising worst-case analytical robustness verification with guarantees, few methods could enjoy both scalability and robustness guarantees at the same time. As an alternative to these attempts, randomized smoothing adopts a different prediction rule that enables statistical robustness arguments which easily scale to large networks. However, in this paper, we point out the side effects of current randomized smoothing workflows. Specifically, we articulate and prove two major points: 1) the decision boundaries of smoothed classifiers will shrink, resulting in disparity in class-wise accuracy; 2) applying noise augmentation in the training process does not necessarily resolve the shrinking issue due to the inconsistent learning objectives.
APA
Mohapatra, J., Ko, C., Weng, L., Chen, P., Liu, S. & Daniel, L. (2021). Hidden Cost of Randomized Smoothing. Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 130:4033-4041. Available from https://www.proceedings.mlr.press/v130/mohapatra21a.html.