Bridging the Theoretical Gap in Randomized Smoothing

Blaise Delattre, Paul Caillon, Quentin Barthélemy, Erwan Fagnou, Alexandre Allauzen
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:3997-4005, 2025.

Abstract

Randomized smoothing has become a leading approach for certifying adversarial robustness in machine learning models. However, a persistent gap remains between theoretical certified robustness and empirically observed robust accuracy. This paper introduces a new framework that bridges this gap by leveraging Lipschitz continuity for certification and by proposing a novel, less conservative method for computing confidence intervals in randomized smoothing. Our approach tightens the bounds of certified robustness, offering a more accurate reflection of model robustness in practice. Through rigorous experimentation, we show that our method improves robust accuracy, narrowing the gap between empirical findings and previous theoretical results. We argue that investigating local Lipschitz constants and designing ad hoc confidence intervals can further enhance the performance of randomized smoothing. These results pave the way for a deeper understanding of the relationship between Lipschitz continuity and certified robustness.
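For context, the confidence-interval step that the paper revisits sits at the heart of the standard randomized-smoothing certification pipeline (as in Cohen et al., 2019). The sketch below is an illustrative, simplified single-stage variant of that standard pipeline, not the authors' method: it estimates the smoothed classifier's top-class probability by Monte Carlo sampling under Gaussian noise, lower-bounds it with a one-sided Clopper-Pearson interval, and converts the bound into a certified L2 radius sigma * Phi^{-1}(p_lower). All function names and parameters are hypothetical.

import numpy as np
from scipy.stats import beta, norm


def clopper_pearson_lower(k, n, alpha):
    """One-sided (1 - alpha) Clopper-Pearson lower bound on a binomial proportion."""
    if k == 0:
        return 0.0
    return beta.ppf(alpha, k, n - k + 1)


def certify(base_classifier, x, sigma=0.25, n=10_000, alpha=0.001, num_classes=10):
    """Certify the smoothed classifier at input x (simplified, illustrative).

    base_classifier: callable mapping a numpy array to an integer class label.
    Returns (predicted class, certified L2 radius), or (None, 0.0) when abstaining.
    """
    # Monte Carlo estimate: vote counts of the base classifier under Gaussian noise.
    counts = np.zeros(num_classes, dtype=int)
    for _ in range(n):
        counts[base_classifier(x + sigma * np.random.randn(*x.shape))] += 1

    top = int(counts.argmax())
    # Conservative lower confidence bound on the top-class probability p_A.
    p_lower = clopper_pearson_lower(int(counts[top]), n, alpha)
    if p_lower <= 0.5:
        return None, 0.0  # abstain: cannot certify at this confidence level
    # Certified radius R = sigma * Phi^{-1}(p_lower) for Gaussian smoothing.
    return top, sigma * norm.ppf(p_lower)

The Clopper-Pearson bound used above is the conservative ingredient that the abstract refers to: the paper's contribution is a less conservative confidence-interval construction, combined with Lipschitz-based arguments, to tighten the resulting certified radii.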

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-delattre25a,
  title     = {Bridging the Theoretical Gap in Randomized Smoothing},
  author    = {Delattre, Blaise and Caillon, Paul and Barth{\'e}lemy, Quentin and Fagnou, Erwan and Allauzen, Alexandre},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {3997--4005},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/delattre25a/delattre25a.pdf},
  url       = {https://proceedings.mlr.press/v258/delattre25a.html},
  abstract  = {Randomized smoothing has become a leading approach for certifying adversarial robustness in machine learning models. However, a persistent gap remains between theoretical certified robustness and empirical robustness accuracy. This paper introduces a new framework that bridges this gap by leveraging Lipschitz continuity for certification and proposing a novel, less conservative method for computing confidence intervals in randomized smoothing. Our approach tightens the bounds of certified robustness, offering a more accurate reflection of model robustness in practice. Through rigorous experimentation we show that our method improves the robust accuracy, compressing the gap between empirical findings and previous theoretical results. We argue that investigating local Lipschitz constants and designing ad-hoc confidence intervals can further enhance the performance of randomized smoothing. These results pave the way for a deeper understanding of the relationship between Lipschitz continuity and certified robustness.}
}
Endnote
%0 Conference Paper
%T Bridging the Theoretical Gap in Randomized Smoothing
%A Blaise Delattre
%A Paul Caillon
%A Quentin Barthélemy
%A Erwan Fagnou
%A Alexandre Allauzen
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-delattre25a
%I PMLR
%P 3997--4005
%U https://proceedings.mlr.press/v258/delattre25a.html
%V 258
%X Randomized smoothing has become a leading approach for certifying adversarial robustness in machine learning models. However, a persistent gap remains between theoretical certified robustness and empirical robustness accuracy. This paper introduces a new framework that bridges this gap by leveraging Lipschitz continuity for certification and proposing a novel, less conservative method for computing confidence intervals in randomized smoothing. Our approach tightens the bounds of certified robustness, offering a more accurate reflection of model robustness in practice. Through rigorous experimentation we show that our method improves the robust accuracy, compressing the gap between empirical findings and previous theoretical results. We argue that investigating local Lipschitz constants and designing ad-hoc confidence intervals can further enhance the performance of randomized smoothing. These results pave the way for a deeper understanding of the relationship between Lipschitz continuity and certified robustness.
APA
Delattre, B., Caillon, P., Barthélemy, Q., Fagnou, E. & Allauzen, A. (2025). Bridging the Theoretical Gap in Randomized Smoothing. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:3997-4005. Available from https://proceedings.mlr.press/v258/delattre25a.html.