Position: Certified Robustness Does Not (Yet) Imply Model Security

Andrew Craig Cullen, Paul Montague, Sarah Monazam Erfani, Benjamin I. P. Rubinstein
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:81185-81198, 2025.

Abstract

While certified robustness is widely promoted as a solution to adversarial examples in Artificial Intelligence systems, significant challenges remain before these techniques can be meaningfully deployed in real-world applications. We identify critical gaps in current research, including the paradox of detection without distinction, the lack of clear criteria for practitioners to evaluate certification schemes, and the potential security risks arising from users’ expectations surrounding “guaranteed” robustness claims. These create an alignment issue between how certifications are presented and perceived, relative to their actual capabilities. This position paper is a call to arms for the certification research community, proposing concrete steps to address these fundamental challenges and advance the field toward practical applicability.

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-cullen25a,
  title     = {Position: Certified Robustness Does Not ({Y}et) Imply Model Security},
  author    = {Cullen, Andrew Craig and Montague, Paul and Erfani, Sarah Monazam and Rubinstein, Benjamin I. P.},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {81185--81198},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/cullen25a/cullen25a.pdf},
  url       = {https://proceedings.mlr.press/v267/cullen25a.html},
  abstract  = {While certified robustness is widely promoted as a solution to adversarial examples in Artificial Intelligence systems, significant challenges remain before these techniques can be meaningfully deployed in real-world applications. We identify critical gaps in current research, including the paradox of detection without distinction, the lack of clear criteria for practitioners to evaluate certification schemes, and the potential security risks arising from users’ expectations surrounding “guaranteed” robustness claims. These create an alignment issue between how certifications are presented and perceived, relative to their actual capabilities. This position paper is a call to arms for the certification research community, proposing concrete steps to address these fundamental challenges and advance the field toward practical applicability.}
}
Endnote
%0 Conference Paper
%T Position: Certified Robustness Does Not (Yet) Imply Model Security
%A Andrew Craig Cullen
%A Paul Montague
%A Sarah Monazam Erfani
%A Benjamin I. P. Rubinstein
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-cullen25a
%I PMLR
%P 81185--81198
%U https://proceedings.mlr.press/v267/cullen25a.html
%V 267
%X While certified robustness is widely promoted as a solution to adversarial examples in Artificial Intelligence systems, significant challenges remain before these techniques can be meaningfully deployed in real-world applications. We identify critical gaps in current research, including the paradox of detection without distinction, the lack of clear criteria for practitioners to evaluate certification schemes, and the potential security risks arising from users’ expectations surrounding “guaranteed” robustness claims. These create an alignment issue between how certifications are presented and perceived, relative to their actual capabilities. This position paper is a call to arms for the certification research community, proposing concrete steps to address these fundamental challenges and advance the field toward practical applicability.
APA
Cullen, A.C., Montague, P., Erfani, S.M. & Rubinstein, B.I.P. (2025). Position: Certified Robustness Does Not (Yet) Imply Model Security. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:81185-81198. Available from https://proceedings.mlr.press/v267/cullen25a.html.