Certification for Differentially Private Prediction in Gradient-Based Training

Matthew Robert Wicker, Philip Sosnin, Igor Shilov, Adrianna Janik, Mark Niklas Mueller, Yves-Alexandre De Montjoye, Adrian Weller, Calvin Tsay
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:66726-66745, 2025.

Abstract

We study private prediction where differential privacy is achieved by adding noise to the outputs of a non-private model. Existing methods rely on noise proportional to the global sensitivity of the model, often resulting in sub-optimal privacy-utility trade-offs compared to private training. We introduce a novel approach for computing dataset-specific upper bounds on prediction sensitivity by leveraging convex relaxation and bound propagation techniques. By combining these bounds with the smooth sensitivity mechanism, we significantly improve the privacy analysis of private prediction compared to global sensitivity-based approaches. Experimental results across real-world datasets in medical image classification and natural language processing demonstrate that our sensitivity bounds can be orders of magnitude tighter than global sensitivity. Our approach provides a strong basis for the development of novel privacy-preserving technologies.
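The core idea the abstract describes, releasing model outputs with noise calibrated to a sensitivity bound, can be illustrated with the standard Laplace mechanism. The sketch below is not the paper's method (which uses certified dataset-specific bounds and the smooth sensitivity mechanism); it only shows how a tighter sensitivity bound directly reduces the noise scale for a fixed privacy budget. The function name and the example sensitivity values are illustrative assumptions.

```python
import numpy as np

def laplace_mechanism(logits, sensitivity, epsilon, rng=None):
    # Standard global-sensitivity Laplace mechanism: noise scale is
    # sensitivity / epsilon, so a tighter sensitivity bound means
    # proportionally less noise at the same privacy budget.
    rng = rng if rng is not None else np.random.default_rng()
    scale = sensitivity / epsilon
    return logits + rng.laplace(loc=0.0, scale=scale, size=logits.shape)

logits = np.array([2.1, -0.3, 0.7])  # illustrative non-private outputs
epsilon = 1.0

# Loose (global) sensitivity bound vs. a hypothetical tighter,
# dataset-specific bound: same epsilon, much smaller noise.
noisy_global = laplace_mechanism(logits, sensitivity=2.0, epsilon=epsilon)
noisy_tight = laplace_mechanism(logits, sensitivity=0.02, epsilon=epsilon)
```

With the same privacy budget, the second call perturbs the outputs far less, which is the privacy-utility gain the paper's tighter bounds aim to unlock.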

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-wicker25a,
  title = {Certification for Differentially Private Prediction in Gradient-Based Training},
  author = {Wicker, Matthew Robert and Sosnin, Philip and Shilov, Igor and Janik, Adrianna and Mueller, Mark Niklas and Montjoye, Yves-Alexandre De and Weller, Adrian and Tsay, Calvin},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages = {66726--66745},
  year = {2025},
  editor = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume = {267},
  series = {Proceedings of Machine Learning Research},
  month = {13--19 Jul},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/wicker25a/wicker25a.pdf},
  url = {https://proceedings.mlr.press/v267/wicker25a.html},
  abstract = {We study private prediction where differential privacy is achieved by adding noise to the outputs of a non-private model. Existing methods rely on noise proportional to the global sensitivity of the model, often resulting in sub-optimal privacy-utility trade-offs compared to private training. We introduce a novel approach for computing dataset-specific upper bounds on prediction sensitivity by leveraging convex relaxation and bound propagation techniques. By combining these bounds with the smooth sensitivity mechanism, we significantly improve the privacy analysis of private prediction compared to global sensitivity-based approaches. Experimental results across real-world datasets in medical image classification and natural language processing demonstrate that our sensitivity bounds can be orders of magnitude tighter than global sensitivity. Our approach provides a strong basis for the development of novel privacy-preserving technologies.}
}
Endnote
%0 Conference Paper
%T Certification for Differentially Private Prediction in Gradient-Based Training
%A Matthew Robert Wicker
%A Philip Sosnin
%A Igor Shilov
%A Adrianna Janik
%A Mark Niklas Mueller
%A Yves-Alexandre De Montjoye
%A Adrian Weller
%A Calvin Tsay
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-wicker25a
%I PMLR
%P 66726--66745
%U https://proceedings.mlr.press/v267/wicker25a.html
%V 267
%X We study private prediction where differential privacy is achieved by adding noise to the outputs of a non-private model. Existing methods rely on noise proportional to the global sensitivity of the model, often resulting in sub-optimal privacy-utility trade-offs compared to private training. We introduce a novel approach for computing dataset-specific upper bounds on prediction sensitivity by leveraging convex relaxation and bound propagation techniques. By combining these bounds with the smooth sensitivity mechanism, we significantly improve the privacy analysis of private prediction compared to global sensitivity-based approaches. Experimental results across real-world datasets in medical image classification and natural language processing demonstrate that our sensitivity bounds can be orders of magnitude tighter than global sensitivity. Our approach provides a strong basis for the development of novel privacy-preserving technologies.
APA
Wicker, M.R., Sosnin, P., Shilov, I., Janik, A., Mueller, M.N., Montjoye, Y.D., Weller, A. & Tsay, C. (2025). Certification for Differentially Private Prediction in Gradient-Based Training. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:66726-66745. Available from https://proceedings.mlr.press/v267/wicker25a.html.

Related Material