Provable Privacy with Non-Private Pre-Processing

Yaxi Hu, Amartya Sanyal, Bernhard Schölkopf
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:19402-19437, 2024.

Abstract

When analyzing Differentially Private (DP) machine learning pipelines, the potential privacy cost of data-dependent pre-processing is frequently overlooked in privacy accounting. In this work, we propose a general framework to evaluate the additional privacy cost incurred by non-private data-dependent pre-processing algorithms. Our framework establishes upper bounds on the overall privacy guarantees by utilising two new technical notions: a variant of DP termed Smooth DP and the bounded sensitivity of the pre-processing algorithms. In addition to the generic framework, we provide explicit overall privacy guarantees for multiple data-dependent pre-processing algorithms, such as data imputation, quantization, deduplication, standard scaling and PCA, when used in combination with several DP algorithms. Notably, this framework is also simple to implement, allowing direct integration into existing DP pipelines.
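
The abstract notes that the framework integrates directly into existing DP pipelines. As a purely illustrative sketch (not the paper's implementation), the snippet below shows the kind of pipeline the framework analyzes: a non-private, data-dependent pre-processing step (standard scaling) followed by a standard DP release (Gaussian-mechanism mean). All constants and function names are assumptions for illustration; the paper's accounting via Smooth DP and pre-processing sensitivity is what would quantify the weakened overall guarantee.

# Illustrative sketch only: non-private standard scaling followed by a DP mean.
# The overall guarantee is weaker than the (epsilon, delta) of the DP step alone;
# the paper's framework bounds that gap. Constants here are placeholders.
import numpy as np

def standard_scale(x: np.ndarray) -> np.ndarray:
    """Data-dependent pre-processing: subtract mean, divide by std (non-private)."""
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-12)

def dp_mean_gaussian(x: np.ndarray, clip: float, epsilon: float, delta: float,
                     rng: np.random.Generator) -> np.ndarray:
    """(epsilon, delta)-DP mean via per-row L2 clipping and the Gaussian mechanism."""
    norms = np.linalg.norm(x, axis=1, keepdims=True)
    x_clipped = x * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    sensitivity = 2.0 * clip / len(x)  # L2 sensitivity of the clipped mean
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return x_clipped.mean(axis=0) + rng.normal(0.0, sigma, size=x.shape[1])

rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=2.0, size=(1000, 5))

# End-to-end pipeline: scaling is fit on the private data itself, so it leaks
# information that plain DP accounting of the second step does not capture.
scaled = standard_scale(data)
release = dp_mean_gaussian(scaled, clip=1.0, epsilon=1.0, delta=1e-5, rng=rng)
print(release)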

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-hu24m,
  title     = {Provable Privacy with Non-Private Pre-Processing},
  author    = {Hu, Yaxi and Sanyal, Amartya and Sch\"{o}lkopf, Bernhard},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {19402--19437},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/hu24m/hu24m.pdf},
  url       = {https://proceedings.mlr.press/v235/hu24m.html}
}
APA
Hu, Y., Sanyal, A., & Schölkopf, B. (2024). Provable Privacy with Non-Private Pre-Processing. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:19402-19437. Available from https://proceedings.mlr.press/v235/hu24m.html.
