Generalized PTR: User-Friendly Recipes for Data-Adaptive Algorithms with Differential Privacy

Rachel Redberg, Yuqing Zhu, Yu-Xiang Wang
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:3977-4005, 2023.

Abstract

The “Propose-Test-Release” (PTR) framework [Dwork and Lei, 2009] is a classic recipe for designing differentially private (DP) algorithms that are data-adaptive, i.e. those that add less noise when the input dataset is “nice”. We extend PTR to a more general setting by privately testing data-dependent privacy losses rather than local sensitivity, hence making it applicable beyond the standard noise-adding mechanisms, e.g. to queries with unbounded or undefined sensitivity. We demonstrate the versatility of generalized PTR using private linear regression as a case study. Additionally, we apply our algorithm to solve an open problem from “Private Aggregation of Teacher Ensembles (PATE)” [Papernot et al., 2017, 2018] - privately releasing the entire model with a delicate data-dependent analysis.
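For readers unfamiliar with the recipe being generalized, the following is a minimal sketch of the classic Propose-Test-Release mechanism of Dwork and Lei [2009], not of the paper's generalized variant. The helper distance_to_instability (problem-specific, supplied by the caller) and all parameter names are illustrative assumptions.

import numpy as np

def propose_test_release(data, f, distance_to_instability, beta, eps, delta, rng=None):
    """Sketch of classic PTR (Dwork and Lei, 2009).

    Privately tests whether `data` is far, in Hamming distance, from any
    dataset whose local sensitivity of `f` exceeds the proposed bound `beta`.
    If the noisy test passes, releases f(data) with Laplace noise calibrated
    to `beta`; otherwise refuses to answer (returns None).

    `distance_to_instability(data, beta)` must return the Hamming distance
    from `data` to the nearest dataset whose local sensitivity exceeds `beta`;
    it is problem-specific and must be supplied by the caller.
    The overall guarantee of this recipe is (2*eps, delta)-DP.
    """
    rng = np.random.default_rng() if rng is None else rng

    # Test: noisy distance to the nearest "unstable" dataset.
    d_hat = distance_to_instability(data, beta) + rng.laplace(scale=1.0 / eps)
    if d_hat <= np.log(1.0 / delta) / eps:
        # Refuse to release rather than risk a privacy violation.
        return None

    # Release: noise calibrated to the proposed (and now privately validated) bound.
    return f(data) + rng.laplace(scale=beta / eps)

Per the abstract, the paper's generalization replaces the test on local sensitivity with a private test on a data-dependent privacy loss, which is what allows it to cover mechanisms whose sensitivity is unbounded or undefined.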

Cite this Paper


BibTeX
@InProceedings{pmlr-v206-redberg23a,
  title     = {Generalized PTR: User-Friendly Recipes for Data-Adaptive Algorithms with Differential Privacy},
  author    = {Redberg, Rachel and Zhu, Yuqing and Wang, Yu-Xiang},
  booktitle = {Proceedings of The 26th International Conference on Artificial Intelligence and Statistics},
  pages     = {3977--4005},
  year      = {2023},
  editor    = {Ruiz, Francisco and Dy, Jennifer and van de Meent, Jan-Willem},
  volume    = {206},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--27 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v206/redberg23a/redberg23a.pdf},
  url       = {https://proceedings.mlr.press/v206/redberg23a.html},
  abstract  = {The “Propose-Test-Release” (PTR) framework [Dwork and Lei, 2009] is a classic recipe for designing differentially private (DP) algorithms that are data-adaptive, i.e. those that add less noise when the input dataset is “nice”. We extend PTR to a more general setting by privately testing data-dependent privacy losses rather than local sensitivity, hence making it applicable beyond the standard noise-adding mechanisms, e.g. to queries with unbounded or undefined sensitivity. We demonstrate the versatility of generalized PTR using private linear regression as a case study. Additionally, we apply our algorithm to solve an open problem from “Private Aggregation of Teacher Ensembles (PATE)” [Papernot et al., 2017, 2018] - privately releasing the entire model with a delicate data-dependent analysis.}
}
Endnote
%0 Conference Paper
%T Generalized PTR: User-Friendly Recipes for Data-Adaptive Algorithms with Differential Privacy
%A Rachel Redberg
%A Yuqing Zhu
%A Yu-Xiang Wang
%B Proceedings of The 26th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2023
%E Francisco Ruiz
%E Jennifer Dy
%E Jan-Willem van de Meent
%F pmlr-v206-redberg23a
%I PMLR
%P 3977--4005
%U https://proceedings.mlr.press/v206/redberg23a.html
%V 206
%X The “Propose-Test-Release” (PTR) framework [Dwork and Lei, 2009] is a classic recipe for designing differentially private (DP) algorithms that are data-adaptive, i.e. those that add less noise when the input dataset is “nice”. We extend PTR to a more general setting by privately testing data-dependent privacy losses rather than local sensitivity, hence making it applicable beyond the standard noise-adding mechanisms, e.g. to queries with unbounded or undefined sensitivity. We demonstrate the versatility of generalized PTR using private linear regression as a case study. Additionally, we apply our algorithm to solve an open problem from “Private Aggregation of Teacher Ensembles (PATE)” [Papernot et al., 2017, 2018] - privately releasing the entire model with a delicate data-dependent analysis.
APA
Redberg, R., Zhu, Y. & Wang, Y. (2023). Generalized PTR: User-Friendly Recipes for Data-Adaptive Algorithms with Differential Privacy. Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 206:3977-4005. Available from https://proceedings.mlr.press/v206/redberg23a.html.