Learning-augmented private algorithms for multiple quantile release

Mikhail Khodak, Kareem Amin, Travis Dick, Sergei Vassilvitskii
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:16344-16376, 2023.

Abstract

When applying differential privacy to sensitive data, we can often improve performance using external information such as other sensitive data, public data, or human priors. We propose to use the learning-augmented algorithms (or algorithms with predictions) framework—previously applied largely to improve time complexity or competitive ratios—as a powerful way of designing and analyzing privacy-preserving methods that can take advantage of such external information to improve utility. This idea is instantiated on the important task of multiple quantile release, for which we derive error guarantees that scale with a natural measure of prediction quality while (almost) recovering state-of-the-art prediction-independent guarantees. Our analysis enjoys several advantages, including minimal assumptions about the data, a natural way of adding robustness, and the provision of useful surrogate losses for two novel “meta” algorithms that learn predictions from other (potentially sensitive) data. We conclude with experiments on challenging tasks demonstrating that learning predictions across one or more instances can lead to large error reductions while preserving privacy.
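
For concreteness, below is a minimal Python sketch of the standard primitive underlying this line of work: releasing a single quantile under epsilon-differential privacy via the exponential mechanism. This is an illustration under stated assumptions, not the paper's algorithm; the function name private_quantile and the data bounds [lo, hi] are ours. Roughly, a prediction enters by replacing the uniform base measure over [lo, hi] with a prior concentrated near the predicted quantile, and the paper's guarantees bound error in terms of the quality of that prediction.

import numpy as np

def private_quantile(x, q, eps, lo, hi, rng=None):
    """Illustrative sketch, not the algorithm from the paper.

    Release an estimate of the q-quantile of x (values assumed in
    [lo, hi]) under eps-differential privacy via the exponential
    mechanism. The utility of an output o is -|#{x_i <= o} - q*n|,
    which has sensitivity 1, so weights are exp(eps * utility / 2).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.clip(np.sort(np.asarray(x, dtype=float)), lo, hi)
    n = len(x)
    # All outputs between the same pair of order statistics share a
    # utility, so sample an interval, then a uniform point inside it.
    edges = np.concatenate(([lo], x, [hi]))
    widths = np.diff(edges)                        # n + 1 intervals
    utilities = -np.abs(np.arange(n + 1) - q * n)
    with np.errstate(divide="ignore"):             # empty intervals get weight 0
        logw = np.log(widths) + 0.5 * eps * utilities
    probs = np.exp(logw - logw.max())              # stabilized softmax
    probs /= probs.sum()
    k = rng.choice(n + 1, p=probs)
    return rng.uniform(edges[k], edges[k + 1])

# Example: an eps = 1 private median of 1,000 points drawn from [0, 1].
# est = private_quantile(np.random.rand(1000), q=0.5, eps=1.0, lo=0.0, hi=1.0)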

Cite this Paper

BibTeX
@InProceedings{pmlr-v202-khodak23a,
  title     = {Learning-augmented private algorithms for multiple quantile release},
  author    = {Khodak, Mikhail and Amin, Kareem and Dick, Travis and Vassilvitskii, Sergei},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {16344--16376},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/khodak23a/khodak23a.pdf},
  url       = {https://proceedings.mlr.press/v202/khodak23a.html},
  abstract  = {When applying differential privacy to sensitive data, we can often improve performance using external information such as other sensitive data, public data, or human priors. We propose to use the learning-augmented algorithms (or algorithms with predictions) framework—previously applied largely to improve time complexity or competitive ratios—as a powerful way of designing and analyzing privacy-preserving methods that can take advantage of such external information to improve utility. This idea is instantiated on the important task of multiple quantile release, for which we derive error guarantees that scale with a natural measure of prediction quality while (almost) recovering state-of-the-art prediction-independent guarantees. Our analysis enjoys several advantages, including minimal assumptions about the data, a natural way of adding robustness, and the provision of useful surrogate losses for two novel “meta” algorithms that learn predictions from other (potentially sensitive) data. We conclude with experiments on challenging tasks demonstrating that learning predictions across one or more instances can lead to large error reductions while preserving privacy.}
}
Endnote
%0 Conference Paper
%T Learning-augmented private algorithms for multiple quantile release
%A Mikhail Khodak
%A Kareem Amin
%A Travis Dick
%A Sergei Vassilvitskii
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-khodak23a
%I PMLR
%P 16344--16376
%U https://proceedings.mlr.press/v202/khodak23a.html
%V 202
%X When applying differential privacy to sensitive data, we can often improve performance using external information such as other sensitive data, public data, or human priors. We propose to use the learning-augmented algorithms (or algorithms with predictions) framework—previously applied largely to improve time complexity or competitive ratios—as a powerful way of designing and analyzing privacy-preserving methods that can take advantage of such external information to improve utility. This idea is instantiated on the important task of multiple quantile release, for which we derive error guarantees that scale with a natural measure of prediction quality while (almost) recovering state-of-the-art prediction-independent guarantees. Our analysis enjoys several advantages, including minimal assumptions about the data, a natural way of adding robustness, and the provision of useful surrogate losses for two novel “meta” algorithms that learn predictions from other (potentially sensitive) data. We conclude with experiments on challenging tasks demonstrating that learning predictions across one or more instances can lead to large error reductions while preserving privacy.
APA
Khodak, M., Amin, K., Dick, T. & Vassilvitskii, S. (2023). Learning-augmented private algorithms for multiple quantile release. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:16344-16376. Available from https://proceedings.mlr.press/v202/khodak23a.html.
