On User-Level Private Convex Optimization

Badih Ghazi, Pritish Kamath, Ravi Kumar, Pasin Manurangsi, Raghu Meka, Chiyuan Zhang
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:11283-11299, 2023.

Abstract

We introduce a new mechanism for stochastic convex optimization (SCO) with user-level differential privacy guarantees. The convergence rates of this mechanism are similar to those in the prior work of Levy et al. 2021 and Narayanan et al. 2022, but with two important improvements. Our mechanism does not require any smoothness assumptions on the loss. Furthermore, our bounds are also the first where the minimum number of users needed for user-level privacy has no dependence on the dimension and only a logarithmic dependence on the desired excess error. The main idea underlying the new mechanism is to show that the optimizers of strongly convex losses have low local deletion sensitivity, along with a new output perturbation method for functions with low local deletion sensitivity, which could be of independent interest.
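The abstract only sketches the idea, so the snippet below gives a rough, generic illustration of the output-perturbation template that sensitivity-based mechanisms of this kind build on: minimize a strongly convex empirical objective, then release the minimizer with Gaussian noise calibrated to a sensitivity bound. This is a minimal sketch under assumed conditions (a squared-loss objective, the classical item-level sensitivity bound 2G/(λn) for a G-Lipschitz, λ-strongly-convex objective, and standard Gaussian-mechanism calibration); it is not the paper's user-level mechanism, which instead relies on local deletion sensitivity, and all function names and parameters here are illustrative.

```python
import numpy as np

def erm_strongly_convex(X, y, lam, steps=500):
    """Minimize (1/(2n))||Xw - y||^2 + (lam/2)||w||^2 by gradient descent."""
    n, d = X.shape
    # Step size 1/L, where L is the smoothness constant of this quadratic objective.
    L = np.linalg.norm(X, 2) ** 2 / n + lam
    w = np.zeros(d)
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / n + lam * w
        w -= grad / L
    return w

def output_perturbation(X, y, lam, eps, delta, lipschitz=1.0):
    """Release the minimizer with Gaussian noise scaled to a sensitivity bound.

    Assumes the per-example loss is `lipschitz`-Lipschitz and the objective is
    lam-strongly convex, so replacing one example moves the minimizer by at
    most 2 * lipschitz / (lam * n) (the classical output-perturbation bound).
    """
    n, d = X.shape
    w = erm_strongly_convex(X, y, lam)
    sensitivity = 2.0 * lipschitz / (lam * n)
    # Standard Gaussian-mechanism calibration for (eps, delta)-DP with eps <= 1.
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return w + np.random.normal(scale=sigma, size=d)
```

The point of the sketch is the structure the abstract alludes to: strong convexity keeps the optimizer's sensitivity small, which makes additive-noise release feasible; the paper's contribution is a sharper, user-level version of this argument via local deletion sensitivity and a new perturbation method.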

Cite this Paper

BibTeX
@InProceedings{pmlr-v202-ghazi23a,
  title     = {On User-Level Private Convex Optimization},
  author    = {Ghazi, Badih and Kamath, Pritish and Kumar, Ravi and Manurangsi, Pasin and Meka, Raghu and Zhang, Chiyuan},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {11283--11299},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/ghazi23a/ghazi23a.pdf},
  url       = {https://proceedings.mlr.press/v202/ghazi23a.html},
  abstract  = {We introduce a new mechanism for stochastic convex optimization (SCO) with user-level differential privacy guarantees. The convergence rates of this mechanism are similar to those in the prior work of Levy et al. 2021 and Narayanan et al. 2022, but with two important improvements. Our mechanism does not require any smoothness assumptions on the loss. Furthermore, our bounds are also the first where the minimum number of users needed for user-level privacy has no dependence on the dimension and only a logarithmic dependence on the desired excess error. The main idea underlying the new mechanism is to show that the optimizers of strongly convex losses have low local deletion sensitivity, along with a new output perturbation method for functions with low local deletion sensitivity, which could be of independent interest.}
}
Endnote
%0 Conference Paper
%T On User-Level Private Convex Optimization
%A Badih Ghazi
%A Pritish Kamath
%A Ravi Kumar
%A Pasin Manurangsi
%A Raghu Meka
%A Chiyuan Zhang
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-ghazi23a
%I PMLR
%P 11283--11299
%U https://proceedings.mlr.press/v202/ghazi23a.html
%V 202
%X We introduce a new mechanism for stochastic convex optimization (SCO) with user-level differential privacy guarantees. The convergence rates of this mechanism are similar to those in the prior work of Levy et al. 2021 and Narayanan et al. 2022, but with two important improvements. Our mechanism does not require any smoothness assumptions on the loss. Furthermore, our bounds are also the first where the minimum number of users needed for user-level privacy has no dependence on the dimension and only a logarithmic dependence on the desired excess error. The main idea underlying the new mechanism is to show that the optimizers of strongly convex losses have low local deletion sensitivity, along with a new output perturbation method for functions with low local deletion sensitivity, which could be of independent interest.
APA
Ghazi, B., Kamath, P., Kumar, R., Manurangsi, P., Meka, R. & Zhang, C. (2023). On User-Level Private Convex Optimization. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:11283-11299. Available from https://proceedings.mlr.press/v202/ghazi23a.html.