Improved Rates for Differentially Private Stochastic Convex Optimization with Heavy-Tailed Data

Gautam Kamath, Xingtu Liu, Huanyu Zhang
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:10633-10660, 2022.

Abstract

We study stochastic convex optimization with heavy-tailed data under the constraint of differential privacy (DP). Most prior work on this problem is restricted to the case where the loss function is Lipschitz. Instead, as introduced by Wang, Xiao, Devadas, and Xu (2020), we study general convex loss functions with the assumption that the distribution of gradients has bounded $k$-th moments. We provide improved upper bounds on the excess population risk under concentrated DP for convex and strongly convex loss functions. Along the way, we derive new algorithms for private mean estimation of heavy-tailed distributions, under both pure and concentrated DP. Finally, we prove nearly-matching lower bounds for private stochastic convex optimization with strongly convex losses and mean estimation, showing new separations between pure and concentrated DP.
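
For context, the setting can be written down as follows; this is a sketch of the standard formulation, and the paper's exact normalization of the moment bound and its privacy accounting may differ. The goal is to privately minimize the population risk
\[
L(\theta) \;:=\; \mathbb{E}_{x \sim \mathcal{D}}\big[\ell(\theta; x)\big], \qquad \theta \in \mathcal{C},
\]
and the error of an output $\hat{\theta}$ is measured by the excess population risk $L(\hat{\theta}) - \min_{\theta \in \mathcal{C}} L(\theta)$. The heavy-tailed assumption replaces Lipschitzness with a bound on the $k$-th moment of each gradient coordinate, e.g.
\[
\mathbb{E}_{x \sim \mathcal{D}}\Big[\big|\nabla_j \ell(\theta; x) - \mathbb{E}_{x \sim \mathcal{D}}[\nabla_j \ell(\theta; x)]\big|^k\Big] \;\le\; 1 \quad \text{for all coordinates } j \text{ and all } \theta \in \mathcal{C}.
\]
Concentrated DP here refers to zero-concentrated DP ($\rho$-zCDP): a randomized algorithm $M$ satisfies $\rho$-zCDP if, for all neighboring datasets $D \sim D'$ and all $\alpha > 1$, the Rényi divergence obeys $D_\alpha\big(M(D) \,\|\, M(D')\big) \le \rho\,\alpha$.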

Cite this Paper

BibTeX
@InProceedings{pmlr-v162-kamath22a,
  title     = {Improved Rates for Differentially Private Stochastic Convex Optimization with Heavy-Tailed Data},
  author    = {Kamath, Gautam and Liu, Xingtu and Zhang, Huanyu},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {10633--10660},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/kamath22a/kamath22a.pdf},
  url       = {https://proceedings.mlr.press/v162/kamath22a.html},
  abstract  = {We study stochastic convex optimization with heavy-tailed data under the constraint of differential privacy (DP). Most prior work on this problem is restricted to the case where the loss function is Lipschitz. Instead, as introduced by Wang, Xiao, Devadas, and Xu \cite{WangXDX20}, we study general convex loss functions with the assumption that the distribution of gradients has bounded $k$-th moments. We provide improved upper bounds on the excess population risk under concentrated DP for convex and strongly convex loss functions. Along the way, we derive new algorithms for private mean estimation of heavy-tailed distributions, under both pure and concentrated DP. Finally, we prove nearly-matching lower bounds for private stochastic convex optimization with strongly convex losses and mean estimation, showing new separations between pure and concentrated DP.}
}
Endnote
%0 Conference Paper
%T Improved Rates for Differentially Private Stochastic Convex Optimization with Heavy-Tailed Data
%A Gautam Kamath
%A Xingtu Liu
%A Huanyu Zhang
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-kamath22a
%I PMLR
%P 10633--10660
%U https://proceedings.mlr.press/v162/kamath22a.html
%V 162
%X We study stochastic convex optimization with heavy-tailed data under the constraint of differential privacy (DP). Most prior work on this problem is restricted to the case where the loss function is Lipschitz. Instead, as introduced by Wang, Xiao, Devadas, and Xu \cite{WangXDX20}, we study general convex loss functions with the assumption that the distribution of gradients has bounded $k$-th moments. We provide improved upper bounds on the excess population risk under concentrated DP for convex and strongly convex loss functions. Along the way, we derive new algorithms for private mean estimation of heavy-tailed distributions, under both pure and concentrated DP. Finally, we prove nearly-matching lower bounds for private stochastic convex optimization with strongly convex losses and mean estimation, showing new separations between pure and concentrated DP.
APA
Kamath, G., Liu, X. & Zhang, H. (2022). Improved Rates for Differentially Private Stochastic Convex Optimization with Heavy-Tailed Data. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:10633-10660. Available from https://proceedings.mlr.press/v162/kamath22a.html.
