A Pretty Fast Algorithm for Adaptive Private Mean Estimation

Rohith Kuditipudi, John Duchi, Saminul Haque
Proceedings of Thirty Sixth Conference on Learning Theory, PMLR 195:2511-2551, 2023.

Abstract

We design an $(\varepsilon, \delta)$-differentially private algorithm to estimate the mean of a $d$-variate distribution, with unknown covariance $\Sigma$, that is adaptive to $\Sigma$. To within polylogarithmic factors, the estimator achieves optimal rates of convergence with respect to the induced Mahalanobis norm $||\cdot||_\Sigma$, takes time $\tilde{O}(n d^2)$ to compute, has near linear sample complexity for sub-Gaussian distributions, allows $\Sigma$ to be degenerate or low rank, and adaptively extends beyond sub-Gaussianity. Prior to this work, other methods required exponential computation time or the superlinear scaling $n = \Omega(d^{3/2})$ to achieve non-trivial error with respect to the norm $||\cdot||_\Sigma$.
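To make the setting concrete, here is a minimal sketch of the *baseline* approach the paper improves upon: a clip-and-noise Gaussian-mechanism mean estimator, with error measured in the Mahalanobis norm $||\cdot||_\Sigma$. This is an illustration only, not the paper's adaptive algorithm; the function names, the clipping radius, and the replace-one sensitivity convention are assumptions of this sketch. Note the pseudoinverse in the norm, which accommodates the degenerate or low-rank $\Sigma$ the abstract mentions.

```python
import numpy as np

def mahalanobis_norm(v, Sigma):
    # ||v||_Sigma = sqrt(v^T Sigma^+ v); the Moore-Penrose pseudoinverse
    # keeps this well defined when Sigma is degenerate or low rank.
    return float(np.sqrt(v @ np.linalg.pinv(Sigma) @ v))

def dp_mean_gaussian(X, eps, delta, clip_norm, rng):
    # Baseline (eps, delta)-DP mean (NOT the paper's adaptive estimator):
    # clip each sample to Euclidean norm clip_norm, average, and add
    # Gaussian noise calibrated to the mean's L2 sensitivity. Because the
    # noise is isotropic, its error is naturally controlled in ||.||_2,
    # not in the covariance-adapted norm ||.||_Sigma.
    n, d = X.shape
    norms = np.maximum(np.linalg.norm(X, axis=1), 1e-12)
    clipped = X * np.minimum(1.0, clip_norm / norms)[:, None]
    sensitivity = 2.0 * clip_norm / n  # replace-one neighboring datasets
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return clipped.mean(axis=0) + rng.normal(scale=sigma, size=d)

# Usage on an anisotropic zero-mean Gaussian (illustrative parameters):
rng = np.random.default_rng(0)
Sigma = np.diag([1.0, 0.01])
X = rng.multivariate_normal(np.zeros(2), Sigma, size=5000)
est = dp_mean_gaussian(X, eps=1.0, delta=1e-5, clip_norm=5.0, rng=rng)
err = mahalanobis_norm(est, Sigma)  # Mahalanobis error of the estimate
```

Because the isotropic noise is not adapted to $\Sigma$, its contribution to the Mahalanobis error blows up along low-variance directions; the paper's estimator avoids this by adapting to the unknown covariance while remaining computable in $\tilde{O}(nd^2)$ time.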

Cite this Paper


BibTeX
@InProceedings{pmlr-v195-kuditipudi23a,
  title     = {A Pretty Fast Algorithm for Adaptive Private Mean Estimation},
  author    = {Kuditipudi, Rohith and Duchi, John and Haque, Saminul},
  booktitle = {Proceedings of Thirty Sixth Conference on Learning Theory},
  pages     = {2511--2551},
  year      = {2023},
  editor    = {Neu, Gergely and Rosasco, Lorenzo},
  volume    = {195},
  series    = {Proceedings of Machine Learning Research},
  month     = {12--15 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v195/kuditipudi23a/kuditipudi23a.pdf},
  url       = {https://proceedings.mlr.press/v195/kuditipudi23a.html},
  abstract  = {We design an $(\varepsilon, \delta)$-differentially private algorithm to estimate the mean of a $d$-variate distribution, with unknown covariance $\Sigma$, that is adaptive to $\Sigma$. To within polylogarithmic factors, the estimator achieves optimal rates of convergence with respect to the induced Mahalanobis norm $||\cdot||_\Sigma$, takes time $\tilde{O}(n d^2)$ to compute, has near linear sample complexity for sub-Gaussian distributions, allows $\Sigma$ to be degenerate or low rank, and adaptively extends beyond sub-Gaussianity. Prior to this work, other methods required exponential computation time or the superlinear scaling $n = \Omega(d^{3/2})$ to achieve non-trivial error with respect to the norm $||\cdot||_\Sigma$.}
}
Endnote
%0 Conference Paper
%T A Pretty Fast Algorithm for Adaptive Private Mean Estimation
%A Rohith Kuditipudi
%A John Duchi
%A Saminul Haque
%B Proceedings of Thirty Sixth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2023
%E Gergely Neu
%E Lorenzo Rosasco
%F pmlr-v195-kuditipudi23a
%I PMLR
%P 2511--2551
%U https://proceedings.mlr.press/v195/kuditipudi23a.html
%V 195
%X We design an $(\varepsilon, \delta)$-differentially private algorithm to estimate the mean of a $d$-variate distribution, with unknown covariance $\Sigma$, that is adaptive to $\Sigma$. To within polylogarithmic factors, the estimator achieves optimal rates of convergence with respect to the induced Mahalanobis norm $||\cdot||_\Sigma$, takes time $\tilde{O}(n d^2)$ to compute, has near linear sample complexity for sub-Gaussian distributions, allows $\Sigma$ to be degenerate or low rank, and adaptively extends beyond sub-Gaussianity. Prior to this work, other methods required exponential computation time or the superlinear scaling $n = \Omega(d^{3/2})$ to achieve non-trivial error with respect to the norm $||\cdot||_\Sigma$.
APA
Kuditipudi, R., Duchi, J. &amp; Haque, S. (2023). A Pretty Fast Algorithm for Adaptive Private Mean Estimation. Proceedings of Thirty Sixth Conference on Learning Theory, in Proceedings of Machine Learning Research 195:2511-2551. Available from https://proceedings.mlr.press/v195/kuditipudi23a.html.

Related Material