A Private and Computationally-Efficient Estimator for Unbounded Gaussians

Gautam Kamath, Argyris Mouzakis, Vikrant Singhal, Thomas Steinke, Jonathan Ullman
Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:544-572, 2022.

Abstract

We give the first polynomial-time, polynomial-sample, differentially private estimator for the mean and covariance of an arbitrary Gaussian distribution N(μ,Σ) in ℝᵈ. All previous estimators are either nonconstructive, with unbounded running time, or require the user to specify a priori bounds on the parameters μ and Σ. The primary new technical tool in our algorithm is a new differentially private preconditioner that takes samples from an arbitrary Gaussian N(0,Σ) and returns a matrix A such that AΣAᵀ has constant condition number.
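To illustrate what the preconditioner in the abstract achieves, the sketch below shows the *non-private* analogue: estimating A ≈ Σ^(−1/2) from samples so that AΣAᵀ has condition number close to 1. This is only an illustration of the preconditioning goal; the paper's contribution is accomplishing this under differential privacy, which this sketch does not attempt.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 10_000

# An arbitrary positive-definite covariance Sigma (possibly ill-conditioned).
B = rng.standard_normal((d, d))
Sigma = B @ B.T + 0.01 * np.eye(d)

# Draw n samples from N(0, Sigma).
X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)

# Non-private preconditioner: inverse square root of the empirical covariance.
S = X.T @ X / n
evals, evecs = np.linalg.eigh(S)
A = evecs @ np.diag(evals ** -0.5) @ evecs.T

# A Sigma A^T is approximately the identity, so its condition number is near 1,
# even if Sigma itself was badly conditioned.
M = A @ Sigma @ A.T
print(np.linalg.cond(Sigma), np.linalg.cond(M))
```

With enough samples the empirical covariance concentrates around Σ, so AΣAᵀ ≈ I. The technical challenge addressed in the paper is producing such an A privately, without a priori bounds on Σ.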

Cite this Paper


BibTeX
@InProceedings{pmlr-v178-kamath22a,
  title     = {A Private and Computationally-Efficient Estimator for Unbounded Gaussians},
  author    = {Kamath, Gautam and Mouzakis, Argyris and Singhal, Vikrant and Steinke, Thomas and Ullman, Jonathan},
  booktitle = {Proceedings of Thirty Fifth Conference on Learning Theory},
  pages     = {544--572},
  year      = {2022},
  editor    = {Loh, Po-Ling and Raginsky, Maxim},
  volume    = {178},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--05 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v178/kamath22a/kamath22a.pdf},
  url       = {https://proceedings.mlr.press/v178/kamath22a.html},
  abstract  = {We give the first polynomial-time, polynomial-sample, differentially private estimator for the mean and covariance of an arbitrary Gaussian distribution $N(\mu,\Sigma)$ in $\R^d$. All previous estimators are either nonconstructive, with unbounded running time, or require the user to specify a priori bounds on the parameters $\mu$ and $\Sigma$. The primary new technical tool in our algorithm is a new differentially private preconditioner that takes samples from an arbitrary Gaussian $N(0,\Sigma)$ and returns a matrix $A$ such that $A \Sigma A^T$ has constant condition number.}
}
Endnote
%0 Conference Paper
%T A Private and Computationally-Efficient Estimator for Unbounded Gaussians
%A Gautam Kamath
%A Argyris Mouzakis
%A Vikrant Singhal
%A Thomas Steinke
%A Jonathan Ullman
%B Proceedings of Thirty Fifth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2022
%E Po-Ling Loh
%E Maxim Raginsky
%F pmlr-v178-kamath22a
%I PMLR
%P 544--572
%U https://proceedings.mlr.press/v178/kamath22a.html
%V 178
%X We give the first polynomial-time, polynomial-sample, differentially private estimator for the mean and covariance of an arbitrary Gaussian distribution $N(\mu,\Sigma)$ in $\R^d$. All previous estimators are either nonconstructive, with unbounded running time, or require the user to specify a priori bounds on the parameters $\mu$ and $\Sigma$. The primary new technical tool in our algorithm is a new differentially private preconditioner that takes samples from an arbitrary Gaussian $N(0,\Sigma)$ and returns a matrix $A$ such that $A \Sigma A^T$ has constant condition number.
APA
Kamath, G., Mouzakis, A., Singhal, V., Steinke, T. & Ullman, J. (2022). A Private and Computationally-Efficient Estimator for Unbounded Gaussians. Proceedings of Thirty Fifth Conference on Learning Theory, in Proceedings of Machine Learning Research 178:544-572. Available from https://proceedings.mlr.press/v178/kamath22a.html.