(Nearly) Dimension Independent Private ERM with AdaGrad Rates via Publicly Estimated Subspaces

Peter Kairouz, Monica Ribero Diaz, Keith Rush, Abhradeep Thakurta
Proceedings of Thirty Fourth Conference on Learning Theory, PMLR 134:2717-2746, 2021.

Abstract

We revisit the problem of empirical risk minimization (ERM) with differential privacy. We show that noisy AdaGrad, given appropriate knowledge of and conditions on the subspace from which gradients can be drawn, achieves a regret comparable to traditional AdaGrad plus a well-controlled term due to noise. We show a convergence rate of $O(\tr(G_T)/T)$, where $G_T$ captures the geometry of the gradient subspace. Since $\tr(G_T)=O(\sqrt{T})$, we can obtain faster rates for convex and Lipschitz functions, compared to the $O(1/\sqrt{T})$ rate achieved by known versions of noisy (stochastic) gradient descent with comparable noise variance. In particular, we show that if the gradients lie in a known constant-rank subspace, and assuming algorithmic access to an envelope which bounds decaying sensitivity, one can achieve faster convergence to an excess empirical risk of $\tilde O(1/\epsilon n)$, where $\epsilon$ is the privacy budget and $n$ the number of samples. Letting $p$ be the problem dimension, this result implies that, by running noisy AdaGrad, we can bypass the DP-SGD bound $\tilde O(\sqrt{p}/\epsilon n)$ in $T=(\epsilon n)^{2/(1+2\alpha)}$ iterations, where $\alpha \geq 0$ is a parameter controlling gradient norm decay, instead of the $T=\epsilon^2 n^2$ iterations required by SGD. Our results operate with general convex functions in both constrained and unconstrained minimization. Along the way, we perform a perturbation analysis of noisy AdaGrad, which is of independent interest. Our utility guarantee for the private ERM problem follows as a corollary to the regret guarantee of noisy AdaGrad.
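To make the algorithmic setting concrete, below is a minimal Python sketch, written for this summary and not taken from the paper, of a noisy AdaGrad loop in the spirit described above: per-example gradients are clipped to bound sensitivity, averaged, perturbed with Gaussian noise, optionally projected onto a publicly estimated gradient subspace spanned by an orthonormal basis V, and fed into a diagonal AdaGrad update. The function name, the basis V, the clipping norm, and the noise multiplier are all illustrative assumptions; the paper analyzes the preconditioned (matrix) AdaGrad update and calibrates the noise for a formal privacy guarantee, neither of which this sketch reproduces.

import numpy as np

def noisy_adagrad(grad_fn, x0, V=None, steps=1000, lr=0.1,
                  clip_norm=1.0, noise_multiplier=1.0, eps=1e-8, seed=0):
    """Illustrative noisy (DP-style) AdaGrad loop.

    grad_fn(x) -> per-example gradients, shape (n, p)
    V          -> optional (p, k) orthonormal basis of a publicly estimated
                  gradient subspace (hypothetical input, not the paper's API)
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    h = np.zeros_like(x)  # accumulated squared (noisy) gradients, a diagonal G_t
    for _ in range(steps):
        per_ex = grad_fn(x)                                   # (n, p)
        norms = np.linalg.norm(per_ex, axis=1, keepdims=True)
        per_ex = per_ex * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
        g = per_ex.mean(axis=0)
        # Gaussian noise scaled to the clipped per-example sensitivity clip_norm / n.
        # A real (epsilon, delta)-DP guarantee would require a composition analysis
        # to choose noise_multiplier; this sketch does not perform that accounting.
        sigma = noise_multiplier * clip_norm / per_ex.shape[0]
        g_noisy = g + rng.normal(0.0, sigma, size=g.shape)
        if V is not None:                                     # restrict to subspace
            g_noisy = V @ (V.T @ g_noisy)
        h += g_noisy ** 2                                     # AdaGrad accumulator
        x -= lr * g_noisy / (np.sqrt(h) + eps)                # preconditioned step
    return x

For instance, with grad_fn returning per-example gradients of a convex loss and V an estimated top-k basis computed from public gradients, the loop above mimics the projected, privatized updates the abstract refers to, with the projection keeping the noise cost tied to the subspace rank rather than the ambient dimension $p$.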

Cite this Paper


BibTeX
@InProceedings{pmlr-v134-kairouz21a,
  title     = {(Nearly) Dimension Independent Private ERM with AdaGrad Rates via Publicly Estimated Subspaces},
  author    = {Kairouz, Peter and Diaz, Monica Ribero and Rush, Keith and Thakurta, Abhradeep},
  booktitle = {Proceedings of Thirty Fourth Conference on Learning Theory},
  pages     = {2717--2746},
  year      = {2021},
  editor    = {Belkin, Mikhail and Kpotufe, Samory},
  volume    = {134},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--19 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v134/kairouz21a/kairouz21a.pdf},
  url       = {https://proceedings.mlr.press/v134/kairouz21a.html},
  abstract  = {We revisit the problem of empirical risk minimization (ERM) with differential privacy. We show that noisy AdaGrad, given appropriate knowledge of and conditions on the subspace from which gradients can be drawn, achieves a regret comparable to traditional AdaGrad plus a well-controlled term due to noise. We show a convergence rate of $O(\tr(G_T)/T)$, where $G_T$ captures the geometry of the gradient subspace. Since $\tr(G_T)=O(\sqrt{T})$, we can obtain faster rates for convex and Lipschitz functions, compared to the $O(1/\sqrt{T})$ rate achieved by known versions of noisy (stochastic) gradient descent with comparable noise variance. In particular, we show that if the gradients lie in a known constant-rank subspace, and assuming algorithmic access to an envelope which bounds decaying sensitivity, one can achieve faster convergence to an excess empirical risk of $\tilde O(1/\epsilon n)$, where $\epsilon$ is the privacy budget and $n$ the number of samples. Letting $p$ be the problem dimension, this result implies that, by running noisy AdaGrad, we can bypass the DP-SGD bound $\tilde O(\sqrt{p}/\epsilon n)$ in $T=(\epsilon n)^{2/(1+2\alpha)}$ iterations, where $\alpha \geq 0$ is a parameter controlling gradient norm decay, instead of the $T=\epsilon^2 n^2$ iterations required by SGD. Our results operate with general convex functions in both constrained and unconstrained minimization. Along the way, we perform a perturbation analysis of noisy AdaGrad, which is of independent interest. Our utility guarantee for the private ERM problem follows as a corollary to the regret guarantee of noisy AdaGrad.}
}
Endnote
%0 Conference Paper
%T (Nearly) Dimension Independent Private ERM with AdaGrad Rates via Publicly Estimated Subspaces
%A Peter Kairouz
%A Monica Ribero Diaz
%A Keith Rush
%A Abhradeep Thakurta
%B Proceedings of Thirty Fourth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2021
%E Mikhail Belkin
%E Samory Kpotufe
%F pmlr-v134-kairouz21a
%I PMLR
%P 2717--2746
%U https://proceedings.mlr.press/v134/kairouz21a.html
%V 134
%X We revisit the problem of empirical risk minimization (ERM) with differential privacy. We show that noisy AdaGrad, given appropriate knowledge of and conditions on the subspace from which gradients can be drawn, achieves a regret comparable to traditional AdaGrad plus a well-controlled term due to noise. We show a convergence rate of $O(\tr(G_T)/T)$, where $G_T$ captures the geometry of the gradient subspace. Since $\tr(G_T)=O(\sqrt{T})$, we can obtain faster rates for convex and Lipschitz functions, compared to the $O(1/\sqrt{T})$ rate achieved by known versions of noisy (stochastic) gradient descent with comparable noise variance. In particular, we show that if the gradients lie in a known constant-rank subspace, and assuming algorithmic access to an envelope which bounds decaying sensitivity, one can achieve faster convergence to an excess empirical risk of $\tilde O(1/\epsilon n)$, where $\epsilon$ is the privacy budget and $n$ the number of samples. Letting $p$ be the problem dimension, this result implies that, by running noisy AdaGrad, we can bypass the DP-SGD bound $\tilde O(\sqrt{p}/\epsilon n)$ in $T=(\epsilon n)^{2/(1+2\alpha)}$ iterations, where $\alpha \geq 0$ is a parameter controlling gradient norm decay, instead of the $T=\epsilon^2 n^2$ iterations required by SGD. Our results operate with general convex functions in both constrained and unconstrained minimization. Along the way, we perform a perturbation analysis of noisy AdaGrad, which is of independent interest. Our utility guarantee for the private ERM problem follows as a corollary to the regret guarantee of noisy AdaGrad.
APA
Kairouz, P., Diaz, M.R., Rush, K. & Thakurta, A. (2021). (Nearly) Dimension Independent Private ERM with AdaGrad Rates via Publicly Estimated Subspaces. Proceedings of Thirty Fourth Conference on Learning Theory, in Proceedings of Machine Learning Research 134:2717-2746. Available from https://proceedings.mlr.press/v134/kairouz21a.html.
