(Near) Dimension Independent Risk Bounds for Differentially Private Learning

Prateek Jain, Abhradeep Guha Thakurta
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(1):476-484, 2014.

Abstract

In this paper, we study the problem of differentially private risk minimization, where the goal is to provide differentially private algorithms that have small excess risk. In particular, we address the following open problem: is it possible to design computationally efficient differentially private risk minimizers with excess risk bounds that do not explicitly depend on the dimensionality (p) and do not require structural assumptions like restricted strong convexity? We answer this question in the affirmative for variants of the well-known output and objective perturbation algorithms [Chaudhuri et al., 2011]. In particular, we show that in the generalized linear model, variants of both the output and objective perturbation algorithms have no explicit dependence on p. Our results assume that the underlying loss function is a 1-Lipschitz convex function, and we show that the excess risk depends only on the L_2 norm of the true risk minimizer and that of the training points. Next, we present a novel privacy-preserving algorithm for risk minimization over the simplex in the generalized linear model, where the loss function is a doubly differentiable convex function. Assuming that the training points have bounded L_∞ norm, our algorithm provides a risk bound with only logarithmic dependence on p. We also apply our technique to the online learning setting and obtain a regret bound with a similar logarithmic dependence on p. In contrast, existing differentially private online learning methods incur an O(√p) dependence.
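
As background for the abstract, the following is a minimal sketch of the generic output perturbation recipe for private ERM, in the spirit of Chaudhuri et al. (2011): solve a regularized empirical risk minimization problem, then add noise calibrated to the sensitivity of its minimizer. This is not the variant analyzed in the paper; the function name, the logistic loss, and all parameter choices below are illustrative assumptions.

# Sketch of output perturbation for private ERM (Chaudhuri et al., 2011 style).
# Illustrative only; not the paper's variant.
import numpy as np

def private_erm_output_perturbation(X, y, lam=0.1, epsilon=1.0, n_steps=500, lr=0.1, rng=None):
    """X: (n, p) features, rows assumed to have L2 norm <= 1;
    y: labels in {-1, +1}; lam: L2 regularization; epsilon: privacy budget."""
    rng = np.random.default_rng() if rng is None else rng
    n, p = X.shape
    w = np.zeros(p)
    # Solve the L2-regularized ERM problem (logistic loss) by gradient descent.
    for _ in range(n_steps):
        margins = y * (X @ w)
        grad = -(X * (y / (1.0 + np.exp(margins)))[:, None]).mean(axis=0) + lam * w
        w -= lr * grad
    # For a 1-Lipschitz loss and ||x||_2 <= 1, the L2 sensitivity of the
    # regularized minimizer is at most 2 / (n * lam). Add noise with a
    # Gamma-distributed norm in a uniformly random direction for
    # epsilon-differential privacy.
    sensitivity = 2.0 / (n * lam)
    direction = rng.normal(size=p)
    direction /= np.linalg.norm(direction)
    noise_magnitude = rng.gamma(shape=p, scale=sensitivity / epsilon)
    return w + noise_magnitude * direction

The noise scale here is set by the minimizer's sensitivity divided by the privacy budget; the standard analysis of this baseline recipe yields excess risk bounds that grow with p. The paper's contribution, per the abstract, is to show that suitable variants of output and objective perturbation admit excess risk bounds with no explicit dependence on p.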

Cite this Paper


BibTeX
@InProceedings{pmlr-v32-jain14,
  title     = {(Near) Dimension Independent Risk Bounds for Differentially Private Learning},
  author    = {Jain, Prateek and Thakurta, Abhradeep Guha},
  booktitle = {Proceedings of the 31st International Conference on Machine Learning},
  pages     = {476--484},
  year      = {2014},
  editor    = {Xing, Eric P. and Jebara, Tony},
  volume    = {32},
  number    = {1},
  series    = {Proceedings of Machine Learning Research},
  address   = {Bejing, China},
  month     = {22--24 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v32/jain14.pdf},
  url       = {https://proceedings.mlr.press/v32/jain14.html}
}
Endnote
%0 Conference Paper
%T (Near) Dimension Independent Risk Bounds for Differentially Private Learning
%A Prateek Jain
%A Abhradeep Guha Thakurta
%B Proceedings of the 31st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2014
%E Eric P. Xing
%E Tony Jebara
%F pmlr-v32-jain14
%I PMLR
%P 476--484
%U https://proceedings.mlr.press/v32/jain14.html
%V 32
%N 1
RIS
TY - CPAPER
TI - (Near) Dimension Independent Risk Bounds for Differentially Private Learning
AU - Prateek Jain
AU - Abhradeep Guha Thakurta
BT - Proceedings of the 31st International Conference on Machine Learning
DA - 2014/01/27
ED - Eric P. Xing
ED - Tony Jebara
ID - pmlr-v32-jain14
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 32
IS - 1
SP - 476
EP - 484
L1 - http://proceedings.mlr.press/v32/jain14.pdf
UR - https://proceedings.mlr.press/v32/jain14.html
ER -
APA
Jain, P. & Thakurta, A.G. (2014). (Near) Dimension Independent Risk Bounds for Differentially Private Learning. Proceedings of the 31st International Conference on Machine Learning, in Proceedings of Machine Learning Research 32(1):476-484. Available from https://proceedings.mlr.press/v32/jain14.html.
