Old Techniques in Differentially Private Linear Regression

Or Sheffet
Proceedings of the 30th International Conference on Algorithmic Learning Theory, PMLR 98:789-827, 2019.

Abstract

We introduce three novel differentially private algorithms that approximate the $2^{\rm nd}$-moment matrix of the data. In contrast to existing algorithms, they always output positive-definite matrices, and each corresponds to an established technique in the linear regression literature. These techniques therefore have an immediate interpretation, and all known results about them apply directly to the outputs of our algorithms. More specifically, we discuss the following three techniques. (i) For Ridge Regression, we propose setting the regularization coefficient so that approximating the solution via a Johnson-Lindenstrauss transform preserves privacy. (ii) We show that adding a batch of $d+O(\epsilon^{-2})$ random samples to our data preserves differential privacy. (iii) We show that sampling the $2^{\rm nd}$-moment matrix from a Bayesian posterior inverse-Wishart distribution is differentially private. We also give utility bounds for our algorithms and compare them with the existing “Analyze Gauss” algorithm of Dwork et al.
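
To make the constructions concrete, below is a minimal NumPy/SciPy sketch of techniques (ii) and (iii). It is a sketch under stated assumptions: the batch-size constant, the distribution of the appended rows, and the inverse-Wishart prior and degrees of freedom are illustrative placeholders, not the calibrated values the paper derives for its privacy guarantees.

import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(0)

# Toy data: n rows in d dimensions, each row rescaled to norm <= 1
# (a common normalization in the DP linear-regression literature).
n, d = 1000, 5
X = rng.standard_normal((n, d))
X /= np.maximum(1.0, np.linalg.norm(X, axis=1, keepdims=True))

eps = 1.0  # privacy parameter (illustrative)

# Technique (ii), sketched: append a batch of d + O(eps^-2) random rows
# before forming the 2nd-moment matrix. The constant 8 and the Gaussian
# row distribution are placeholders; the paper derives the exact batch
# size and calibration needed for the privacy guarantee.
batch = d + int(np.ceil(8.0 / eps**2))
Z = rng.standard_normal((batch, d)) / np.sqrt(d)
Xz = np.vstack([X, Z])
moment_batch = Xz.T @ Xz  # positive-definite almost surely

# Technique (iii), sketched: sample the 2nd-moment matrix from a Bayesian
# inverse-Wishart posterior centered at the prior scale plus X^T X. The
# prior scale and degrees of freedom here are illustrative, not the
# calibrated values from the paper's privacy analysis.
prior_scale = np.eye(d)
posterior = invwishart(df=d + n, scale=prior_scale + X.T @ X)
moment_iw = posterior.rvs(random_state=rng)  # positive-definite by construction

Either output is positive-definite, so a downstream regression solve (e.g., np.linalg.solve against $X^T y$) is always well defined, which is exactly the property the abstract contrasts with prior algorithms.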

Cite this Paper


BibTeX
@InProceedings{pmlr-v98-sheffet19a,
  title     = {Old Techniques in Differentially Private Linear Regression},
  author    = {Sheffet, Or},
  booktitle = {Proceedings of the 30th International Conference on Algorithmic Learning Theory},
  pages     = {789--827},
  year      = {2019},
  editor    = {Garivier, Aurélien and Kale, Satyen},
  volume    = {98},
  series    = {Proceedings of Machine Learning Research},
  month     = {22--24 Mar},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v98/sheffet19a/sheffet19a.pdf},
  url       = {https://proceedings.mlr.press/v98/sheffet19a.html}
}
Endnote
%0 Conference Paper
%T Old Techniques in Differentially Private Linear Regression
%A Or Sheffet
%B Proceedings of the 30th International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2019
%E Aurélien Garivier
%E Satyen Kale
%F pmlr-v98-sheffet19a
%I PMLR
%P 789--827
%U https://proceedings.mlr.press/v98/sheffet19a.html
%V 98
APA
Sheffet, O. (2019). Old Techniques in Differentially Private Linear Regression. Proceedings of the 30th International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 98:789-827. Available from https://proceedings.mlr.press/v98/sheffet19a.html.
