Collect at Once, Use Effectively: Making Non-interactive Locally Private Learning Possible

Kai Zheng, Wenlong Mou, Liwei Wang
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:4130-4139, 2017.

Abstract

Non-interactive Local Differential Privacy (LDP) requires data analysts to collect data from users through a noisy channel in a single round. In this paper, we extend the frontiers of non-interactive LDP learning and estimation in several directions. For learning with smooth generalized linear losses, we propose an approximate stochastic gradient oracle, estimated from the non-interactive LDP channel using Chebyshev expansion, which we combine with inexact gradient methods to obtain an efficient algorithm with a quasi-polynomial sample complexity bound. In the high-dimensional setting, we show that under an $\ell_2$-norm assumption on the data points, high-dimensional sparse linear regression and mean estimation can be achieved with only logarithmic dependence on the dimension, using random projection and approximate recovery. We also extend our methods to Kernel Ridge Regression. Our work is the first to make learning and estimation possible for a broad range of learning tasks under the non-interactive LDP model.
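The first ingredient the abstract names is a polynomial approximation of the loss derivative: once $\ell'(\langle w, x\rangle)$ is replaced by a truncated Chebyshev expansion, the gradient becomes a fixed linear combination of low-degree monomials of the data, which users can privatize once and release, letting the analyst evaluate an approximate gradient at any parameter $w$ with no further interaction. The following is a minimal sketch of that expansion step for the logistic loss, not the paper's exact construction; the bound B, the truncation degree, and the choice of loss are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

B = 2.0       # assumed bound on |<w, x>| (hypothetical choice)
DEGREE = 8    # truncation order of the Chebyshev expansion
M = 256       # number of nodes used for the fit

def logistic_loss_deriv(t, y=1.0):
    """Derivative of the logistic loss log(1 + exp(-y*t)) w.r.t. t."""
    return -y / (1.0 + np.exp(y * t))

# Fit a Chebyshev series on nodes of [-1, 1], rescaled to [-B, B].
u = np.cos(np.pi * (np.arange(M) + 0.5) / M)
coefs = cheb.chebfit(u, logistic_loss_deriv(B * u), DEGREE)

# The truncated series sum_j c_j T_j(t / B) is a degree-DEGREE
# polynomial in t = <w, x>.  Expanded in the monomial basis, the
# gradient l'(<w,x>) * x is therefore a fixed linear combination of
# low-degree monomials of (x, y) -- the quantities a user could
# privatize once under a non-interactive LDP channel.

grid = np.linspace(-B, B, 1001)
approx = cheb.chebval(grid / B, coefs)
err = np.max(np.abs(approx - logistic_loss_deriv(grid)))
print(f"max approximation error on [-{B}, {B}]: {err:.2e}")
```

Because the coefficients `coefs` are fixed before any data are seen, the only data-dependent quantities are the privatized monomials; the Chebyshev truncation degree is what drives the quasi-polynomial sample complexity mentioned above.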

Cite this Paper


BibTeX
@InProceedings{pmlr-v70-zheng17c,
  title     = {Collect at Once, Use Effectively: Making Non-interactive Locally Private Learning Possible},
  author    = {Kai Zheng and Wenlong Mou and Liwei Wang},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {4130--4139},
  year      = {2017},
  editor    = {Precup, Doina and Teh, Yee Whye},
  volume    = {70},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--11 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v70/zheng17c/zheng17c.pdf},
  url       = {https://proceedings.mlr.press/v70/zheng17c.html},
  abstract  = {Non-interactive Local Differential Privacy (LDP) requires data analysts to collect data from users through a noisy channel in a single round. In this paper, we extend the frontiers of non-interactive LDP learning and estimation in several directions. For learning with smooth generalized linear losses, we propose an approximate stochastic gradient oracle, estimated from the non-interactive LDP channel using Chebyshev expansion, which we combine with inexact gradient methods to obtain an efficient algorithm with a quasi-polynomial sample complexity bound. In the high-dimensional setting, we show that under an $\ell_2$-norm assumption on the data points, high-dimensional sparse linear regression and mean estimation can be achieved with only logarithmic dependence on the dimension, using random projection and approximate recovery. We also extend our methods to Kernel Ridge Regression. Our work is the first to make learning and estimation possible for a broad range of learning tasks under the non-interactive LDP model.}
}
Endnote
%0 Conference Paper
%T Collect at Once, Use Effectively: Making Non-interactive Locally Private Learning Possible
%A Kai Zheng
%A Wenlong Mou
%A Liwei Wang
%B Proceedings of the 34th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Doina Precup
%E Yee Whye Teh
%F pmlr-v70-zheng17c
%I PMLR
%P 4130--4139
%U https://proceedings.mlr.press/v70/zheng17c.html
%V 70
%X Non-interactive Local Differential Privacy (LDP) requires data analysts to collect data from users through a noisy channel in a single round. In this paper, we extend the frontiers of non-interactive LDP learning and estimation in several directions. For learning with smooth generalized linear losses, we propose an approximate stochastic gradient oracle, estimated from the non-interactive LDP channel using Chebyshev expansion, which we combine with inexact gradient methods to obtain an efficient algorithm with a quasi-polynomial sample complexity bound. In the high-dimensional setting, we show that under an $\ell_2$-norm assumption on the data points, high-dimensional sparse linear regression and mean estimation can be achieved with only logarithmic dependence on the dimension, using random projection and approximate recovery. We also extend our methods to Kernel Ridge Regression. Our work is the first to make learning and estimation possible for a broad range of learning tasks under the non-interactive LDP model.
APA
Zheng, K., Mou, W. & Wang, L. (2017). Collect at Once, Use Effectively: Making Non-interactive Locally Private Learning Possible. Proceedings of the 34th International Conference on Machine Learning, in Proceedings of Machine Learning Research 70:4130-4139. Available from https://proceedings.mlr.press/v70/zheng17c.html.

Related Material

Download PDF: http://proceedings.mlr.press/v70/zheng17c/zheng17c.pdf