Privately Learning Markov Random Fields

Huanyu Zhang, Gautam Kamath, Janardhan Kulkarni, Steven Wu
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:11129-11140, 2020.

Abstract

We consider the problem of learning Markov Random Fields (including the prototypical example, the Ising model) under the constraint of differential privacy. Our learning goals include both \emph{structure learning}, where we try to estimate the underlying graph structure of the model, and the harder goal of \emph{parameter learning}, in which we additionally estimate the parameter on each edge. We provide algorithms and lower bounds for both problems under a variety of privacy constraints, namely pure, concentrated, and approximate differential privacy. While both learning goals enjoy roughly the same sample complexity in the non-private setting, we show that this is not the case under differential privacy. In particular, only structure learning under approximate differential privacy retains the non-private logarithmic dependence on the dimensionality of the data; changing either the learning goal or the privacy notion necessitates a polynomial dependence. Consequently, the privacy constraint imposes a strong separation between these two learning problems in the high-dimensional regime.
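As context for the abstract's terminology, here is a standard formulation of the pairwise Ising model and of differential privacy (these are the usual textbook definitions, not excerpted from the paper itself). An Ising model on $p$ binary variables $x \in \{-1,+1\}^p$, with symmetric weight matrix $A$ and external field $\theta$, has density

\[
p_{A,\theta}(x) \;\propto\; \exp\!\Big(\sum_{i<j} A_{ij}\, x_i x_j \;+\; \sum_{i} \theta_i\, x_i\Big).
\]

Structure learning asks to recover the edge set $\{(i,j) : A_{ij} \neq 0\}$, while parameter learning additionally requires estimating each nonzero $A_{ij}$. A randomized algorithm $M$ satisfies $(\varepsilon,\delta)$-differential privacy if, for all datasets $X, X'$ differing in a single record and all events $S$,

\[
\Pr[M(X) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(X') \in S] \;+\; \delta.
\]

Setting $\delta = 0$ gives pure differential privacy; $\delta > 0$ gives approximate differential privacy, the regime in which the paper shows structure learning stays tractable in high dimensions.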

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-zhang20l,
  title     = {Privately Learning {M}arkov Random Fields},
  author    = {Zhang, Huanyu and Kamath, Gautam and Kulkarni, Janardhan and Wu, Steven},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {11129--11140},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/zhang20l/zhang20l.pdf},
  url       = {https://proceedings.mlr.press/v119/zhang20l.html}
}
Endnote
%0 Conference Paper
%T Privately Learning Markov Random Fields
%A Huanyu Zhang
%A Gautam Kamath
%A Janardhan Kulkarni
%A Steven Wu
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-zhang20l
%I PMLR
%P 11129--11140
%U https://proceedings.mlr.press/v119/zhang20l.html
%V 119
APA
Zhang, H., Kamath, G., Kulkarni, J., & Wu, S. (2020). Privately Learning Markov Random Fields. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:11129-11140. Available from https://proceedings.mlr.press/v119/zhang20l.html.
