Improving the Privacy and Accuracy of ADMM-Based Distributed Algorithms

Xueru Zhang, Mohammad Mahdi Khalili, Mingyan Liu
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:5796-5805, 2018.

Abstract

The alternating direction method of multipliers (ADMM) is a popular method for designing distributed versions of machine learning algorithms, whereby local computations are performed on local data and the outputs are exchanged among neighbors in an iterative fashion. This iterative process leaks information about the private data. A differentially private ADMM was proposed in prior work (Zhang & Zhu, 2017), but it bounds only the privacy loss of a single node during one iteration; this makes it difficult to balance the utility attained through distributed computation against the privacy guarantee when accounting for the total privacy loss of all nodes over the entire iterative process. We propose a perturbation method for ADMM in which the perturbation is correlated with the penalty parameters; this is shown to improve utility and privacy simultaneously. The method is based on a modified ADMM in which each node independently determines its own penalty parameter in every iteration and decouples it from the dual updating step size. The condition for convergence of the modified ADMM and a lower bound on the convergence rate are also derived.
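The idea described above can be illustrated with a toy consensus problem. The sketch below is a minimal, hypothetical rendering of the scheme, not the authors' exact algorithm: each node solves a local regularized least-squares subproblem, chooses its own (here, geometrically growing) penalty parameter `rho[i]` that is decoupled from the fixed dual step size `eta`, and adds noise whose magnitude shrinks with the penalty parameter, so later iterations are both more accurate and less privacy-leaking. The network topology is simplified to all-to-all averaging, and the noise scaling `1/rho[i]` is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy decentralized ridge regression: each of 4 nodes holds a private shard
# of data generated from the same ground-truth parameter (all ones).
n_nodes, dim = 4, 3
A = [rng.normal(size=(20, dim)) for _ in range(n_nodes)]
b = [a @ np.ones(dim) + 0.1 * rng.normal(size=20) for a in A]

x = [np.zeros(dim) for _ in range(n_nodes)]    # primal variables
lam = [np.zeros(dim) for _ in range(n_nodes)]  # dual variables
eta = 1.0  # dual step size, decoupled from the penalty parameters below

for t in range(30):
    # Each node independently sets a time-varying penalty parameter.
    rho = [2.0 * 1.1**t for _ in range(n_nodes)]
    x_avg = sum(x) / n_nodes  # stand-in for neighbor averaging on a graph
    for i in range(n_nodes):
        # Noise correlated with the penalty parameter: its scale decays as
        # rho[i] grows (illustrative choice, not the paper's exact scaling).
        noise = rng.normal(scale=1.0 / rho[i], size=dim)
        # Closed-form minimizer of the perturbed local subproblem
        #   0.5||A_i x - b_i||^2 + lam_i^T x + (rho_i/2)||x - x_avg||^2
        H = A[i].T @ A[i] + rho[i] * np.eye(dim)
        g = A[i].T @ b[i] - lam[i] + rho[i] * x_avg + noise
        x[i] = np.linalg.solve(H, g)
    x_avg = sum(x) / n_nodes
    for i in range(n_nodes):
        lam[i] = lam[i] + eta * (x[i] - x_avg)
```

Because the noise magnitude decays while the penalty grows, the per-node iterates drift toward consensus on the (approximate) ground truth despite the perturbation injected at every round.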

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-zhang18f,
  title     = {Improving the Privacy and Accuracy of {ADMM}-Based Distributed Algorithms},
  author    = {Zhang, Xueru and Khalili, Mohammad Mahdi and Liu, Mingyan},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {5796--5805},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/zhang18f/zhang18f.pdf},
  url       = {https://proceedings.mlr.press/v80/zhang18f.html},
  abstract  = {Alternating direction method of multiplier (ADMM) is a popular method used to design distributed versions of a machine learning algorithm, whereby local computations are performed on local data with the output exchanged among neighbors in an iterative fashion. During this iterative process the leakage of data privacy arises. A differentially private ADMM was proposed in prior work (Zhang & Zhu, 2017) where only the privacy loss of a single node during one iteration was bounded, a method that makes it difficult to balance the tradeoff between the utility attained through distributed computation and privacy guarantees when considering the total privacy loss of all nodes over the entire iterative process. We propose a perturbation method for ADMM where the perturbed term is correlated with the penalty parameters; this is shown to improve the utility and privacy simultaneously. The method is based on a modified ADMM where each node independently determines its own penalty parameter in every iteration and decouples it from the dual updating step size. The condition for convergence of the modified ADMM and the lower bound on the convergence rate are also derived.}
}
Endnote
%0 Conference Paper
%T Improving the Privacy and Accuracy of ADMM-Based Distributed Algorithms
%A Xueru Zhang
%A Mohammad Mahdi Khalili
%A Mingyan Liu
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-zhang18f
%I PMLR
%P 5796--5805
%U https://proceedings.mlr.press/v80/zhang18f.html
%V 80
%X Alternating direction method of multiplier (ADMM) is a popular method used to design distributed versions of a machine learning algorithm, whereby local computations are performed on local data with the output exchanged among neighbors in an iterative fashion. During this iterative process the leakage of data privacy arises. A differentially private ADMM was proposed in prior work (Zhang & Zhu, 2017) where only the privacy loss of a single node during one iteration was bounded, a method that makes it difficult to balance the tradeoff between the utility attained through distributed computation and privacy guarantees when considering the total privacy loss of all nodes over the entire iterative process. We propose a perturbation method for ADMM where the perturbed term is correlated with the penalty parameters; this is shown to improve the utility and privacy simultaneously. The method is based on a modified ADMM where each node independently determines its own penalty parameter in every iteration and decouples it from the dual updating step size. The condition for convergence of the modified ADMM and the lower bound on the convergence rate are also derived.
APA
Zhang, X., Khalili, M.M. & Liu, M. (2018). Improving the Privacy and Accuracy of ADMM-Based Distributed Algorithms. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:5796-5805. Available from https://proceedings.mlr.press/v80/zhang18f.html.
