QLSD: Quantised Langevin Stochastic Dynamics for Bayesian Federated Learning

Maxime Vono, Vincent Plassier, Alain Durmus, Aymeric Dieuleveut, Eric Moulines
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:6459-6500, 2022.

Abstract

The objective of Federated Learning (FL) is to perform statistical inference for data which are decentralised and stored locally on networked clients. FL raises many constraints, including privacy and data ownership, communication overhead, statistical heterogeneity, and partial client participation. In this paper, we address these problems in the framework of the Bayesian paradigm. To this end, we propose a novel federated Markov Chain Monte Carlo algorithm, referred to as Quantised Langevin Stochastic Dynamics (QLSD), which may be seen as an extension of Stochastic Gradient Langevin Dynamics to the FL setting and which handles the communication bottleneck using gradient compression. To improve performance, we then introduce variance reduction techniques, which lead to two improved versions coined QLSD$^\star$ and QLSD$^{++}$. We give both non-asymptotic and asymptotic convergence guarantees for the proposed algorithms. We illustrate their performance on various Bayesian Federated Learning benchmarks.
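To make the mechanism concrete, below is a minimal sketch (Python/NumPy, not the authors' code) of one round of a QLSD-style update: each client compresses a stochastic gradient of its local potential before transmission, and the server performs an SGLD-type step on the aggregate. The QSGD-style quantiser and the names `quantise`, `qlsd_round`, `client_grads` and `gamma` are illustrative assumptions; the paper allows a generic class of unbiased compression operators.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantise(v, s=16):
    """QSGD-style unbiased stochastic quantiser with s levels (an
    illustrative choice). E[quantise(v)] = v; only the sign, norm and
    integer levels need to be transmitted, which is the source of the
    bandwidth saving."""
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return v
    levels = np.abs(v) / norm * s            # each coordinate mapped into [0, s]
    lower = np.floor(levels)
    rounded = lower + (rng.random(v.shape) < levels - lower)  # stochastic rounding
    return np.sign(v) * rounded * norm / s

def qlsd_round(theta, client_grads, gamma):
    """One server round of a QLSD-style update: clients send compressed
    stochastic gradients of their local potentials; the server sums them
    and takes a Langevin (SGLD-type) step with step size gamma."""
    g = sum(quantise(grad(theta)) for grad in client_grads)
    return theta - gamma * g + np.sqrt(2.0 * gamma) * rng.standard_normal(theta.shape)
```

Here `client_grads` is a list of per-client functions returning (possibly minibatch) stochastic gradients of the local potentials; iterating `qlsd_round` yields iterates approximately distributed according to the posterior. The variance-reduced variants QLSD$^\star$ and QLSD$^{++}$ additionally rely on control variates and compression memory terms, which are omitted from this sketch.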

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-vono22a,
  title     = {{QLSD}: Quantised Langevin Stochastic Dynamics for Bayesian Federated Learning},
  author    = {Vono, Maxime and Plassier, Vincent and Durmus, Alain and Dieuleveut, Aymeric and Moulines, Eric},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {6459--6500},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/vono22a/vono22a.pdf},
  url       = {https://proceedings.mlr.press/v151/vono22a.html},
  abstract  = {The objective of Federated Learning (FL) is to perform statistical inference for data which are decentralised and stored locally on networked clients. FL raises many constraints, including privacy and data ownership, communication overhead, statistical heterogeneity, and partial client participation. In this paper, we address these problems in the framework of the Bayesian paradigm. To this end, we propose a novel federated Markov Chain Monte Carlo algorithm, referred to as Quantised Langevin Stochastic Dynamics (QLSD), which may be seen as an extension of Stochastic Gradient Langevin Dynamics to the FL setting and which handles the communication bottleneck using gradient compression. To improve performance, we then introduce variance reduction techniques, which lead to two improved versions coined QLSD$^\star$ and QLSD$^{++}$. We give both non-asymptotic and asymptotic convergence guarantees for the proposed algorithms. We illustrate their performance on various Bayesian Federated Learning benchmarks.}
}
Endnote
%0 Conference Paper
%T QLSD: Quantised Langevin Stochastic Dynamics for Bayesian Federated Learning
%A Maxime Vono
%A Vincent Plassier
%A Alain Durmus
%A Aymeric Dieuleveut
%A Eric Moulines
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-vono22a
%I PMLR
%P 6459--6500
%U https://proceedings.mlr.press/v151/vono22a.html
%V 151
%X The objective of Federated Learning (FL) is to perform statistical inference for data which are decentralised and stored locally on networked clients. FL raises many constraints, including privacy and data ownership, communication overhead, statistical heterogeneity, and partial client participation. In this paper, we address these problems in the framework of the Bayesian paradigm. To this end, we propose a novel federated Markov Chain Monte Carlo algorithm, referred to as Quantised Langevin Stochastic Dynamics (QLSD), which may be seen as an extension of Stochastic Gradient Langevin Dynamics to the FL setting and which handles the communication bottleneck using gradient compression. To improve performance, we then introduce variance reduction techniques, which lead to two improved versions coined QLSD$^\star$ and QLSD$^{++}$. We give both non-asymptotic and asymptotic convergence guarantees for the proposed algorithms. We illustrate their performance on various Bayesian Federated Learning benchmarks.
APA
Vono, M., Plassier, V., Durmus, A., Dieuleveut, A. & Moulines, E. (2022). QLSD: Quantised Langevin Stochastic Dynamics for Bayesian Federated Learning. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:6459-6500. Available from https://proceedings.mlr.press/v151/vono22a.html.