Residual Splash for Optimally Parallelizing Belief Propagation

Joseph Gonzalez, Yucheng Low, Carlos Guestrin
Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, PMLR 5:177-184, 2009.

Abstract

As computer architectures move towards parallelism, we must build a new theoretical understanding of parallelism in machine learning. In this paper we focus on parallelizing message passing inference algorithms in graphical models. We develop a theoretical understanding of the limitations of parallelism in belief propagation and bound the optimal achievable parallel performance on a certain class of graphical models. We demonstrate that the fully synchronous parallelization of belief propagation is highly inefficient. We provide a new parallel belief propagation algorithm which achieves optimal performance on a certain class of graphical models. Using two challenging real-world problems, we empirically evaluate the performance of our algorithm. On these problems, we find that our new algorithm achieves near-linear performance improvements and outperforms alternative parallel belief propagation algorithms.
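
The abstract contrasts wasteful fully synchronous updates with residual-driven scheduling. To make that scheduling idea concrete, below is a minimal sequential sketch of residual-prioritized message passing on a pairwise MRF with one shared pairwise potential. This is an illustrative sketch under stated assumptions, not the authors' implementation: the paper's Splash operation goes further by growing a bounded-size spanning tree rooted at the highest-residual vertex, running a forward-backward pass over that tree, and executing many such splashes in parallel. All names here (`residual_bp`, `compute_message`, the data layout) are hypothetical.

```python
import heapq
import numpy as np

def compute_message(u, v, graph, unaries, pairwise, messages):
    """Sum-product message u -> v: product of u's unary and all incoming
    messages except the one from v, marginalized through the pairwise."""
    prod = unaries[u].copy()
    for w in graph[u]:
        if w != v:
            prod = prod * messages[(w, u)]
    out = pairwise.T @ prod          # m(x_v) = sum_{x_u} P[x_u, x_v] * prod[x_u]
    return out / out.sum()

def residual_bp(graph, unaries, pairwise, tol=1e-6, max_updates=100000):
    k = len(next(iter(unaries.values())))
    # One message per directed edge, initialized uniform.
    messages = {(u, v): np.full(k, 1.0 / k) for u in graph for v in graph[u]}
    pq = []  # max-heap (via negation) of message residuals; entries may go stale

    def push(u, v):
        new = compute_message(u, v, graph, unaries, pairwise, messages)
        r = np.abs(new - messages[(u, v)]).max()
        heapq.heappush(pq, (-r, (u, v)))

    for (u, v) in list(messages):
        push(u, v)

    updates = 0
    while pq and updates < max_updates:
        _, (u, v) = heapq.heappop(pq)
        new = compute_message(u, v, graph, unaries, pairwise, messages)
        r = np.abs(new - messages[(u, v)]).max()
        if r < tol:                  # stale entry or already converged edge
            continue
        messages[(u, v)] = new       # send the highest-residual message first
        updates += 1
        for w in graph[v]:           # a changed message re-prioritizes v's sends
            if w != u:
                push(v, w)

    # Beliefs: unary times all incoming messages, normalized.
    beliefs = {}
    for u in graph:
        b = unaries[u].copy()
        for w in graph[u]:
            b = b * messages[(w, u)]
        beliefs[u] = b / b.sum()
    return beliefs

# Example: a 4-node chain with a smoothing pairwise potential. On a
# tree-structured model like this, the fixed point gives exact marginals.
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
unaries = {0: np.array([0.9, 0.1]), 1: np.array([0.5, 0.5]),
           2: np.array([0.5, 0.5]), 3: np.array([0.2, 0.8])}
pairwise = np.array([[0.8, 0.2], [0.2, 0.8]])
print(residual_bp(graph, unaries, pairwise))
```

The lazy priority queue is the key design choice in this sketch: stale entries are simply skipped on pop, so re-prioritizing a message is a cheap push rather than a heap update, which is what makes residual-style scheduling practical at scale.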

Cite this Paper


BibTeX
@InProceedings{pmlr-v5-gonzalez09a,
  title     = {Residual Splash for Optimally Parallelizing Belief Propagation},
  author    = {Gonzalez, Joseph and Low, Yucheng and Guestrin, Carlos},
  booktitle = {Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics},
  pages     = {177--184},
  year      = {2009},
  editor    = {van Dyk, David and Welling, Max},
  volume    = {5},
  series    = {Proceedings of Machine Learning Research},
  address   = {Hilton Clearwater Beach Resort, Clearwater Beach, Florida, USA},
  month     = {16--18 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v5/gonzalez09a/gonzalez09a.pdf},
  url       = {https://proceedings.mlr.press/v5/gonzalez09a.html}
}
APA
Gonzalez, J., Low, Y. & Guestrin, C. (2009). Residual Splash for Optimally Parallelizing Belief Propagation. Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 5:177-184. Available from https://proceedings.mlr.press/v5/gonzalez09a.html.
