Message passing with l1 penalized KL minimization

Yuan Qi, Yandong Guo
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(3):262-270, 2013.

Abstract

Bayesian inference is often hampered by large computational expense. As a generalization of belief propagation (BP), expectation propagation (EP) approximates exact Bayesian computation with efficient message passing updates. However, when the approximation family used by EP is far from the exact posterior distribution, message passing may yield poor approximation quality and suffer from divergence. To address this issue, we propose an approximate inference method, relaxed expectation propagation (REP), based on a new divergence with an l1 penalty. Minimizing this penalized divergence adaptively relaxes EP’s moment matching requirement for message passing. We apply REP to Gaussian process classification, and experimental results demonstrate significant improvement of REP over EP and alpha-divergence based power EP in terms of algorithmic stability, estimation accuracy, and predictive performance. Furthermore, we develop relaxed belief propagation (RBP), a special case of REP, to conduct inference on discrete Markov random fields (MRFs). Our results show improved estimation accuracy of RBP over BP and fractional BP when interactions between MRF nodes are strong.
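
To make the core idea concrete, the following is an illustrative sketch of an l1 penalized KL objective, reconstructed from the abstract alone; it assumes an exponential-family approximation q_eta with natural parameters \eta and sufficient statistics \phi(x), and the paper's exact penalty and notation may differ. A standard EP update minimizes \mathrm{KL}(\hat p \,\|\, q_\eta), whose stationarity condition is exact moment matching, \mathbb{E}_{q_\eta}[\phi(x)] = \mathbb{E}_{\hat p}[\phi(x)]. Adding an l1 penalty on the change in natural parameters gives

\min_{\eta}\; \mathrm{KL}\big(\hat p \,\big\|\, q_\eta\big) + \lambda \,\big\|\eta - \eta^{\mathrm{old}}\big\|_1 ,

with subgradient optimality condition

\mathbb{E}_{q_\eta}[\phi(x)] - \mathbb{E}_{\hat p}[\phi(x)] \in -\lambda\, \partial \big\|\eta - \eta^{\mathrm{old}}\big\|_1 .

Under this hypothetical form, whenever every moment discrepancy at \eta^{\mathrm{old}} is at most \lambda in magnitude, the update leaves \eta unchanged: moment matching is enforced only up to a soft threshold, which is one way such a relaxation could damp the oscillating or divergent updates the abstract attributes to plain EP.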

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-qi13,
  title     = {Message passing with l1 penalized KL minimization},
  author    = {Qi, Yuan and Guo, Yandong},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages     = {262--270},
  year      = {2013},
  editor    = {Dasgupta, Sanjoy and McAllester, David},
  volume    = {28},
  number    = {3},
  series    = {Proceedings of Machine Learning Research},
  address   = {Atlanta, Georgia, USA},
  month     = {17--19 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v28/qi13.pdf},
  url       = {https://proceedings.mlr.press/v28/qi13.html}
}
APA
Qi, Y. & Guo, Y. (2013). Message passing with l1 penalized KL minimization. Proceedings of the 30th International Conference on Machine Learning, in Proceedings of Machine Learning Research 28(3):262-270. Available from https://proceedings.mlr.press/v28/qi13.html.
