Message passing with l1 penalized KL minimization
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(3):262-270, 2013.
Abstract
Bayesian inference is often hampered by its large computational expense. As a generalization of belief propagation (BP), expectation propagation (EP) approximates exact Bayesian computation with efficient message passing updates. However, when the approximating family used by EP is far from the exact posterior distribution, message passing may yield poor approximation quality and even diverge. To address this issue, we propose an approximate inference method, relaxed expectation propagation (REP), based on a new divergence with an l1 penalty. Minimizing this penalized divergence adaptively relaxes EP's moment matching requirement for message passing. We apply REP to Gaussian process classification, and experimental results demonstrate significant improvements of REP over EP and alpha-divergence based power EP in terms of algorithmic stability, estimation accuracy, and predictive performance. Furthermore, we develop relaxed belief propagation (RBP), a special case of REP, to conduct inference on discrete Markov random fields (MRFs). Our results show improved estimation accuracy of RBP over BP and fractional BP when interactions between MRF nodes are strong.
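The abstract does not spell out the update rule, but an l1-penalized KL objective naturally suggests a soft-thresholded variant of EP's moment matching: moment discrepancies below a threshold are ignored, damping the messages that destabilize plain EP. The following is a minimal sketch of that idea on a single 1-D mean parameter; the function names, the penalty weight `lam`, and the Gaussian toy setting are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def soft_threshold(x, lam):
    """Soft-thresholding operator, the proximal map of the l1 penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def relaxed_moment_update(m_old, m_tilted, lam):
    """Hypothetical relaxed analogue of EP's moment matching: move the
    approximate moment toward the tilted moment, but soft-threshold the
    discrepancy, so small mismatches (below lam) trigger no update."""
    delta = m_tilted - m_old
    return m_old + soft_threshold(delta, lam)

# EP would match the tilted mean exactly (0.3); the relaxed update
# shrinks the step, and skips it entirely when the mismatch is small.
print(relaxed_moment_update(0.0, 0.3, lam=0.1))   # 0.2  (shrunken step)
print(relaxed_moment_update(0.0, 0.05, lam=0.1))  # 0.0  (no update)
```

With lam = 0 this reduces to standard EP moment matching, which is consistent with REP being described as an adaptive relaxation of EP rather than a different inference family.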