Subproblem-Tree Calibration: A Unified Approach to Max-Product Message Passing

Huayan Wang, Daphne Koller
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(2):190-198, 2013.

Abstract

Max-product (max-sum) message passing algorithms are widely used for MAP inference in MRFs. They have many variants sharing a common flavor of passing "messages" over some graph object. Recent advances revealed that their convergent versions (such as MPLP, MSD, and TRW-S) can be viewed as performing block coordinate descent (BCD) in a dual objective: each BCD step achieves dual optimality w.r.t. a block of dual variables (messages), thereby decreasing the dual objective monotonically. However, most existing algorithms are limited to updating blocks selected in rather restricted ways. In this paper, we present a "unified" message passing algorithm that: (a) subsumes MPLP, MSD, and TRW-S as special cases when applied to their respective choices of dual objective and blocks, and (b) is able to perform BCD under much more flexible choices of blocks (including very large blocks), as well as of the dual objective itself (which may arise from an arbitrary dual decomposition).
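The abstract's central object, max-product (max-sum) message passing for MAP inference in an MRF, can be illustrated on a toy example. The sketch below is an illustrative aid only, not the paper's subproblem-tree calibration algorithm: it runs a forward max-sum pass on a small chain MRF with made-up potentials, backtracks the MAP assignment, and checks the result against brute-force enumeration.

```python
import itertools
import numpy as np

# Toy chain MRF x1 - x2 - x3, each variable binary.
# unary[i][k]: log-potential for x_i = k; pair[i][k, l]: log-potential for (x_i = k, x_{i+1} = l).
unary = [np.array([0.0, 1.0]),
         np.array([0.5, 0.0]),
         np.array([0.0, 0.2])]
pair = [np.array([[1.0, 0.0], [0.0, 1.0]]),   # favors agreement of (x1, x2)
        np.array([[0.0, 1.0], [1.0, 0.0]])]   # favors disagreement of (x2, x3)

def map_by_max_sum(unary, pair):
    """Forward max-sum (max-product in log domain) pass, then backtrack the argmaxes."""
    n = len(unary)
    msg = unary[0]          # msg[k]: best score of the prefix ending with the current variable = k
    back = []
    for i in range(1, n):
        # scores[k, l] = best prefix score with x_{i-1} = k, plus edge and unary terms for x_i = l
        scores = msg[:, None] + pair[i - 1] + unary[i][None, :]
        back.append(scores.argmax(axis=0))
        msg = scores.max(axis=0)
    x = [int(msg.argmax())]
    for b in reversed(back):
        x.append(int(b[x[-1]]))
    return list(reversed(x)), float(msg.max())

def score(x, unary, pair):
    """Total log-potential of a full assignment x."""
    s = sum(unary[i][x[i]] for i in range(len(x)))
    s += sum(pair[i][x[i], x[i + 1]] for i in range(len(pair)))
    return float(s)

assignment, value = map_by_max_sum(unary, pair)
# On a tree-structured (here, chain) MRF, max-sum is exact, so it must match brute force.
best = max(itertools.product([0, 1], repeat=3),
           key=lambda x: score(list(x), unary, pair))
```

On loopy graphs this exactness is lost, which is what motivates the convergent dual-descent variants (MPLP, MSD, TRW-S) that the paper unifies.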

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-wang13b,
  title = {Subproblem-Tree Calibration: A Unified Approach to Max-Product Message Passing},
  author = {Wang, Huayan and Koller, Daphne},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages = {190--198},
  year = {2013},
  editor = {Dasgupta, Sanjoy and McAllester, David},
  volume = {28},
  number = {2},
  series = {Proceedings of Machine Learning Research},
  address = {Atlanta, Georgia, USA},
  month = {17--19 Jun},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v28/wang13b.pdf},
  url = {https://proceedings.mlr.press/v28/wang13b.html},
  abstract = {Max-product (max-sum) message passing algorithms are widely used for MAP inference in MRFs. They have many variants sharing a common flavor of passing "messages" over some graph object. Recent advances revealed that their convergent versions (such as MPLP, MSD, and TRW-S) can be viewed as performing block coordinate descent (BCD) in a dual objective: each BCD step achieves dual optimality w.r.t. a block of dual variables (messages), thereby decreasing the dual objective monotonically. However, most existing algorithms are limited to updating blocks selected in rather restricted ways. In this paper, we present a "unified" message passing algorithm that: (a) subsumes MPLP, MSD, and TRW-S as special cases when applied to their respective choices of dual objective and blocks, and (b) is able to perform BCD under much more flexible choices of blocks (including very large blocks), as well as of the dual objective itself (which may arise from an arbitrary dual decomposition).}
}
Endnote
%0 Conference Paper
%T Subproblem-Tree Calibration: A Unified Approach to Max-Product Message Passing
%A Huayan Wang
%A Daphne Koller
%B Proceedings of the 30th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Sanjoy Dasgupta
%E David McAllester
%F pmlr-v28-wang13b
%I PMLR
%P 190--198
%U https://proceedings.mlr.press/v28/wang13b.html
%V 28
%N 2
%X Max-product (max-sum) message passing algorithms are widely used for MAP inference in MRFs. They have many variants sharing a common flavor of passing "messages" over some graph object. Recent advances revealed that their convergent versions (such as MPLP, MSD, and TRW-S) can be viewed as performing block coordinate descent (BCD) in a dual objective: each BCD step achieves dual optimality w.r.t. a block of dual variables (messages), thereby decreasing the dual objective monotonically. However, most existing algorithms are limited to updating blocks selected in rather restricted ways. In this paper, we present a "unified" message passing algorithm that: (a) subsumes MPLP, MSD, and TRW-S as special cases when applied to their respective choices of dual objective and blocks, and (b) is able to perform BCD under much more flexible choices of blocks (including very large blocks), as well as of the dual objective itself (which may arise from an arbitrary dual decomposition).
RIS
TY  - CPAPER
TI  - Subproblem-Tree Calibration: A Unified Approach to Max-Product Message Passing
AU  - Wang, Huayan
AU  - Koller, Daphne
BT  - Proceedings of the 30th International Conference on Machine Learning
DA  - 2013/05/13
ED  - Sanjoy Dasgupta
ED  - David McAllester
ID  - pmlr-v28-wang13b
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 28
IS  - 2
SP  - 190
EP  - 198
L1  - http://proceedings.mlr.press/v28/wang13b.pdf
UR  - https://proceedings.mlr.press/v28/wang13b.html
AB  - Max-product (max-sum) message passing algorithms are widely used for MAP inference in MRFs. They have many variants sharing a common flavor of passing "messages" over some graph object. Recent advances revealed that their convergent versions (such as MPLP, MSD, and TRW-S) can be viewed as performing block coordinate descent (BCD) in a dual objective: each BCD step achieves dual optimality w.r.t. a block of dual variables (messages), thereby decreasing the dual objective monotonically. However, most existing algorithms are limited to updating blocks selected in rather restricted ways. In this paper, we present a "unified" message passing algorithm that: (a) subsumes MPLP, MSD, and TRW-S as special cases when applied to their respective choices of dual objective and blocks, and (b) is able to perform BCD under much more flexible choices of blocks (including very large blocks), as well as of the dual objective itself (which may arise from an arbitrary dual decomposition).
ER  -
APA
Wang, H. & Koller, D. (2013). Subproblem-Tree Calibration: A Unified Approach to Max-Product Message Passing. Proceedings of the 30th International Conference on Machine Learning, in Proceedings of Machine Learning Research 28(2):190-198. Available from https://proceedings.mlr.press/v28/wang13b.html.

Related Material