Proceedings of the 30th International Conference on Machine Learning, PMLR 28(2):190-198, 2013.
Abstract
Max-product (max-sum) message passing algorithms are widely used for MAP inference in MRFs. It has many variants sharing a common flavor of passing "messages" over some graph-object. Recent advances revealed that its convergent versions (such as MPLP, MSD, TRW-S) can be viewed as performing block coordinate descent (BCD) in a dual objective. That is, each BCD step achieves dual-optimal w.r.t. a block of dual variables (messages), thereby decreases the dual objective monotonically. However, most existing algorithms are limited to updating blocks selected in rather restricted ways. In this paper, we show a "unified" message passing algorithm that: (a) subsumes MPLP, MSD, and TRW-S as special cases when applied to their respective choices of dual objective and blocks, and (b) is able to perform BCD under much more flexible choices of blocks (including very large blocks) as well as the dual objective itself (that arise from an arbitrary dual decomposition).
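To make the "BCD in a dual objective" idea concrete, here is a minimal sketch of one such convergent scheme: the standard MPLP edge update from the dual-decomposition literature, applied to a made-up 3-node chain MRF with random potentials. This is an illustration of the BCD viewpoint the abstract describes, not the paper's generalized subproblem-tree algorithm; all names and data below are hypothetical. Each edge update exactly minimizes the dual over that edge's block of dual variables, so the dual objective is monotonically non-increasing and always upper-bounds any decoded primal value.

```python
import numpy as np

# Toy pairwise MRF (hypothetical data): a 3-node chain, 2 labels per node.
rng = np.random.default_rng(0)
n_nodes, n_labels = 3, 2
theta = rng.normal(size=(n_nodes, n_labels))                    # unary potentials theta_i
edges = [(0, 1), (1, 2)]
theta_e = {e: rng.normal(size=(n_labels, n_labels)) for e in edges}  # pairwise theta_ij

# delta[(j, i)][x_i] plays the role of the dual variable delta_{ji}(x_i)
# sent into node i from edge {i, j}.
delta = {(u, v): np.zeros(n_labels)
         for (i, j) in edges for (u, v) in ((i, j), (j, i))}

def belief(i, skip=None):
    """theta_i plus all incoming dual variables, optionally skipping neighbor `skip`."""
    b = theta[i].copy()
    for (u, v) in delta:
        if v == i and u != skip:
            b = b + delta[(u, i)]
    return b

def dual_objective():
    # Node terms plus edge terms of the standard MPLP dual.
    val = sum(belief(i).max() for i in range(n_nodes))
    for (i, j) in edges:
        val += (theta_e[(i, j)]
                - delta[(j, i)][:, None] - delta[(i, j)][None, :]).max()
    return val

objs = [dual_objective()]
for _ in range(10):                          # a few sweeps of edge-block updates
    for (i, j) in edges:
        dm_i, dm_j = belief(i, skip=j), belief(j, skip=i)   # delta_i^{-j}, delta_j^{-i}
        t = theta_e[(i, j)]
        # Closed-form block-optimal update for this edge's dual variables:
        delta[(j, i)] = -0.5 * dm_i + 0.5 * (t + dm_j[None, :]).max(axis=1)
        delta[(i, j)] = -0.5 * dm_j + 0.5 * (t + dm_i[:, None]).max(axis=0)
        objs.append(dual_objective())

# Decode a primal labeling from the node beliefs; the dual upper-bounds its value.
x_hat = [int(belief(i).argmax()) for i in range(n_nodes)]
primal = (sum(theta[i, x_hat[i]] for i in range(n_nodes))
          + sum(theta_e[(i, j)][x_hat[i], x_hat[j]] for (i, j) in edges))
monotone = all(b <= a + 1e-9 for a, b in zip(objs, objs[1:]))
```

On a tree like this chain, such updates close the duality gap; the paper's contribution is to allow far more flexible blocks (and dual decompositions) than the single-edge blocks sketched here.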
@InProceedings{pmlr-v28-wang13b,
title = {Subproblem-Tree Calibration: A Unified Approach to Max-Product Message Passing},
author = {Huayan Wang and Daphne Koller},
booktitle = {Proceedings of the 30th International Conference on Machine Learning},
pages = {190--198},
year = {2013},
editor = {Sanjoy Dasgupta and David McAllester},
volume = {28},
number = {2},
series = {Proceedings of Machine Learning Research},
address = {Atlanta, Georgia, USA},
month = {17--19 Jun},
publisher = {PMLR},
pdf = {http://proceedings.mlr.press/v28/wang13b.pdf},
url = {http://proceedings.mlr.press/v28/wang13b.html},
abstract = {Max-product (max-sum) message passing algorithms are widely used for MAP inference in MRFs. They have many variants sharing a common flavor of passing "messages" over some graph object. Recent advances revealed that their convergent versions (such as MPLP, MSD, and TRW-S) can be viewed as performing block coordinate descent (BCD) in a dual objective: each BCD step achieves dual optimality with respect to a block of dual variables (messages), thereby decreasing the dual objective monotonically. However, most existing algorithms are limited to updating blocks selected in rather restricted ways. In this paper, we present a "unified" message passing algorithm that: (a) subsumes MPLP, MSD, and TRW-S as special cases when applied to their respective choices of dual objective and blocks, and (b) is able to perform BCD under much more flexible choices of blocks (including very large blocks), as well as of the dual objective itself (which can arise from an arbitrary dual decomposition).}
}
%0 Conference Paper
%T Subproblem-Tree Calibration: A Unified Approach to Max-Product Message Passing
%A Huayan Wang
%A Daphne Koller
%B Proceedings of the 30th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Sanjoy Dasgupta
%E David McAllester
%F pmlr-v28-wang13b
%I PMLR
%J Proceedings of Machine Learning Research
%P 190--198
%U http://proceedings.mlr.press
%V 28
%N 2
%W PMLR
%X Max-product (max-sum) message passing algorithms are widely used for MAP inference in MRFs. They have many variants sharing a common flavor of passing "messages" over some graph object. Recent advances revealed that their convergent versions (such as MPLP, MSD, and TRW-S) can be viewed as performing block coordinate descent (BCD) in a dual objective: each BCD step achieves dual optimality with respect to a block of dual variables (messages), thereby decreasing the dual objective monotonically. However, most existing algorithms are limited to updating blocks selected in rather restricted ways. In this paper, we present a "unified" message passing algorithm that: (a) subsumes MPLP, MSD, and TRW-S as special cases when applied to their respective choices of dual objective and blocks, and (b) is able to perform BCD under much more flexible choices of blocks (including very large blocks), as well as of the dual objective itself (which can arise from an arbitrary dual decomposition).
TY - CPAPER
TI - Subproblem-Tree Calibration: A Unified Approach to Max-Product Message Passing
AU - Huayan Wang
AU - Daphne Koller
BT - Proceedings of the 30th International Conference on Machine Learning
PY - 2013/02/13
DA - 2013/02/13
ED - Sanjoy Dasgupta
ED - David McAllester
ID - pmlr-v28-wang13b
PB - PMLR
SP - 190
DP - PMLR
EP - 198
L1 - http://proceedings.mlr.press/v28/wang13b.pdf
UR - http://proceedings.mlr.press/v28/wang13b.html
AB - Max-product (max-sum) message passing algorithms are widely used for MAP inference in MRFs. They have many variants sharing a common flavor of passing "messages" over some graph object. Recent advances revealed that their convergent versions (such as MPLP, MSD, and TRW-S) can be viewed as performing block coordinate descent (BCD) in a dual objective: each BCD step achieves dual optimality with respect to a block of dual variables (messages), thereby decreasing the dual objective monotonically. However, most existing algorithms are limited to updating blocks selected in rather restricted ways. In this paper, we present a "unified" message passing algorithm that: (a) subsumes MPLP, MSD, and TRW-S as special cases when applied to their respective choices of dual objective and blocks, and (b) is able to perform BCD under much more flexible choices of blocks (including very large blocks), as well as of the dual objective itself (which can arise from an arbitrary dual decomposition).
ER -
Wang, H. & Koller, D. (2013). Subproblem-Tree Calibration: A Unified Approach to Max-Product Message Passing. Proceedings of the 30th International Conference on Machine Learning, in PMLR 28(2):190-198.