Greedy Direction Method of Multiplier for MAP Inference of Large Output Domain

Xiangru Huang, Ian En-Hsu Yen, Ruohan Zhang, Qixing Huang, Pradeep Ravikumar, Inderjit Dhillon
Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR 54:1550-1559, 2017.

Abstract

Maximum-a-Posteriori (MAP) inference lies at the heart of Graphical Models and Structured Prediction. Despite the intractability of exact MAP inference, approximate methods based on LP relaxations have exhibited superior performance across a wide range of applications. Yet for problems involving large output domains (i.e., the state space for each variable is large), standard LP relaxations can easily give rise to a large number of variables and constraints which are beyond the limit of existing optimization algorithms. In this paper, we introduce an effective MAP inference method for problems with large output domains. The method builds upon alternating minimization of an Augmented Lagrangian that exploits the sparsity of messages through greedy optimization techniques. A key feature of our greedy approach is to introduce variables in an on-demand manner with a pre-built data structure over local factors. This results in a single-loop algorithm of sublinear cost per iteration and O(log(1/epsilon))-type iteration complexity to achieve epsilon sub-optimality. In addition, we introduce a variant of GDMM for binary MAP inference problems with a large number of factors. Empirically, the proposed algorithms demonstrate orders of magnitude speedup over state-of-the-art MAP inference techniques on MAP inference problems including Segmentation, Alignment, Protein Folding, Graph Matching, and Pairwise-Interacted Multilabel Prediction.
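The abstract refers to alternating minimization of an Augmented Lagrangian. The GDMM algorithm itself is not reproduced here; as orientation only, the following toy sketch illustrates the generic ADMM-style pattern such methods build on: alternate closed-form updates of two primal blocks, then a dual (multiplier) update on the coupling constraint. The function name, the splitting, and the toy problem (projection onto the nonnegative orthant) are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: generic ADMM-style alternating minimization of an
# augmented Lagrangian on a toy problem (NOT the paper's GDMM method).
# Problem: project a vector a onto the nonnegative orthant by splitting
#   minimize 0.5*||x - a||^2 + I(z >= 0)   subject to  x = z,
# where I(.) is the indicator function. u is the scaled dual variable.

def admm_nonneg_projection(a, rho=1.0, iters=200):
    n = len(a)
    x = [0.0] * n
    z = [0.0] * n
    u = [0.0] * n
    for _ in range(iters):
        # x-update: argmin 0.5*||x - a||^2 + (rho/2)*||x - z + u||^2
        x = [(a[i] + rho * (z[i] - u[i])) / (1.0 + rho) for i in range(n)]
        # z-update: Euclidean projection of (x + u) onto z >= 0
        z = [max(0.0, x[i] + u[i]) for i in range(n)]
        # dual ascent on the consensus constraint x = z
        u = [u[i] + x[i] - z[i] for i in range(n)]
    return z

print(admm_nonneg_projection([1.5, -2.0, 0.3]))
```

The iterate converges to the elementwise projection max(a, 0); the paper's contribution, per the abstract, is making the analogous inner minimizations cheap when the output domain is large, by greedily introducing only the variables (messages) that matter.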

Cite this Paper


BibTeX
@InProceedings{pmlr-v54-huang17a,
  title     = {{Greedy Direction Method of Multiplier for MAP Inference of Large Output Domain}},
  author    = {Xiangru Huang and Ian En-Hsu Yen and Ruohan Zhang and Qixing Huang and Pradeep Ravikumar and Inderjit Dhillon},
  booktitle = {Proceedings of the 20th International Conference on Artificial Intelligence and Statistics},
  pages     = {1550--1559},
  year      = {2017},
  editor    = {Aarti Singh and Jerry Zhu},
  volume    = {54},
  series    = {Proceedings of Machine Learning Research},
  address   = {Fort Lauderdale, FL, USA},
  month     = {20--22 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v54/huang17a/huang17a.pdf},
  url       = {http://proceedings.mlr.press/v54/huang17a.html},
  abstract  = {Maximum-a-Posteriori (MAP) inference lies at the heart of Graphical Models and Structured Prediction. Despite the intractability of exact MAP inference, approximated methods based on LP relaxations have exhibited superior performance across a wide range of applications. Yet for problems involving large output domains (i.e., the state space for each variable is large), standard LP relaxations can easily give rise to a large number of variables and constraints which are beyond the limit of existing optimization algorithms. In this paper, we introduce an effective MAP inference method for problems with large output domains. The method builds upon alternating minimization of an Augmented Lagrangian that exploits the sparsity of messages through greedy optimization techniques. A key feature of our greedy approach is to introduce variables in an on-demand manner with a pre-built data structure over local factors. This results in a single-loop algorithm of sublinear cost per iteration and O(log(1/epsilon))-type iteration complexity to achieve epsilon sub-optimality. In addition, we introduce a variant of GDMM for binary MAP inference problems with a large number of factors. Empirically, the proposed algorithms demonstrate orders of magnitude speedup over state-of-the-art MAP inference techniques on MAP inference problems including Segmentation, Alignment, Protein Folding, Graph Matching, and Pairwise-Interacted Multilabel Prediction.}
}
Endnote
%0 Conference Paper
%T Greedy Direction Method of Multiplier for MAP Inference of Large Output Domain
%A Xiangru Huang
%A Ian En-Hsu Yen
%A Ruohan Zhang
%A Qixing Huang
%A Pradeep Ravikumar
%A Inderjit Dhillon
%B Proceedings of the 20th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2017
%E Aarti Singh
%E Jerry Zhu
%F pmlr-v54-huang17a
%I PMLR
%J Proceedings of Machine Learning Research
%P 1550--1559
%U http://proceedings.mlr.press
%V 54
%W PMLR
%X Maximum-a-Posteriori (MAP) inference lies at the heart of Graphical Models and Structured Prediction. Despite the intractability of exact MAP inference, approximated methods based on LP relaxations have exhibited superior performance across a wide range of applications. Yet for problems involving large output domains (i.e., the state space for each variable is large), standard LP relaxations can easily give rise to a large number of variables and constraints which are beyond the limit of existing optimization algorithms. In this paper, we introduce an effective MAP inference method for problems with large output domains. The method builds upon alternating minimization of an Augmented Lagrangian that exploits the sparsity of messages through greedy optimization techniques. A key feature of our greedy approach is to introduce variables in an on-demand manner with a pre-built data structure over local factors. This results in a single-loop algorithm of sublinear cost per iteration and O(log(1/epsilon))-type iteration complexity to achieve epsilon sub-optimality. In addition, we introduce a variant of GDMM for binary MAP inference problems with a large number of factors. Empirically, the proposed algorithms demonstrate orders of magnitude speedup over state-of-the-art MAP inference techniques on MAP inference problems including Segmentation, Alignment, Protein Folding, Graph Matching, and Pairwise-Interacted Multilabel Prediction.
APA
Huang, X., Yen, I.E., Zhang, R., Huang, Q., Ravikumar, P. & Dhillon, I. (2017). Greedy Direction Method of Multiplier for MAP Inference of Large Output Domain. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, in PMLR 54:1550-1559.
