Greedy Direction Method of Multiplier for MAP Inference of Large Output Domain
Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR 54:1550-1559, 2017.
Abstract
Maximum-a-Posteriori (MAP) inference lies at the heart of Graphical Models and Structured Prediction. Despite the intractability of exact MAP inference, approximate methods based on LP relaxations have exhibited superior performance across a wide range of applications. Yet for problems involving large output domains (i.e., the state space for each variable is large), standard LP relaxations can easily give rise to a number of variables and constraints beyond the limit of existing optimization algorithms. In this paper, we introduce an effective MAP inference method for problems with large output domains. The method builds upon alternating minimization of an Augmented Lagrangian that exploits the sparsity of messages through greedy optimization techniques. A key feature of our greedy approach is to introduce variables in an on-demand manner with a pre-built data structure over local factors. This results in a single-loop algorithm of sublinear cost per iteration and O(log(1/ε))-type iteration complexity to achieve ε suboptimality. In addition, we introduce a variant of GDMM for binary MAP inference problems with a large number of factors. Empirically, the proposed algorithms demonstrate orders-of-magnitude speedup over state-of-the-art MAP inference techniques on MAP inference problems including Segmentation, Alignment, Protein Folding, Graph Matching, and Pairwise-Interacted Multilabel Prediction.
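The scalability issue motivating the paper can be made concrete with a small back-of-the-envelope calculation. The sketch below (an illustration, not code from the paper) counts the variables in the standard local-polytope LP relaxation of a pairwise MRF: each node contributes k unary marginals and each edge contributes k×k joint marginals, so the problem size grows quadratically with the output-domain size k.

```python
# Toy illustration (not from the paper): how the standard pairwise LP
# relaxation for MAP inference blows up with the output-domain size k.

def lp_relaxation_size(n_nodes, n_edges, k):
    """Count the variables in the local-polytope LP relaxation.

    Each of the n_nodes variables contributes k unary marginals; each
    of the n_edges pairwise factors contributes k*k joint marginals.
    """
    unary = n_nodes * k
    pairwise = n_edges * k * k
    return unary + pairwise

# A modest sequence-labeling chain: 100 positions, 99 edges.
for k in (10, 1_000, 100_000):
    print(f"k={k:>6}: {lp_relaxation_size(100, 99, k):,} LP variables")
```

For k = 100,000 (typical of the large-output-domain settings the paper targets), the relaxation has nearly 10^12 variables, which is why methods whose per-iteration cost is sublinear in this count, such as the greedy on-demand variable introduction described above, become necessary.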