Optimality of Approximate Inference Algorithms on Stable Instances

Hunter Lang, David Sontag, Aravindan Vijayaraghavan
Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, PMLR 84:1157-1166, 2018.

Abstract

Approximate algorithms for structured prediction problems—such as LP relaxations and the popular α-expansion algorithm (Boykov et al. 2001)—typically far exceed their theoretical performance guarantees on real-world instances. These algorithms often find solutions that are very close to optimal. The goal of this paper is to partially explain the performance of α-expansion and an LP relaxation algorithm on MAP inference in Ferromagnetic Potts models (FPMs). Our main results give stability conditions under which these two algorithms provably recover the optimal MAP solution. These theoretical results complement numerous empirical observations of good performance.
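To make the setting concrete, below is a minimal sketch (not the paper's code) of MAP inference in a tiny ferromagnetic Potts model, with the energy E(f) = Σ_p θ_p(f_p) + Σ_(p,q) w_pq·1[f_p ≠ f_q]. The unary costs and edge weights are made-up illustrative values. For clarity, each α-expansion move is solved here by brute force over the binary move space; real implementations solve each move exactly with a min s-t cut (Boykov et al. 2001). On small instances one can compare the two outputs directly; the paper's contribution is stability conditions under which α-expansion is guaranteed to return the exact MAP labeling.

```python
# Minimal sketch: exact MAP vs. alpha-expansion on a tiny
# ferromagnetic Potts model. All numbers are illustrative.
from itertools import product

n_labels = 3
# Unary costs theta[p][l]: 4 nodes, 3 labels.
unary = [
    [0.0, 1.0, 2.0],
    [2.0, 0.0, 1.0],
    [1.0, 2.0, 0.0],
    [0.5, 0.5, 2.0],
]
# Ferromagnetic (attractive) Potts edges on a 4-cycle: weight paid
# whenever the two endpoints take different labels.
edges = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (3, 0): 1.0}

def energy(f):
    e = sum(unary[p][f[p]] for p in range(len(f)))
    e += sum(w for (p, q), w in edges.items() if f[p] != f[q])
    return e

def exact_map():
    # Brute force over all label vectors; only feasible on toy instances.
    return min(product(range(n_labels), repeat=len(unary)), key=energy)

def alpha_expansion(f):
    f = tuple(f)
    improved = True
    while improved:
        improved = False
        for alpha in range(n_labels):
            # Best expansion move: each node either keeps its current
            # label or switches to alpha. Solved by brute force here;
            # in practice this subproblem is a single min s-t cut.
            best = min(
                (tuple(alpha if x else fp for x, fp in zip(move, f))
                 for move in product((0, 1), repeat=len(f))),
                key=energy,
            )
            if energy(best) < energy(f):
                f, improved = best, True
    return f

f0 = (0,) * len(unary)  # arbitrary initial labeling
print("exact MAP:      ", exact_map(), energy(exact_map()))
print("alpha-expansion:", alpha_expansion(f0), energy(alpha_expansion(f0)))
```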

Cite this Paper


BibTeX
@InProceedings{pmlr-v84-lang18a,
  title     = {Optimality of Approximate Inference Algorithms on Stable Instances},
  author    = {Lang, Hunter and Sontag, David and Vijayaraghavan, Aravindan},
  booktitle = {Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics},
  pages     = {1157--1166},
  year      = {2018},
  editor    = {Storkey, Amos and Perez-Cruz, Fernando},
  volume    = {84},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--11 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v84/lang18a/lang18a.pdf},
  url       = {https://proceedings.mlr.press/v84/lang18a.html},
  abstract  = {Approximate algorithms for structured prediction problems—such as LP relaxations and the popular α-expansion algorithm (Boykov et al. 2001)—typically far exceed their theoretical performance guarantees on real-world instances. These algorithms often find solutions that are very close to optimal. The goal of this paper is to partially explain the performance of α-expansion and an LP relaxation algorithm on MAP inference in Ferromagnetic Potts models (FPMs). Our main results give stability conditions under which these two algorithms provably recover the optimal MAP solution. These theoretical results complement numerous empirical observations of good performance.}
}
Endnote
%0 Conference Paper
%T Optimality of Approximate Inference Algorithms on Stable Instances
%A Hunter Lang
%A David Sontag
%A Aravindan Vijayaraghavan
%B Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2018
%E Amos Storkey
%E Fernando Perez-Cruz
%F pmlr-v84-lang18a
%I PMLR
%P 1157--1166
%U https://proceedings.mlr.press/v84/lang18a.html
%V 84
%X Approximate algorithms for structured prediction problems—such as LP relaxations and the popular α-expansion algorithm (Boykov et al. 2001)—typically far exceed their theoretical performance guarantees on real-world instances. These algorithms often find solutions that are very close to optimal. The goal of this paper is to partially explain the performance of α-expansion and an LP relaxation algorithm on MAP inference in Ferromagnetic Potts models (FPMs). Our main results give stability conditions under which these two algorithms provably recover the optimal MAP solution. These theoretical results complement numerous empirical observations of good performance.
APA
Lang, H., Sontag, D. & Vijayaraghavan, A. (2018). Optimality of Approximate Inference Algorithms on Stable Instances. Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 84:1157-1166. Available from https://proceedings.mlr.press/v84/lang18a.html.