Graph Cuts Always Find a Global Optimum for Potts Models (With a Catch)

Hunter Lang, David Sontag, Aravindan Vijayaraghavan
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:5990-5999, 2021.

Abstract

We prove that the alpha-expansion algorithm for MAP inference always returns a globally optimal assignment for Markov Random Fields with Potts pairwise potentials, with a catch: the returned assignment is only guaranteed to be optimal for an instance within a small perturbation of the original problem instance. In other words, all local minima with respect to expansion moves are global minima of slightly perturbed versions of the problem. On "real-world" instances, MAP assignments of small perturbations of the problem should be very similar to the MAP assignment(s) of the original problem instance. We design an algorithm that can certify whether this is the case in practice. On several MAP inference problem instances from computer vision, this algorithm certifies that MAP solutions of all such perturbed instances are very close to solutions of the original instance. Taken together, these results give a cohesive explanation for the good performance of "graph cuts" algorithms in practice: every local expansion minimum is a global minimum of a small perturbation of the problem, and all of these global minima are close to the original solution.
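
To make the objects in the abstract concrete: the standard Potts MAP objective (notation assumed here, not taken from the paper) is to minimize over labelings x

    E(x) = \sum_{i \in V} \theta_i(x_i) + \sum_{(i,j) \in E} w_{ij} \, \mathbf{1}[x_i \neq x_j],  with all w_{ij} \ge 0.

An alpha-expansion move fixes a label alpha and lets every node either keep its current label or switch to alpha; the alpha-expansion algorithm finds the best such move via a min-cut and iterates until no move, for any alpha, lowers the energy. The following minimal Python sketch (ours, not the authors' code; brute-force enumeration stands in for the min-cut subroutine, so it only scales to toy instances) spells out these definitions:

import itertools

def potts_energy(x, unary, edges):
    # unary[i][k]: cost of assigning label k to node i;
    # edges maps (i, j) -> w_ij >= 0, paid whenever x[i] != x[j].
    e = sum(unary[i][xi] for i, xi in enumerate(x))
    e += sum(w for (i, j), w in edges.items() if x[i] != x[j])
    return e

def expansions(x, alpha):
    # All labelings reachable by one alpha-expansion move:
    # each node keeps its label or switches to alpha.
    for keep in itertools.product([True, False], repeat=len(x)):
        yield tuple(xi if k else alpha for xi, k in zip(x, keep))

def is_expansion_local_min(x, unary, edges, num_labels):
    # x is an expansion-local minimum if no single expansion move,
    # for any choice of alpha, strictly lowers the energy.
    e0 = potts_energy(x, unary, edges)
    return all(potts_energy(y, unary, edges) >= e0
               for alpha in range(num_labels)
               for y in expansions(x, alpha))

# Tiny 3-node chain with 2 labels, to exercise the definitions.
unary = [[0.0, 1.0], [1.0, 0.0], [0.0, 1.0]]  # unary[i][k]
edges = {(0, 1): 0.5, (1, 2): 0.5}            # Potts weights
best = min(itertools.product(range(2), repeat=3),
           key=lambda x: potts_energy(x, unary, edges))
print(best, potts_energy(best, unary, edges),
      is_expansion_local_min(best, unary, edges, 2))

The paper's result concerns the reverse direction: a labeling that merely passes the local-minimum check above is globally optimal for some slightly perturbed set of weights.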

Cite this Paper

BibTeX
@InProceedings{pmlr-v139-lang21a,
  title     = {Graph Cuts Always Find a Global Optimum for Potts Models (With a Catch)},
  author    = {Lang, Hunter and Sontag, David and Vijayaraghavan, Aravindan},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {5990--5999},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/lang21a/lang21a.pdf},
  url       = {https://proceedings.mlr.press/v139/lang21a.html}
}
Endnote
%0 Conference Paper
%T Graph Cuts Always Find a Global Optimum for Potts Models (With a Catch)
%A Hunter Lang
%A David Sontag
%A Aravindan Vijayaraghavan
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-lang21a
%I PMLR
%P 5990--5999
%U https://proceedings.mlr.press/v139/lang21a.html
%V 139
APA
Lang, H., Sontag, D., & Vijayaraghavan, A. (2021). Graph Cuts Always Find a Global Optimum for Potts Models (With a Catch). Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:5990-5999. Available from https://proceedings.mlr.press/v139/lang21a.html.