Random Coordinate Descent Methods for Minimizing Decomposable Submodular Functions

Alina Ene, Huy Nguyen
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:787-795, 2015.

Abstract

Submodular function minimization is a fundamental optimization problem that arises in several applications in machine learning and computer vision. The problem is known to be solvable in polynomial time, but general-purpose algorithms have high running times and are unsuitable for large-scale problems. Recent work has used convex optimization techniques to obtain very practical algorithms for minimizing functions that are sums of “simple” functions. In this paper, we use random coordinate descent methods to obtain algorithms with faster linear convergence rates and cheaper iteration costs. Compared to alternating projection methods, our algorithms do not rely on full-dimensional vector operations and they converge in significantly fewer iterations.
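For intuition, below is a minimal generic sketch of random coordinate descent applied to a smooth convex quadratic. It illustrates only the basic template the paper builds on (pick a random coordinate, take an exact minimizing step along it); the paper's actual algorithms operate on a dual problem over products of base polytopes of the summand functions, which is not reproduced here. All names and parameters are illustrative.

import numpy as np

# Generic random coordinate descent minimizing the convex quadratic
# f(x) = 0.5 * x^T A x - b^T x, with A symmetric positive definite.
# This is a template sketch, NOT the paper's specialized algorithm.
def random_coordinate_descent(A, b, n_iters=10000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(b)
    x = np.zeros(n)
    for _ in range(n_iters):
        i = rng.integers(n)           # pick one coordinate uniformly at random
        grad_i = A[i] @ x - b[i]      # partial derivative of f along e_i
        x[i] -= grad_i / A[i, i]      # exact minimization along coordinate i
    return x

# Usage example on a random well-conditioned positive definite system.
rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)
b = rng.standard_normal(5)
x = random_coordinate_descent(A, b)
print(np.linalg.norm(A @ x - b))      # residual should be near zero

Each iteration touches a single row of A, so the per-iteration cost is far below that of a full gradient step; this cheap-iteration property is the flavor of advantage the paper establishes over alternating projection methods.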

Cite this Paper


BibTeX
@InProceedings{pmlr-v37-ene15,
  title = {Random Coordinate Descent Methods for Minimizing Decomposable Submodular Functions},
  author = {Ene, Alina and Nguyen, Huy},
  booktitle = {Proceedings of the 32nd International Conference on Machine Learning},
  pages = {787--795},
  year = {2015},
  editor = {Bach, Francis and Blei, David},
  volume = {37},
  series = {Proceedings of Machine Learning Research},
  address = {Lille, France},
  month = {07--09 Jul},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v37/ene15.pdf},
  url = {https://proceedings.mlr.press/v37/ene15.html},
  abstract = {Submodular function minimization is a fundamental optimization problem that arises in several applications in machine learning and computer vision. The problem is known to be solvable in polynomial time, but general-purpose algorithms have high running times and are unsuitable for large-scale problems. Recent work has used convex optimization techniques to obtain very practical algorithms for minimizing functions that are sums of “simple” functions. In this paper, we use random coordinate descent methods to obtain algorithms with faster \emph{linear} convergence rates and cheaper iteration costs. Compared to alternating projection methods, our algorithms do not rely on full-dimensional vector operations and they converge in significantly fewer iterations.}
}
Endnote
%0 Conference Paper
%T Random Coordinate Descent Methods for Minimizing Decomposable Submodular Functions
%A Alina Ene
%A Huy Nguyen
%B Proceedings of the 32nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2015
%E Francis Bach
%E David Blei
%F pmlr-v37-ene15
%I PMLR
%P 787--795
%U https://proceedings.mlr.press/v37/ene15.html
%V 37
%X Submodular function minimization is a fundamental optimization problem that arises in several applications in machine learning and computer vision. The problem is known to be solvable in polynomial time, but general-purpose algorithms have high running times and are unsuitable for large-scale problems. Recent work has used convex optimization techniques to obtain very practical algorithms for minimizing functions that are sums of “simple” functions. In this paper, we use random coordinate descent methods to obtain algorithms with faster linear convergence rates and cheaper iteration costs. Compared to alternating projection methods, our algorithms do not rely on full-dimensional vector operations and they converge in significantly fewer iterations.
RIS
TY - CPAPER
TI - Random Coordinate Descent Methods for Minimizing Decomposable Submodular Functions
AU - Alina Ene
AU - Huy Nguyen
BT - Proceedings of the 32nd International Conference on Machine Learning
DA - 2015/06/01
ED - Francis Bach
ED - David Blei
ID - pmlr-v37-ene15
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 37
SP - 787
EP - 795
L1 - http://proceedings.mlr.press/v37/ene15.pdf
UR - https://proceedings.mlr.press/v37/ene15.html
AB - Submodular function minimization is a fundamental optimization problem that arises in several applications in machine learning and computer vision. The problem is known to be solvable in polynomial time, but general-purpose algorithms have high running times and are unsuitable for large-scale problems. Recent work has used convex optimization techniques to obtain very practical algorithms for minimizing functions that are sums of “simple” functions. In this paper, we use random coordinate descent methods to obtain algorithms with faster linear convergence rates and cheaper iteration costs. Compared to alternating projection methods, our algorithms do not rely on full-dimensional vector operations and they converge in significantly fewer iterations.
ER -
APA
Ene, A. & Nguyen, H. (2015). Random Coordinate Descent Methods for Minimizing Decomposable Submodular Functions. Proceedings of the 32nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 37:787-795. Available from https://proceedings.mlr.press/v37/ene15.html.
