Blended Conditional Gradients

Gábor Braun, Sebastian Pokutta, Dan Tu, Stephen Wright
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:735-743, 2019.

Abstract

We present a blended conditional gradient approach for minimizing a smooth convex function over a polytope P, combining the Frank–Wolfe algorithm (also called conditional gradient) with gradient-based steps, different from away steps and pairwise steps, but still achieving linear convergence for strongly convex functions, along with good practical performance. Our approach retains all favorable properties of conditional gradient algorithms, notably avoidance of projections onto P and maintenance of iterates as sparse convex combinations of a limited number of extreme points of P. The algorithm is lazy, making use of inexpensive inexact solutions of the linear programming subproblem that characterizes the conditional gradient approach. It decreases measures of optimality (primal and dual gaps) rapidly, both in the number of iterations and in wall-clock time, outperforming even the lazy conditional gradient algorithms of Braun et al. (2017). We also present a streamlined version of the algorithm that applies when P is the probability simplex.
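To illustrate the blending idea for the probability-simplex case mentioned at the end of the abstract, here is a minimal Python sketch. It is not the authors' algorithm as stated in the paper: the function name blended_cg_simplex, the specific short-step rules, and the gap-estimate halving schedule are simplifying assumptions. Over the simplex the linear programming oracle reduces to an argmin over gradient coordinates, so the sketch alternates between a gradient-based step inside the current active set and a Frank–Wolfe step toward the oracle vertex, depending on which gap dominates the running estimate.

import numpy as np

def blended_cg_simplex(grad, x0, L, max_iter=1000, tol=1e-8):
    # Sketch of a blended conditional gradient loop over the probability
    # simplex; `grad` returns the gradient of a smooth convex f with
    # smoothness constant L, and x0 must lie on the simplex.
    x = np.asarray(x0, dtype=float).copy()
    phi = None  # running dual-gap estimate, set from the first FW gap
    for _ in range(max_iter):
        g = grad(x)
        # Over the simplex the FW vertex is the coordinate with the smallest
        # gradient entry; the away vertex is the active coordinate with the
        # largest one.
        fw_idx = int(np.argmin(g))
        active = np.flatnonzero(x > 1e-12)
        away_idx = int(active[np.argmax(g[active])])
        fw_gap = float(g @ x - g[fw_idx])        # global Frank-Wolfe gap
        local_gap = float(g[away_idx] - g[fw_idx])  # gap over the active set
        if phi is None:
            phi = fw_gap / 2.0
        if fw_gap <= tol:
            break
        if local_gap >= phi:
            # Gradient-based step: shift weight from the worst active
            # coordinate to the best one (descent within the current face),
            # with a short step clamped to stay feasible.
            gamma = min(local_gap / (2.0 * L), x[away_idx])
            x[fw_idx] += gamma
            x[away_idx] -= gamma
        elif fw_gap >= phi:
            # Frank-Wolfe step toward the oracle vertex e_{fw_idx}.
            gamma = min(fw_gap / (2.0 * L), 1.0)
            x = (1.0 - gamma) * x
            x[fw_idx] += gamma
        else:
            phi /= 2.0  # neither test fired: tighten the gap estimate
    return x

# Hypothetical usage: project b onto the simplex by minimizing
# f(x) = 0.5 * ||x - b||^2, whose gradient is x - b and L = 1.
# x_star = blended_cg_simplex(lambda x: x - np.array([0.8, 0.5, -0.3]),
#                             np.ones(3) / 3, L=1.0)

Because both step types are restricted to the current vertices or a single new oracle vertex, the iterate stays a sparse convex combination of extreme points, and no projection onto the feasible set is ever needed, matching the properties highlighted in the abstract.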

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-braun19a, title = {Blended Conditional Gradients}, author = {Braun, G{\'a}bor and Pokutta, Sebastian and Tu, Dan and Wright, Stephen}, booktitle = {Proceedings of the 36th International Conference on Machine Learning}, pages = {735--743}, year = {2019}, editor = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan}, volume = {97}, series = {Proceedings of Machine Learning Research}, month = {09--15 Jun}, publisher = {PMLR}, pdf = {http://proceedings.mlr.press/v97/braun19a/braun19a.pdf}, url = {https://proceedings.mlr.press/v97/braun19a.html}, abstract = {We present a blended conditional gradient approach for minimizing a smooth convex function over a polytope P, combining the Frank--Wolfe algorithm (also called conditional gradient) with gradient-based steps, different from away steps and pairwise steps, but still achieving linear convergence for strongly convex functions, along with good practical performance. Our approach retains all favorable properties of conditional gradient algorithms, notably avoidance of projections onto P and maintenance of iterates as sparse convex combinations of a limited number of extreme points of P. The algorithm is lazy, making use of inexpensive inexact solutions of the linear programming subproblem that characterizes the conditional gradient approach. It decreases measures of optimality (primal and dual gaps) rapidly, both in the number of iterations and in wall-clock time, outperforming even the lazy conditional gradient algorithms of Braun et al. (2017). We also present a streamlined version of the algorithm that applies when P is the probability simplex.} }
Endnote
%0 Conference Paper %T Blended Conditional Gradients %A Gábor Braun %A Sebastian Pokutta %A Dan Tu %A Stephen Wright %B Proceedings of the 36th International Conference on Machine Learning %C Proceedings of Machine Learning Research %D 2019 %E Kamalika Chaudhuri %E Ruslan Salakhutdinov %F pmlr-v97-braun19a %I PMLR %P 735--743 %U https://proceedings.mlr.press/v97/braun19a.html %V 97 %X We present a blended conditional gradient approach for minimizing a smooth convex function over a polytope P, combining the Frank–Wolfe algorithm (also called conditional gradient) with gradient-based steps, different from away steps and pairwise steps, but still achieving linear convergence for strongly convex functions, along with good practical performance. Our approach retains all favorable properties of conditional gradient algorithms, notably avoidance of projections onto P and maintenance of iterates as sparse convex combinations of a limited number of extreme points of P. The algorithm is lazy, making use of inexpensive inexact solutions of the linear programming subproblem that characterizes the conditional gradient approach. It decreases measures of optimality (primal and dual gaps) rapidly, both in the number of iterations and in wall-clock time, outperforming even the lazy conditional gradient algorithms of Braun et al. (2017). We also present a streamlined version of the algorithm that applies when P is the probability simplex.
APA
Braun, G., Pokutta, S., Tu, D. & Wright, S. (2019). Blended Conditional Gradients. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:735-743. Available from https://proceedings.mlr.press/v97/braun19a.html.
