Almost surely constrained convex optimization

Olivier Fercoq, Ahmet Alacaoglu, Ion Necoara, Volkan Cevher
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:1910-1919, 2019.

Abstract

We propose a stochastic gradient framework for solving stochastic composite convex optimization problems with a (possibly infinite) number of linear inclusion constraints that must be satisfied almost surely. We use smoothing and homotopy techniques to handle the constraints without the need for matrix-valued projections. We show that our stochastic gradient algorithm achieves an $\mathcal{O}(\log(k)/\sqrt{k})$ convergence rate for general convex objectives and an $\mathcal{O}(\log(k)/k)$ convergence rate for restricted strongly convex objectives. These rates are known to be optimal up to logarithmic factors, even in the unconstrained setting. We conduct numerical experiments on basis pursuit, hard-margin support vector machine, and portfolio optimization problems and show that our algorithm achieves state-of-the-art practical performance.
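For concreteness, the problem class described in the abstract can be sketched as follows; the notation below is illustrative rather than the paper's exact template. A stochastic composite objective is minimized subject to linear inclusion constraints, indexed by the random variable $\xi$, that must hold almost surely, and the constraints are handled through a quadratically smoothed penalty whose smoothness parameter $\beta_k$ is driven to zero along a homotopy:

% Illustrative sketch only (our notation); the paper's exact formulation and
% smoothing scheme may differ in details.
\begin{align*}
  &\min_{x \in \mathbb{R}^n} \;\; \mathbb{E}_{\xi}\big[ f(x,\xi) \big] + g(x)
  \quad \text{subject to} \quad A_{\xi} x - b_{\xi} \in \mathcal{K}_{\xi} \;\; \text{almost surely},\\
  &\text{smoothed surrogate at iteration } k:\quad
  \mathbb{E}_{\xi}\big[ f(x,\xi) \big] + g(x)
  + \frac{1}{2\beta_k}\, \mathbb{E}_{\xi}\Big[ \operatorname{dist}^2\!\big(A_{\xi} x - b_{\xi},\, \mathcal{K}_{\xi}\big) \Big],
  \qquad \beta_k \downarrow 0 .
\end{align*}

Here $f(\cdot,\xi)$ is a convex loss, $g$ is a simple convex regularizer, and each $\mathcal{K}_{\xi}$ is a simple closed convex set. Evaluating the gradient of the penalty term only requires projecting the single residual $A_{\xi} x - b_{\xi}$ onto $\mathcal{K}_{\xi}$, which is consistent with the abstract's statement that matrix-valued projections are not needed.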

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-fercoq19a,
  title     = {Almost surely constrained convex optimization},
  author    = {Fercoq, Olivier and Alacaoglu, Ahmet and Necoara, Ion and Cevher, Volkan},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {1910--1919},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/fercoq19a/fercoq19a.pdf},
  url       = {https://proceedings.mlr.press/v97/fercoq19a.html}
}
