Exploiting Strong Convexity from Data with Primal-Dual First-Order Algorithms

Jialei Wang, Lin Xiao
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:3694-3702, 2017.

Abstract

We consider empirical risk minimization of linear predictors with convex loss functions. Such problems can be reformulated as convex-concave saddle point problems and solved by primal-dual first-order algorithms. However, primal-dual algorithms often require explicit strongly convex regularization in order to obtain fast linear convergence, and the required dual proximal mapping may not admit a closed-form or efficient solution. In this paper, we develop both batch and randomized primal-dual algorithms that can exploit strong convexity from data adaptively and are capable of achieving linear convergence even without regularization. We also present dual-free variants of adaptive primal-dual algorithms that do not need the dual proximal mapping, which are especially suitable for logistic regression.
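
For context, the saddle-point reformulation mentioned in the abstract can be sketched as follows; the notation here is generic and illustrative, not necessarily the paper's own. Writing the per-example losses as \phi_i, the data points as a_i, and the regularizer as g, the regularized ERM problem

\min_{w \in \mathbb{R}^d} \; \frac{1}{n}\sum_{i=1}^{n} \phi_i(a_i^\top w) + g(w)

can be rewritten, via the convex conjugates \phi_i^*, as the convex-concave saddle point problem

\min_{w \in \mathbb{R}^d} \; \max_{\alpha \in \mathbb{R}^n} \; \frac{1}{n}\sum_{i=1}^{n} \bigl( \alpha_i \, a_i^\top w - \phi_i^*(\alpha_i) \bigr) + g(w),

to which primal-dual first-order algorithms can be applied.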

Cite this Paper


BibTeX
@InProceedings{pmlr-v70-wang17l,
  title     = {Exploiting Strong Convexity from Data with Primal-Dual First-Order Algorithms},
  author    = {Jialei Wang and Lin Xiao},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {3694--3702},
  year      = {2017},
  editor    = {Precup, Doina and Teh, Yee Whye},
  volume    = {70},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--11 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v70/wang17l/wang17l.pdf},
  url       = {https://proceedings.mlr.press/v70/wang17l.html},
  abstract  = {We consider empirical risk minimization of linear predictors with convex loss functions. Such problems can be reformulated as convex-concave saddle point problems and solved by primal-dual first-order algorithms. However, primal-dual algorithms often require explicit strongly convex regularization in order to obtain fast linear convergence, and the required dual proximal mapping may not admit closed-form or efficient solution. In this paper, we develop both batch and randomized primal-dual algorithms that can exploit strong convexity from data adaptively and are capable of achieving linear convergence even without regularization. We also present dual-free variants of adaptive primal-dual algorithms that do not need the dual proximal mapping, which are especially suitable for logistic regression.}
}
Endnote
%0 Conference Paper
%T Exploiting Strong Convexity from Data with Primal-Dual First-Order Algorithms
%A Jialei Wang
%A Lin Xiao
%B Proceedings of the 34th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Doina Precup
%E Yee Whye Teh
%F pmlr-v70-wang17l
%I PMLR
%P 3694--3702
%U https://proceedings.mlr.press/v70/wang17l.html
%V 70
%X We consider empirical risk minimization of linear predictors with convex loss functions. Such problems can be reformulated as convex-concave saddle point problems and solved by primal-dual first-order algorithms. However, primal-dual algorithms often require explicit strongly convex regularization in order to obtain fast linear convergence, and the required dual proximal mapping may not admit closed-form or efficient solution. In this paper, we develop both batch and randomized primal-dual algorithms that can exploit strong convexity from data adaptively and are capable of achieving linear convergence even without regularization. We also present dual-free variants of adaptive primal-dual algorithms that do not need the dual proximal mapping, which are especially suitable for logistic regression.
APA
Wang, J. & Xiao, L. (2017). Exploiting Strong Convexity from Data with Primal-Dual First-Order Algorithms. Proceedings of the 34th International Conference on Machine Learning, in Proceedings of Machine Learning Research 70:3694-3702. Available from https://proceedings.mlr.press/v70/wang17l.html.
