Fast Convergence of Online Pairwise Learning Algorithms

Martin Boissier, Siwei Lyu, Yiming Ying, Ding-Xuan Zhou
Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, PMLR 51:204-212, 2016.

Abstract

Pairwise learning usually refers to a learning task that involves a loss function depending on pairs of examples; the most notable instances are bipartite ranking, metric learning and AUC maximization. In this paper, we focus on online learning algorithms for pairwise learning problems without strong convexity, for which all previously known algorithms achieve a convergence rate of $\mathcal{O}(1/\sqrt{T})$ after $T$ iterations. In particular, we study an online learning algorithm for pairwise learning with a least-squares loss function in an unconstrained setting. We prove that its last iterate converges to the desired minimizer at a rate arbitrarily close to $\mathcal{O}(1/T)$, up to a logarithmic factor. The rates for this algorithm are established in high probability under the assumption of polynomially decaying step sizes.
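The setting the abstract describes can be illustrated with a minimal sketch. This is not the paper's exact algorithm (and carries none of its high-probability guarantees); it is an assumed simplification in which each incoming example is paired with its immediate predecessor, the pairwise least-squares loss $(w^\top(x - x') - (y - y'))^2$ is minimized by unconstrained gradient steps, and the step sizes decay polynomially as $\eta_t = \eta_0 t^{-\theta}$. All function and parameter names below are illustrative.

```python
import numpy as np

def pairwise_ls_loss(w, X, y):
    """Mean least-squares pairwise loss over consecutive pairs."""
    dx = X[1:] - X[:-1]
    dy = y[1:] - y[:-1]
    return np.mean((dx @ w - dy) ** 2)

def online_pairwise_ls(X, y, eta0=0.05, theta=0.75):
    """One online pass: pair each example with its predecessor and take
    an unconstrained gradient step on the pairwise least-squares loss,
    with polynomially decaying step sizes eta_t = eta0 * t**(-theta)."""
    w = np.zeros(X.shape[1])
    for t in range(1, len(X)):
        dx = X[t] - X[t - 1]                    # pair (z_t, z_{t-1})
        residual = w @ dx - (y[t] - y[t - 1])
        eta = eta0 * t ** (-theta)              # decaying step size
        w -= eta * 2.0 * residual * dx          # gradient of the pair loss
    return w
```

Pairing only consecutive examples keeps the sketch to one pass and constant memory; the algorithms analyzed in the paper handle pairs of examples more generally.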

Cite this Paper


BibTeX
@InProceedings{pmlr-v51-boissier16,
  title     = {Fast Convergence of Online Pairwise Learning Algorithms},
  author    = {Boissier, Martin and Lyu, Siwei and Ying, Yiming and Zhou, Ding-Xuan},
  booktitle = {Proceedings of the 19th International Conference on Artificial Intelligence and Statistics},
  pages     = {204--212},
  year      = {2016},
  editor    = {Gretton, Arthur and Robert, Christian C.},
  volume    = {51},
  series    = {Proceedings of Machine Learning Research},
  address   = {Cadiz, Spain},
  month     = {09--11 May},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v51/boissier16.pdf},
  url       = {https://proceedings.mlr.press/v51/boissier16.html},
  abstract  = {Pairwise learning usually refers to a learning task that involves a loss function depending on pairs of examples; the most notable instances are bipartite ranking, metric learning and AUC maximization. In this paper, we focus on online learning algorithms for pairwise learning problems without strong convexity, for which all previously known algorithms achieve a convergence rate of $\mathcal{O}(1/\sqrt{T})$ after $T$ iterations. In particular, we study an online learning algorithm for pairwise learning with a least-squares loss function in an unconstrained setting. We prove that its last iterate converges to the desired minimizer at a rate arbitrarily close to $\mathcal{O}(1/T)$, up to a logarithmic factor. The rates for this algorithm are established in high probability under the assumption of polynomially decaying step sizes.}
}
Endnote
%0 Conference Paper
%T Fast Convergence of Online Pairwise Learning Algorithms
%A Martin Boissier
%A Siwei Lyu
%A Yiming Ying
%A Ding-Xuan Zhou
%B Proceedings of the 19th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2016
%E Arthur Gretton
%E Christian C. Robert
%F pmlr-v51-boissier16
%I PMLR
%P 204--212
%U https://proceedings.mlr.press/v51/boissier16.html
%V 51
%X Pairwise learning usually refers to a learning task that involves a loss function depending on pairs of examples; the most notable instances are bipartite ranking, metric learning and AUC maximization. In this paper, we focus on online learning algorithms for pairwise learning problems without strong convexity, for which all previously known algorithms achieve a convergence rate of $\mathcal{O}(1/\sqrt{T})$ after $T$ iterations. In particular, we study an online learning algorithm for pairwise learning with a least-squares loss function in an unconstrained setting. We prove that its last iterate converges to the desired minimizer at a rate arbitrarily close to $\mathcal{O}(1/T)$, up to a logarithmic factor. The rates for this algorithm are established in high probability under the assumption of polynomially decaying step sizes.
RIS
TY  - CPAPER
TI  - Fast Convergence of Online Pairwise Learning Algorithms
AU  - Martin Boissier
AU  - Siwei Lyu
AU  - Yiming Ying
AU  - Ding-Xuan Zhou
BT  - Proceedings of the 19th International Conference on Artificial Intelligence and Statistics
DA  - 2016/05/02
ED  - Arthur Gretton
ED  - Christian C. Robert
ID  - pmlr-v51-boissier16
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 51
SP  - 204
EP  - 212
L1  - http://proceedings.mlr.press/v51/boissier16.pdf
UR  - https://proceedings.mlr.press/v51/boissier16.html
AB  - Pairwise learning usually refers to a learning task that involves a loss function depending on pairs of examples; the most notable instances are bipartite ranking, metric learning and AUC maximization. In this paper, we focus on online learning algorithms for pairwise learning problems without strong convexity, for which all previously known algorithms achieve a convergence rate of $\mathcal{O}(1/\sqrt{T})$ after $T$ iterations. In particular, we study an online learning algorithm for pairwise learning with a least-squares loss function in an unconstrained setting. We prove that its last iterate converges to the desired minimizer at a rate arbitrarily close to $\mathcal{O}(1/T)$, up to a logarithmic factor. The rates for this algorithm are established in high probability under the assumption of polynomially decaying step sizes.
ER  -
APA
Boissier, M., Lyu, S., Ying, Y. &amp; Zhou, D. (2016). Fast Convergence of Online Pairwise Learning Algorithms. Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 51:204-212. Available from https://proceedings.mlr.press/v51/boissier16.html.