The Last-Iterate Convergence Rate of Optimistic Mirror Descent in Stochastic Variational Inequalities

Waïss Azizian, Franck Iutzeler, Jérôme Malick, Panayotis Mertikopoulos
Proceedings of Thirty Fourth Conference on Learning Theory, PMLR 134:326-358, 2021.

Abstract

In this paper, we analyze the local convergence rate of optimistic mirror descent methods in stochastic variational inequalities, a class of optimization problems with important applications to learning theory and machine learning. Our analysis reveals an intricate relation between the algorithm’s rate of convergence and the local geometry induced by the method’s underlying Bregman function. We quantify this relation by means of the Legendre exponent, a notion that we introduce to measure the growth rate of the Bregman divergence relative to the ambient norm near a solution. We show that this exponent determines both the optimal step-size policy of the algorithm and the optimal rates attained, explaining in this way the differences observed for some popular Bregman functions (Euclidean projection, negative entropy, fractional power, etc.).
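For context, the following minimal sketch (ours, not taken from the paper) recalls the objects the abstract refers to: the Bregman divergence induced by a distance-generating function h, and a mirror-prox-style optimistic update for a variational inequality with operator V. The paper's exact scheme and its formal definition of the Legendre exponent may differ in details.

% Sketch only; notation is standard, but the paper's precise setup may differ.
% Bregman divergence of a distance-generating function h:
\[
  D(p, x) \;=\; h(p) - h(x) - \langle \nabla h(x),\, p - x \rangle .
\]
% A mirror-prox-style optimistic update with step-size \gamma_t over a
% feasible set \mathcal{X}:
\[
  X_{t+1/2} \;=\; \arg\min_{x \in \mathcal{X}} \bigl\{ \gamma_t \langle V(X_t), x \rangle + D(x, X_t) \bigr\},
  \qquad
  X_{t+1} \;=\; \arg\min_{x \in \mathcal{X}} \bigl\{ \gamma_t \langle V(X_{t+1/2}), x \rangle + D(x, X_t) \bigr\}.
\]
% The Legendre exponent (defined precisely in the paper) quantifies how fast
% D(x^*, x) grows relative to \|x - x^*\| as x approaches a solution x^*:
% Euclidean regularization gives the usual quadratic growth, whereas, e.g.,
% the negative entropy degrades near the boundary of the simplex.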

Cite this Paper

BibTeX
@InProceedings{pmlr-v134-azizian21a,
  title     = {The Last-Iterate Convergence Rate of Optimistic Mirror Descent in Stochastic Variational Inequalities},
  author    = {Azizian, Wa\"iss and Iutzeler, Franck and Malick, J\'er\^ome and Mertikopoulos, Panayotis},
  booktitle = {Proceedings of Thirty Fourth Conference on Learning Theory},
  pages     = {326--358},
  year      = {2021},
  editor    = {Belkin, Mikhail and Kpotufe, Samory},
  volume    = {134},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--19 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v134/azizian21a/azizian21a.pdf},
  url       = {https://proceedings.mlr.press/v134/azizian21a.html}
}
Endnote
%0 Conference Paper
%T The Last-Iterate Convergence Rate of Optimistic Mirror Descent in Stochastic Variational Inequalities
%A Waïss Azizian
%A Franck Iutzeler
%A Jérôme Malick
%A Panayotis Mertikopoulos
%B Proceedings of Thirty Fourth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2021
%E Mikhail Belkin
%E Samory Kpotufe
%F pmlr-v134-azizian21a
%I PMLR
%P 326--358
%U https://proceedings.mlr.press/v134/azizian21a.html
%V 134
APA
Azizian, W., Iutzeler, F., Malick, J. & Mertikopoulos, P. (2021). The Last-Iterate Convergence Rate of Optimistic Mirror Descent in Stochastic Variational Inequalities. Proceedings of Thirty Fourth Conference on Learning Theory, in Proceedings of Machine Learning Research 134:326-358. Available from https://proceedings.mlr.press/v134/azizian21a.html.