Non-asymptotic Analysis of Biased Stochastic Approximation Scheme

Belhal Karimi, Blazej Miasojedow, Eric Moulines, Hoi-To Wai
Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1944-1974, 2019.

Abstract

Stochastic approximation (SA) is a key method used in statistical learning. Recently, its non-asymptotic convergence analysis has been considered in many papers. However, most of the prior analyses are made under restrictive assumptions such as unbiased gradient estimates and a convex objective function, which significantly limit their applicability to sophisticated tasks such as online and reinforcement learning. These restrictions are all essentially relaxed in this work. In particular, we analyze a general SA scheme to minimize a non-convex, smooth objective function. We consider an update procedure whose drift term depends on a state-dependent Markov chain and whose mean field is not necessarily of gradient type, covering approximate second-order methods and allowing asymptotic bias in the one-step updates. We illustrate these settings with the online EM algorithm and the policy-gradient method for average reward maximization in reinforcement learning.
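For concreteness, schemes of this kind take the form theta_{n+1} = theta_n - gamma_{n+1} H_{theta_n}(X_{n+1}), where {X_n} is a Markov chain whose transition kernel may depend on the current iterate, so H_{theta_n}(X_{n+1}) is a possibly biased estimate of the mean field. Below is a minimal Python sketch of such a biased SA recursion on a toy problem; the quartic objective, the AR(1) chain, and all names in the code are our own illustrative assumptions, not the paper's.

import numpy as np

# Minimal sketch of a biased SA scheme (illustrative; not the paper's code).
# Toy objective: V(theta) = theta**4 / 4 - theta**2 / 2, non-convex and smooth.
# The drift is evaluated along a state-dependent Markov chain, so each
# one-step update is a biased estimate of the mean field -grad V(theta).

rng = np.random.default_rng(0)

def grad_V(theta):
    # Gradient of the smooth, non-convex toy objective.
    return theta**3 - theta

def markov_step(x, theta):
    # State-dependent Markov chain: an AR(1) whose stationary mean depends
    # on theta, inducing an asymptotic bias in the drift term.
    return 0.9 * x + 0.1 * np.tanh(theta) + 0.1 * rng.standard_normal()

theta, x = 2.0, 0.0
for n in range(1, 5001):
    x = markov_step(x, theta)        # X_{n+1} ~ P_{theta_n}(X_n, .)
    gamma = 0.5 / np.sqrt(n)         # diminishing step size gamma_{n+1}
    drift = grad_V(theta) + x        # biased drift H_{theta_n}(X_{n+1})
    theta = theta - gamma * drift    # theta_{n+1} = theta_n - gamma * drift

print(f"final theta: {theta:.4f}")

The sketch is meant only to exhibit the structure of the update: the noise X is Markovian and depends on theta, so the drift is neither i.i.d. nor unbiased, which is exactly the regime the paper's non-asymptotic bounds address.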

Cite this Paper


BibTeX
@InProceedings{pmlr-v99-karimi19a,
  title     = {Non-asymptotic Analysis of Biased Stochastic Approximation Scheme},
  author    = {Karimi, Belhal and Miasojedow, Blazej and Moulines, Eric and Wai, Hoi-To},
  booktitle = {Proceedings of the Thirty-Second Conference on Learning Theory},
  pages     = {1944--1974},
  year      = {2019},
  editor    = {Beygelzimer, Alina and Hsu, Daniel},
  volume    = {99},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--28 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v99/karimi19a/karimi19a.pdf},
  url       = {https://proceedings.mlr.press/v99/karimi19a.html},
  abstract  = {Stochastic approximation (SA) is a key method used in statistical learning. Recently, its non-asymptotic convergence analysis has been considered in many papers. However, most of the prior analyses are made under restrictive assumptions such as unbiased gradient estimates and a convex objective function, which significantly limit their applicability to sophisticated tasks such as online and reinforcement learning. These restrictions are all essentially relaxed in this work. In particular, we analyze a general SA scheme to minimize a non-convex, smooth objective function. We consider an update procedure whose drift term depends on a state-dependent Markov chain and whose mean field is not necessarily of gradient type, covering approximate second-order methods and allowing asymptotic bias in the one-step updates. We illustrate these settings with the online EM algorithm and the policy-gradient method for average reward maximization in reinforcement learning.}
}
Endnote
%0 Conference Paper
%T Non-asymptotic Analysis of Biased Stochastic Approximation Scheme
%A Belhal Karimi
%A Blazej Miasojedow
%A Eric Moulines
%A Hoi-To Wai
%B Proceedings of the Thirty-Second Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2019
%E Alina Beygelzimer
%E Daniel Hsu
%F pmlr-v99-karimi19a
%I PMLR
%P 1944--1974
%U https://proceedings.mlr.press/v99/karimi19a.html
%V 99
%X Stochastic approximation (SA) is a key method used in statistical learning. Recently, its non-asymptotic convergence analysis has been considered in many papers. However, most of the prior analyses are made under restrictive assumptions such as unbiased gradient estimates and a convex objective function, which significantly limit their applicability to sophisticated tasks such as online and reinforcement learning. These restrictions are all essentially relaxed in this work. In particular, we analyze a general SA scheme to minimize a non-convex, smooth objective function. We consider an update procedure whose drift term depends on a state-dependent Markov chain and whose mean field is not necessarily of gradient type, covering approximate second-order methods and allowing asymptotic bias in the one-step updates. We illustrate these settings with the online EM algorithm and the policy-gradient method for average reward maximization in reinforcement learning.
APA
Karimi, B., Miasojedow, B., Moulines, E., & Wai, H. (2019). Non-asymptotic analysis of biased stochastic approximation scheme. Proceedings of the Thirty-Second Conference on Learning Theory, in Proceedings of Machine Learning Research, 99:1944-1974. Available from https://proceedings.mlr.press/v99/karimi19a.html.