A Reduction from Reinforcement Learning to No-Regret Online Learning

Ching-An Cheng, Remi Tachet des Combes, Byron Boots, Geoff Gordon
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:3514-3524, 2020.

Abstract

We present a reduction from reinforcement learning (RL) to no-regret online learning based on the saddle-point formulation of RL, by which "any" online algorithm with sublinear regret can generate policies with provable performance guarantees. This new perspective decouples the RL problem into two parts: regret minimization and function approximation. The first part admits a standard online-learning analysis, and the second part can be quantified independently of the learning algorithm. Therefore, the proposed reduction can be used as a tool to systematically design new RL algorithms. We demonstrate this idea by devising a simple RL algorithm based on mirror descent and the generative-model oracle. For any $\gamma$-discounted tabular RL problem, with probability at least $1-\delta$, it learns an $\epsilon$-optimal policy using at most $\tilde{O}\left(\frac{|\mathcal{S}||\mathcal{A}|\log(\frac{1}{\delta})}{(1-\gamma)^4\epsilon^2}\right)$ samples. Furthermore, this algorithm admits a direct extension to linearly parameterized function approximators for large-scale applications, with computation and sample complexities independent of $|\mathcal{S}|$, $|\mathcal{A}|$, though at the cost of potential approximation bias.
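To make the reduction concrete, the sketch below illustrates in Python/NumPy the general idea of running mirror descent on the saddle-point (linear-programming) formulation of a $\gamma$-discounted tabular MDP: an occupancy-measure player ascends the Lagrangian while a value-function player descends it. This is a simplified illustration under stated assumptions, not the authors' exact algorithm: it assumes the transition tensor P and reward table r are fully known and uses deterministic updates, whereas the paper works with stochastic estimates from a generative-model oracle; all names (mirror_descent_saddle, eta_mu, eta_v) are illustrative.

    # Minimal sketch: mirror descent on the Lagrangian of the LP formulation of a
    # gamma-discounted tabular MDP. Assumes known P, r (the paper instead samples
    # transitions from a generative-model oracle).
    import numpy as np

    def mirror_descent_saddle(P, r, rho, gamma, iters=5000, eta_mu=0.1, eta_v=0.1):
        """P: (S, A, S) transition tensor, r: (S, A) rewards in [0, 1],
        rho: (S,) initial-state distribution. Returns a policy of shape (S, A)."""
        S, A = r.shape
        mu = np.full((S, A), 1.0 / (S * A))      # occupancy-measure player (simplex over S x A)
        v = np.zeros(S)                           # value-function player (box constraint)
        mu_avg = np.zeros_like(mu)
        for _ in range(iters):
            # Lagrangian: L(mu, v) = (1-gamma) <rho, v>
            #                       + sum_{s,a} mu(s,a) * (r(s,a) + gamma * (P v)(s,a) - v(s))
            adv = r + gamma * (P @ v) - v[:, None]            # gradient of L w.r.t. mu
            # mu ascends via entropic mirror descent (multiplicative weights on the simplex)
            mu *= np.exp(eta_mu * adv)
            mu /= mu.sum()
            # v descends via projected gradient onto [0, 1/(1-gamma)]^S (rewards in [0, 1])
            grad_v = (1 - gamma) * rho + gamma * np.einsum('sa,sap->p', mu, P) - mu.sum(axis=1)
            v = np.clip(v - eta_v * grad_v, 0.0, 1.0 / (1 - gamma))
            mu_avg += mu
        mu_avg /= iters
        # Extract a policy from the averaged occupancy measure: pi(a|s) = mu(s,a) / sum_a mu(s,a)
        return mu_avg / np.clip(mu_avg.sum(axis=1, keepdims=True), 1e-12, None)

In this sketch, the suboptimality of the policy extracted from the averaged occupancy measure is controlled by the regret of the two online players, which is the mechanism the reduction formalizes; the paper additionally handles sampled gradients and function approximation.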

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-cheng20b, title = {A Reduction from Reinforcement Learning to No-Regret Online Learning}, author = {Cheng, Ching-An and des Combes, Remi Tachet and Boots, Byron and Gordon, Geoff}, booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics}, pages = {3514--3524}, year = {2020}, editor = {Chiappa, Silvia and Calandra, Roberto}, volume = {108}, series = {Proceedings of Machine Learning Research}, month = {26--28 Aug}, publisher = {PMLR}, pdf = {http://proceedings.mlr.press/v108/cheng20b/cheng20b.pdf}, url = {https://proceedings.mlr.press/v108/cheng20b.html}, abstract = {We present a reduction from reinforcement learning (RL) to no-regret online learning based on the saddle-point formulation of RL, by which "any" online algorithm with sublinear regret can generate policies with provable performance guarantees. This new perspective decouples the RL problem into two parts: regret minimization and function approximation. The first part admits a standard online-learning analysis, and the second part can be quantified independently of the learning algorithm. Therefore, the proposed reduction can be used as a tool to systematically design new RL algorithms. We demonstrate this idea by devising a simple RL algorithm based on mirror descent and the generative-model oracle. For any $\gamma$-discounted tabular RL problem, with probability at least $1-\delta$, it learns an $\epsilon$-optimal policy using at most $\tilde{O}\left(\frac{|\mathcal{S}||\mathcal{A}|\log(\frac{1}{\delta})}{(1-\gamma)^4\epsilon^2}\right)$ samples. Furthermore, this algorithm admits a direct extension to linearly parameterized function approximators for large-scale applications, with computation and sample complexities independent of $|\mathcal{S}|$, $|\mathcal{A}|$, though at the cost of potential approximation bias.} }
Endnote
%0 Conference Paper %T A Reduction from Reinforcement Learning to No-Regret Online Learning %A Ching-An Cheng %A Remi Tachet des Combes %A Byron Boots %A Geoff Gordon %B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics %C Proceedings of Machine Learning Research %D 2020 %E Silvia Chiappa %E Roberto Calandra %F pmlr-v108-cheng20b %I PMLR %P 3514--3524 %U https://proceedings.mlr.press/v108/cheng20b.html %V 108 %X We present a reduction from reinforcement learning (RL) to no-regret online learning based on the saddle-point formulation of RL, by which "any" online algorithm with sublinear regret can generate policies with provable performance guarantees. This new perspective decouples the RL problem into two parts: regret minimization and function approximation. The first part admits a standard online-learning analysis, and the second part can be quantified independently of the learning algorithm. Therefore, the proposed reduction can be used as a tool to systematically design new RL algorithms. We demonstrate this idea by devising a simple RL algorithm based on mirror descent and the generative-model oracle. For any $\gamma$-discounted tabular RL problem, with probability at least $1-\delta$, it learns an $\epsilon$-optimal policy using at most $\tilde{O}\left(\frac{|\mathcal{S}||\mathcal{A}|\log(\frac{1}{\delta})}{(1-\gamma)^4\epsilon^2}\right)$ samples. Furthermore, this algorithm admits a direct extension to linearly parameterized function approximators for large-scale applications, with computation and sample complexities independent of $|\mathcal{S}|$, $|\mathcal{A}|$, though at the cost of potential approximation bias.
APA
Cheng, C., Tachet des Combes, R., Boots, B. & Gordon, G. (2020). A Reduction from Reinforcement Learning to No-Regret Online Learning. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:3514-3524. Available from https://proceedings.mlr.press/v108/cheng20b.html.