A Contraction Approach to Model-based Reinforcement Learning

Ting-Han Fan, Peter Ramadge
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:325-333, 2021.

Abstract

Despite its experimental success, Model-based Reinforcement Learning still lacks a complete theoretical understanding. To this end, we analyze the error in the cumulative reward using a contraction approach. We consider both stochastic and deterministic state transitions for continuous (non-discrete) state and action spaces. This approach does not require strong assumptions and recovers the typical error bound that is quadratic in the horizon. We prove that branched rollouts can reduce this error and are essential for deterministic transitions to admit a Bellman contraction. Our analysis of the policy mismatch error also applies to Imitation Learning, where we show that GAN-type learning has an advantage over Behavioral Cloning when its discriminator is well-trained.
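For background only (this sketch is not taken from the paper, whose operators, assumptions, and constants differ), the contraction the abstract alludes to is of the same type as the standard Bellman contraction for a discounted MDP with discount factor $\gamma \in [0,1)$, reward $r$, transition kernel $P$, and policy $\pi$:

\[
(\mathcal{T}^{\pi} V)(s) \;=\; \mathbb{E}_{a \sim \pi(\cdot \mid s),\; s' \sim P(\cdot \mid s,a)}\!\big[\, r(s,a) + \gamma\, V(s') \,\big],
\qquad
\big\| \mathcal{T}^{\pi} V_1 - \mathcal{T}^{\pi} V_2 \big\|_{\infty} \;\le\; \gamma\, \big\| V_1 - V_2 \big\|_{\infty}.
\]

Under a contraction of this kind, a per-step model or policy error of size $\epsilon$ compounds into a cumulative-reward gap on the order of $\epsilon R_{\max}/(1-\gamma)^2$ (for rewards bounded by $R_{\max}$), i.e., quadratic in the effective horizon $1/(1-\gamma)$. This is the sense in which the error bound above is described as quadratic in the horizon.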

Cite this Paper


BibTeX
@InProceedings{pmlr-v130-fan21a,
  title     = {A Contraction Approach to Model-based Reinforcement Learning},
  author    = {Fan, Ting-Han and Ramadge, Peter},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages     = {325--333},
  year      = {2021},
  editor    = {Banerjee, Arindam and Fukumizu, Kenji},
  volume    = {130},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v130/fan21a/fan21a.pdf},
  url       = {https://proceedings.mlr.press/v130/fan21a.html},
  abstract  = {Despite its experimental success, Model-based Reinforcement Learning still lacks a complete theoretical understanding. To this end, we analyze the error in the cumulative reward using a contraction approach. We consider both stochastic and deterministic state transitions for continuous (non-discrete) state and action spaces. This approach does not require strong assumptions and recovers the typical error bound that is quadratic in the horizon. We prove that branched rollouts can reduce this error and are essential for deterministic transitions to admit a Bellman contraction. Our analysis of the policy mismatch error also applies to Imitation Learning, where we show that GAN-type learning has an advantage over Behavioral Cloning when its discriminator is well-trained.}
}
Endnote
%0 Conference Paper
%T A Contraction Approach to Model-based Reinforcement Learning
%A Ting-Han Fan
%A Peter Ramadge
%B Proceedings of The 24th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2021
%E Arindam Banerjee
%E Kenji Fukumizu
%F pmlr-v130-fan21a
%I PMLR
%P 325--333
%U https://proceedings.mlr.press/v130/fan21a.html
%V 130
%X Despite its experimental success, Model-based Reinforcement Learning still lacks a complete theoretical understanding. To this end, we analyze the error in the cumulative reward using a contraction approach. We consider both stochastic and deterministic state transitions for continuous (non-discrete) state and action spaces. This approach does not require strong assumptions and recovers the typical error bound that is quadratic in the horizon. We prove that branched rollouts can reduce this error and are essential for deterministic transitions to admit a Bellman contraction. Our analysis of the policy mismatch error also applies to Imitation Learning, where we show that GAN-type learning has an advantage over Behavioral Cloning when its discriminator is well-trained.
APA
Fan, T. & Ramadge, P. (2021). A Contraction Approach to Model-based Reinforcement Learning. Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 130:325-333. Available from https://proceedings.mlr.press/v130/fan21a.html.
