Lipschitz Continuity in Model-based Reinforcement Learning

Kavosh Asadi, Dipendra Misra, Michael Littman
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:264-273, 2018.

Abstract

We examine the impact of learning Lipschitz continuous models in the context of model-based reinforcement learning. We provide a novel bound on multi-step prediction error of Lipschitz models where we quantify the error using the Wasserstein metric. We go on to prove an error bound for the value-function estimate arising from Lipschitz models and show that the estimated value function is itself Lipschitz. We conclude with empirical results that show the benefits of controlling the Lipschitz constant of neural-network models.
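The abstract's final point, controlling the Lipschitz constant of a neural-network model, can be illustrated with a minimal sketch. For a fully connected ReLU network, the product of the per-layer spectral norms upper-bounds the network's Lipschitz constant, so rescaling the weights caps that bound. All names, shapes, and the rescaling scheme below are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical two-layer ReLU network weights (shapes chosen arbitrarily).
W1 = rng.standard_normal((16, 4))
W2 = rng.standard_normal((1, 16))

def spectral_norm(W):
    # The spectral norm is the largest singular value of W.
    return np.linalg.svd(W, compute_uv=False)[0]

def lipschitz_upper_bound(weights):
    # For a ReLU network, the product of per-layer spectral norms
    # upper-bounds the Lipschitz constant (ReLU itself is 1-Lipschitz).
    bound = 1.0
    for W in weights:
        bound *= spectral_norm(W)
    return bound

def rescale_to_target(weights, target):
    # Uniformly rescale every layer so the product bound equals `target`.
    n = len(weights)
    current = lipschitz_upper_bound(weights)
    scale = (target / current) ** (1.0 / n)
    return [W * scale for W in weights]

K = lipschitz_upper_bound([W1, W2])
capped = rescale_to_target([W1, W2], target=1.0)
```

After rescaling, `lipschitz_upper_bound(capped)` equals the target by construction, giving a model whose smoothness can be dialed in directly.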

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-asadi18a,
  title     = {{L}ipschitz Continuity in Model-based Reinforcement Learning},
  author    = {Asadi, Kavosh and Misra, Dipendra and Littman, Michael},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {264--273},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/asadi18a/asadi18a.pdf},
  url       = {https://proceedings.mlr.press/v80/asadi18a.html},
  abstract  = {We examine the impact of learning Lipschitz continuous models in the context of model-based reinforcement learning. We provide a novel bound on multi-step prediction error of Lipschitz models where we quantify the error using the Wasserstein metric. We go on to prove an error bound for the value-function estimate arising from Lipschitz models and show that the estimated value function is itself Lipschitz. We conclude with empirical results that show the benefits of controlling the Lipschitz constant of neural-network models.}
}
Endnote
%0 Conference Paper
%T Lipschitz Continuity in Model-based Reinforcement Learning
%A Kavosh Asadi
%A Dipendra Misra
%A Michael Littman
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-asadi18a
%I PMLR
%P 264--273
%U https://proceedings.mlr.press/v80/asadi18a.html
%V 80
%X We examine the impact of learning Lipschitz continuous models in the context of model-based reinforcement learning. We provide a novel bound on multi-step prediction error of Lipschitz models where we quantify the error using the Wasserstein metric. We go on to prove an error bound for the value-function estimate arising from Lipschitz models and show that the estimated value function is itself Lipschitz. We conclude with empirical results that show the benefits of controlling the Lipschitz constant of neural-network models.
APA
Asadi, K., Misra, D. & Littman, M. (2018). Lipschitz Continuity in Model-based Reinforcement Learning. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:264-273. Available from https://proceedings.mlr.press/v80/asadi18a.html.