Value Iteration in Continuous Actions, States and Time

Michael Lutter, Shie Mannor, Jan Peters, Dieter Fox, Animesh Garg
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:7224-7234, 2021.

Abstract

Classical value iteration approaches are not applicable to environments with continuous states and actions. For such environments the states and actions must be discretized, which leads to an exponential increase in computational complexity. In this paper, we propose continuous fitted value iteration (cFVI). This algorithm enables dynamic programming for continuous states and actions with a known dynamics model. Exploiting the continuous time formulation, the optimal policy can be derived for non-linear control-affine dynamics. This closed-form solution enables the efficient extension of value iteration to continuous environments. We show in non-linear control experiments that the dynamic programming solution obtains the same quantitative performance as deep reinforcement learning methods in simulation but excels when transferred to the physical system. The policy obtained by cFVI is more robust to changes in the dynamics despite using only a deterministic model and without explicitly incorporating robustness in the optimization.
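As a pointer for readers, the closed-form policy the abstract refers to arises from the continuous-time Hamilton-Jacobi-Bellman (HJB) equation under control-affine dynamics. The sketch below uses illustrative notation (a, B, q, g, R) that may differ from the paper's; it summarizes the standard control-affine argument rather than transcribing the paper's derivation.

% Continuous-time HJB equation with discount rate \rho:
\rho V^*(x) = \max_{u} \big[ r(x, u) + \nabla_x V^*(x)^\top f(x, u) \big]

% Control-affine dynamics and a separable reward with strictly convex action cost g:
f(x, u) = a(x) + B(x)\, u, \qquad r(x, u) = q(x) - g(u)

% The maximization over u then admits a closed-form solution via the convex conjugate \tilde{g}:
u^*(x) = \nabla \tilde{g}\big( B(x)^\top \nabla_x V^*(x) \big)

% For a quadratic action cost g(u) = \tfrac{1}{2} u^\top R u this reduces to the familiar form:
u^*(x) = R^{-1} B(x)^\top \nabla_x V^*(x)

Because the maximizing action is available in closed form, each value iteration update only requires evaluating the value function gradient, which is what makes the extension to continuous action spaces tractable.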

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-lutter21a,
  title     = {Value Iteration in Continuous Actions, States and Time},
  author    = {Lutter, Michael and Mannor, Shie and Peters, Jan and Fox, Dieter and Garg, Animesh},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {7224--7234},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/lutter21a/lutter21a.pdf},
  url       = {https://proceedings.mlr.press/v139/lutter21a.html},
  abstract  = {Classical value iteration approaches are not applicable to environments with continuous states and actions. For such environments the states and actions must be discretized, which leads to an exponential increase in computational complexity. In this paper, we propose continuous fitted value iteration (cFVI). This algorithm enables dynamic programming for continuous states and actions with a known dynamics model. Exploiting the continuous time formulation, the optimal policy can be derived for non-linear control-affine dynamics. This closed-form solution enables the efficient extension of value iteration to continuous environments. We show in non-linear control experiments that the dynamic programming solution obtains the same quantitative performance as deep reinforcement learning methods in simulation but excels when transferred to the physical system. The policy obtained by cFVI is more robust to changes in the dynamics despite using only a deterministic model and without explicitly incorporating robustness in the optimization.}
}
Endnote
%0 Conference Paper
%T Value Iteration in Continuous Actions, States and Time
%A Michael Lutter
%A Shie Mannor
%A Jan Peters
%A Dieter Fox
%A Animesh Garg
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-lutter21a
%I PMLR
%P 7224--7234
%U https://proceedings.mlr.press/v139/lutter21a.html
%V 139
%X Classical value iteration approaches are not applicable to environments with continuous states and actions. For such environments the states and actions must be discretized, which leads to an exponential increase in computational complexity. In this paper, we propose continuous fitted value iteration (cFVI). This algorithm enables dynamic programming for continuous states and actions with a known dynamics model. Exploiting the continuous time formulation, the optimal policy can be derived for non-linear control-affine dynamics. This closed-form solution enables the efficient extension of value iteration to continuous environments. We show in non-linear control experiments that the dynamic programming solution obtains the same quantitative performance as deep reinforcement learning methods in simulation but excels when transferred to the physical system. The policy obtained by cFVI is more robust to changes in the dynamics despite using only a deterministic model and without explicitly incorporating robustness in the optimization.
APA
Lutter, M., Mannor, S., Peters, J., Fox, D. & Garg, A. (2021). Value Iteration in Continuous Actions, States and Time. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:7224-7234. Available from https://proceedings.mlr.press/v139/lutter21a.html.