Deep Value Model Predictive Control

David Hoeller, Farbod Farshidian, Marco Hutter
Proceedings of the Conference on Robot Learning, PMLR 100:990-1004, 2020.

Abstract

In this paper, we introduce an actor-critic algorithm called Deep Value Model Predictive Control (DMPC), which combines model-based trajectory optimization with value function estimation. The DMPC actor is a Model Predictive Control (MPC) optimizer with an objective function defined in terms of a value function estimated by the critic. We show that our MPC actor is an importance sampler, which minimizes an upper bound of the cross-entropy to the state distribution of the optimal sampling policy. In our experiments with a Ballbot system, we show that our algorithm can work with sparse and binary reward signals to efficiently solve obstacle avoidance and target reaching tasks. Compared to previous work, we show that including the value function in the running cost of the trajectory optimizer speeds up the convergence. We also discuss the necessary strategies to robustify the algorithm in practice.
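
To make the structure described in the abstract concrete, below is a minimal sketch of the DMPC actor-critic loop. Everything here is an illustrative assumption, not the paper's implementation: a toy 2D double integrator stands in for the Ballbot model, a linear-in-features critic stands in for the deep value network, and naive random-shooting MPC stands in for the paper's trajectory optimizer. The sketch is phrased as reward maximization rather than cost minimization, and all names are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)

    DT, STATE_DIM, ACT_DIM = 0.1, 4, 2
    GAMMA, LR = 0.99, 1e-3
    GOAL = np.array([1.0, 1.0])

    def step(x, u):
        """Toy double-integrator dynamics (stand-in for the Ballbot model)."""
        vel = x[2:] + DT * u
        return np.concatenate([x[:2] + DT * vel, vel])

    def reward(x):
        """Sparse binary reward, as in the paper's target-reaching task."""
        return 1.0 if np.linalg.norm(x[:2] - GOAL) < 0.1 else 0.0

    def phi(x):
        """Hand-picked features; the paper learns V with a deep network."""
        return np.concatenate([x, x * x, [1.0]])

    w = np.zeros(2 * STATE_DIM + 1)  # critic weights, V(x) = phi(x)^T w

    def V(x):
        return phi(x) @ w

    def mpc_action(x0, horizon=10, samples=64):
        """Random-shooting MPC actor: the learned value enters the objective
        at every rollout step (the 'running cost'), plus a terminal term."""
        best_u, best_ret = np.zeros(ACT_DIM), -np.inf
        for _ in range(samples):
            u_seq = rng.normal(0.0, 1.0, size=(horizon, ACT_DIM))
            x, ret = x0, 0.0
            for t in range(horizon):
                x = step(x, u_seq[t])
                ret += GAMMA**t * (reward(x) + V(x))  # value in running cost
            ret += GAMMA**horizon * V(x)              # terminal value
            if ret > best_ret:
                best_ret, best_u = ret, u_seq[0]
        return best_u  # receding horizon: apply only the first action

    # Actor-critic loop: the MPC actor gathers data, the critic is fit by TD(0).
    x = np.zeros(STATE_DIM)
    for _ in range(200):
        u = mpc_action(x)
        x_next = step(x, u)
        td_err = reward(x_next) + GAMMA * V(x_next) - V(x)
        w += LR * td_err * phi(x)  # semi-gradient TD(0) update of the critic
        x = x_next

The key design point the sketch tries to capture is that the critic's value estimate appears inside the MPC rollout objective at every step, not only as a terminal cost, which is the modification the paper reports speeds up convergence.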

Cite this Paper


BibTeX
@InProceedings{pmlr-v100-hoeller20a,
  title     = {Deep Value Model Predictive Control},
  author    = {Hoeller, David and Farshidian, Farbod and Hutter, Marco},
  booktitle = {Proceedings of the Conference on Robot Learning},
  pages     = {990--1004},
  year      = {2020},
  editor    = {Kaelbling, Leslie Pack and Kragic, Danica and Sugiura, Komei},
  volume    = {100},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Oct--01 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v100/hoeller20a/hoeller20a.pdf},
  url       = {https://proceedings.mlr.press/v100/hoeller20a.html}
}
Endnote
%0 Conference Paper
%T Deep Value Model Predictive Control
%A David Hoeller
%A Farbod Farshidian
%A Marco Hutter
%B Proceedings of the Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Leslie Pack Kaelbling
%E Danica Kragic
%E Komei Sugiura
%F pmlr-v100-hoeller20a
%I PMLR
%P 990--1004
%U https://proceedings.mlr.press/v100/hoeller20a.html
%V 100
APA
Hoeller, D., Farshidian, F. & Hutter, M. (2020). Deep Value Model Predictive Control. Proceedings of the Conference on Robot Learning, in Proceedings of Machine Learning Research 100:990-1004. Available from https://proceedings.mlr.press/v100/hoeller20a.html.