Model-Predictive Control via Cross-Entropy and Gradient-Based Optimization

Homanga Bharadhwaj, Kevin Xie, Florian Shkurti
Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:277-286, 2020.

Abstract

Recent works in high-dimensional model-predictive control and model-based reinforcement learning with learned dynamics and reward models have resorted to population-based optimization methods, such as the Cross-Entropy Method (CEM), for planning a sequence of actions. To decide on an action to take, CEM conducts a search for the action sequence with the highest return according to the learned dynamics model and reward. Action sequences are typically randomly sampled from an unconditional Gaussian distribution and evaluated. This distribution is iteratively updated towards action sequences with higher returns. However, sampling and simulating unconditional action sequences can be very inefficient (especially from a diagonal Gaussian distribution and for high-dimensional action spaces). An alternative line of approaches optimizes action sequences directly via gradient descent but is prone to local optima. We propose a method to solve this planning problem by interleaving CEM and gradient descent steps in optimizing the action sequence.
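
As a rough illustration of the planning loop described in the abstract (not the authors' released implementation), the sketch below interleaves gradient ascent on the predicted return with the usual CEM sample-evaluate-refit cycle. It assumes differentiable learned dynamics and reward models in PyTorch; the names dynamics, reward, and all hyperparameters are placeholders.

# Hedged sketch: CEM planning with interleaved gradient steps on action sequences.
# Not the paper's code; `dynamics(s, a)` and `reward(s, a)` stand in for learned,
# differentiable models, and all sizes/learning rates are illustrative.
import torch

def rollout_return(actions, state, dynamics, reward):
    """Predicted return of one action sequence of shape (horizon, action_dim)."""
    total, s = 0.0, state
    for a in actions:
        s = dynamics(s, a)
        total = total + reward(s, a)
    return total

def plan(state, dynamics, reward, horizon=12, action_dim=4,
         pop_size=128, elites=16, iters=5, grad_steps=3, lr=0.01):
    mean = torch.zeros(horizon, action_dim)
    std = torch.ones(horizon, action_dim)
    for _ in range(iters):
        # CEM step: sample a population of action sequences from the current Gaussian.
        samples = (mean + std * torch.randn(pop_size, horizon, action_dim))
        samples.requires_grad_(True)
        opt = torch.optim.Adam([samples], lr=lr)
        # Interleaved gradient steps: refine every sampled sequence by ascending
        # the differentiable predicted return before selecting elites.
        for _ in range(grad_steps):
            returns = torch.stack([rollout_return(seq, state, dynamics, reward)
                                   for seq in samples])
            loss = -returns.sum()
            opt.zero_grad()
            loss.backward()
            opt.step()
        with torch.no_grad():
            returns = torch.stack([rollout_return(seq, state, dynamics, reward)
                                   for seq in samples])
            elite = samples[returns.topk(elites).indices]
            # Refit the sampling distribution to the elite action sequences.
            mean, std = elite.mean(dim=0), elite.std(dim=0) + 1e-6
    # Model-predictive control: execute only the first action of the refined mean.
    return mean[0]

The gradient steps move each sampled sequence toward a nearby local optimum of the model's return, while the CEM refit keeps a global, population-level search over the action space; this combination is the interleaving the abstract refers to.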

Cite this Paper


BibTeX
@InProceedings{pmlr-v120-bharadhwaj20a,
  title     = {Model-Predictive Control via Cross-Entropy and Gradient-Based Optimization},
  author    = {Bharadhwaj, Homanga and Xie, Kevin and Shkurti, Florian},
  booktitle = {Proceedings of the 2nd Conference on Learning for Dynamics and Control},
  pages     = {277--286},
  year      = {2020},
  editor    = {Bayen, Alexandre M. and Jadbabaie, Ali and Pappas, George and Parrilo, Pablo A. and Recht, Benjamin and Tomlin, Claire and Zeilinger, Melanie},
  volume    = {120},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--11 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v120/bharadhwaj20a/bharadhwaj20a.pdf},
  url       = {https://proceedings.mlr.press/v120/bharadhwaj20a.html}
}
Endnote
%0 Conference Paper
%T Model-Predictive Control via Cross-Entropy and Gradient-Based Optimization
%A Homanga Bharadhwaj
%A Kevin Xie
%A Florian Shkurti
%B Proceedings of the 2nd Conference on Learning for Dynamics and Control
%C Proceedings of Machine Learning Research
%D 2020
%E Alexandre M. Bayen
%E Ali Jadbabaie
%E George Pappas
%E Pablo A. Parrilo
%E Benjamin Recht
%E Claire Tomlin
%E Melanie Zeilinger
%F pmlr-v120-bharadhwaj20a
%I PMLR
%P 277--286
%U https://proceedings.mlr.press/v120/bharadhwaj20a.html
%V 120
APA
Bharadhwaj, H., Xie, K. & Shkurti, F. (2020). Model-Predictive Control via Cross-Entropy and Gradient-Based Optimization. Proceedings of the 2nd Conference on Learning for Dynamics and Control, in Proceedings of Machine Learning Research 120:277-286. Available from https://proceedings.mlr.press/v120/bharadhwaj20a.html.