SOLAR: Deep Structured Representations for Model-Based Reinforcement Learning

Marvin Zhang, Sharad Vikram, Laura Smith, Pieter Abbeel, Matthew Johnson, Sergey Levine
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:7444-7453, 2019.

Abstract

Model-based reinforcement learning (RL) has proven to be a data efficient approach for learning control tasks but is difficult to utilize in domains with complex observations such as images. In this paper, we present a method for learning representations that are suitable for iterative model-based policy improvement, even when the underlying dynamical system has complex dynamics and image observations, in that these representations are optimized for inferring simple dynamics and cost models given data from the current policy. This enables a model-based RL method based on the linear-quadratic regulator (LQR) to be used for systems with image observations. We evaluate our approach on a range of robotics tasks, including manipulation with a real-world robotic arm directly from images. We find that our method produces substantially better final performance than other model-based RL methods while being significantly more efficient than model-free RL.
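The core planner the abstract refers to, the linear-quadratic regulator, computes time-varying feedback gains by a backward Riccati recursion over (locally) linear dynamics and quadratic costs. A minimal sketch of that backward pass is below; the double-integrator system and cost matrices are illustrative placeholders, not the paper's learned latent models.

```python
import numpy as np

def lqr_backward_pass(A, B, Q, R, horizon):
    """Finite-horizon discrete-time LQR.

    Returns time-varying gains K_t minimizing
        sum_t x_t^T Q x_t + u_t^T R u_t
    subject to x_{t+1} = A x_t + B u_t, with control u_t = -K_t x_t.
    """
    P = Q.copy()  # terminal cost-to-go matrix
    gains = []
    for _ in range(horizon):
        # Riccati recursion: K = (R + B^T P B)^{-1} B^T P A
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # Cost-to-go update (simplified form valid at the optimal K)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]  # ordered from t = 0 to horizon - 1

# Toy double-integrator: state = (position, velocity), input = force.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.eye(2)
R = np.array([[0.1]])
K0 = lqr_backward_pass(A, B, Q, R, horizon=100)[0]
```

In a SOLAR-style pipeline, `A`, `B`, `Q`, and `R` would instead come from the dynamics and cost models inferred in the learned latent space, re-fit after each batch of data from the current policy.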

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-zhang19m,
  title     = {{SOLAR}: Deep Structured Representations for Model-Based Reinforcement Learning},
  author    = {Zhang, Marvin and Vikram, Sharad and Smith, Laura and Abbeel, Pieter and Johnson, Matthew and Levine, Sergey},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {7444--7453},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/zhang19m/zhang19m.pdf},
  url       = {https://proceedings.mlr.press/v97/zhang19m.html},
  abstract  = {Model-based reinforcement learning (RL) has proven to be a data efficient approach for learning control tasks but is difficult to utilize in domains with complex observations such as images. In this paper, we present a method for learning representations that are suitable for iterative model-based policy improvement, even when the underlying dynamical system has complex dynamics and image observations, in that these representations are optimized for inferring simple dynamics and cost models given data from the current policy. This enables a model-based RL method based on the linear-quadratic regulator (LQR) to be used for systems with image observations. We evaluate our approach on a range of robotics tasks, including manipulation with a real-world robotic arm directly from images. We find that our method produces substantially better final performance than other model-based RL methods while being significantly more efficient than model-free RL.}
}
Endnote
%0 Conference Paper
%T SOLAR: Deep Structured Representations for Model-Based Reinforcement Learning
%A Marvin Zhang
%A Sharad Vikram
%A Laura Smith
%A Pieter Abbeel
%A Matthew Johnson
%A Sergey Levine
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-zhang19m
%I PMLR
%P 7444--7453
%U https://proceedings.mlr.press/v97/zhang19m.html
%V 97
%X Model-based reinforcement learning (RL) has proven to be a data efficient approach for learning control tasks but is difficult to utilize in domains with complex observations such as images. In this paper, we present a method for learning representations that are suitable for iterative model-based policy improvement, even when the underlying dynamical system has complex dynamics and image observations, in that these representations are optimized for inferring simple dynamics and cost models given data from the current policy. This enables a model-based RL method based on the linear-quadratic regulator (LQR) to be used for systems with image observations. We evaluate our approach on a range of robotics tasks, including manipulation with a real-world robotic arm directly from images. We find that our method produces substantially better final performance than other model-based RL methods while being significantly more efficient than model-free RL.
APA
Zhang, M., Vikram, S., Smith, L., Abbeel, P., Johnson, M., & Levine, S. (2019). SOLAR: Deep structured representations for model-based reinforcement learning. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research, 97:7444-7453. Available from https://proceedings.mlr.press/v97/zhang19m.html.