Self-Supervised Visual Planning with Temporal Skip Connections

Frederik Ebert, Chelsea Finn, Alex X. Lee, Sergey Levine
Proceedings of the 1st Annual Conference on Robot Learning, PMLR 78:344-356, 2017.

Abstract

In order to autonomously learn wide repertoires of complex skills, robots must be able to learn from their own autonomously collected data, without human supervision. One learning signal that is always available for autonomously collected data is prediction. If a robot can learn to predict the future, it can use this predictive model to take actions to produce desired outcomes, such as moving an object to a particular location. However, in complex open-world scenarios, designing a representation for prediction is difficult. In this work, we instead aim to enable self-supervised robot learning through direct video prediction: instead of attempting to design a good representation, we directly predict what the robot will see next, and then use this model to achieve desired goals. A key challenge in video prediction for robotic manipulation is handling complex spatial arrangements such as occlusions. To that end, we introduce a video prediction model that can keep track of objects through occlusion by incorporating temporal skip-connections. Together with a novel planning criterion and action space formulation, we demonstrate that this model substantially outperforms prior work on video prediction-based control. Our results show manipulation of objects not seen during training, handling multiple objects, and pushing objects around obstructions. These results represent a significant advance in the range and complexity of skills that can be performed entirely with self-supervised robot learning.
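
To make the planning step concrete, below is a minimal illustrative sketch (not the authors' implementation) of sampling-based visual model-predictive control with a learned video prediction model. The model interface, the toy predictor, the cross-entropy-method search, and the cost of moving a user-designated pixel toward a goal pixel are all assumptions for illustration; the paper's actual planning criterion and action space formulation differ.

    import numpy as np

    class ToyPixelPredictor:
        """Hypothetical stand-in for a learned video prediction model: the
        designated pixel is simply displaced by the (dx, dy) components of
        each action. A real model would roll out predicted frames instead."""
        def predict_designated_pixel(self, context_frames, pixel, actions):
            return np.asarray(pixel, dtype=float) + actions[:, :2].sum(axis=0)

    def plan_first_action(model, context_frames, start_pixel, goal_pixel,
                          horizon=15, action_dim=4, n_samples=200,
                          n_elite=20, n_iters=3, seed=0):
        """Cross-entropy method over open-loop action sequences; the cost is
        the predicted distance of the designated pixel from the goal pixel."""
        rng = np.random.default_rng(seed)
        mean = np.zeros((horizon, action_dim))
        std = np.ones((horizon, action_dim))
        for _ in range(n_iters):
            # Sample candidate action sequences around the current mean.
            samples = mean + std * rng.standard_normal((n_samples, horizon, action_dim))
            costs = np.array([
                np.linalg.norm(
                    model.predict_designated_pixel(context_frames, start_pixel, a)
                    - np.asarray(goal_pixel, dtype=float))
                for a in samples])
            # Refit the sampling distribution to the lowest-cost sequences.
            elite = samples[np.argsort(costs)[:n_elite]]
            mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
        return mean[0]  # execute this action, observe, and replan (MPC)

    # Usage: plan an action that moves the designated pixel (20, 30) toward (44, 10).
    action = plan_first_action(ToyPixelPredictor(), context_frames=None,
                               start_pixel=(20, 30), goal_pixel=(44, 10))

In a closed-loop setting, only the first planned action is executed before new observations are gathered and the search is rerun, which is what makes the video prediction model usable for control despite imperfect long-horizon predictions.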

Cite this Paper


BibTeX
@InProceedings{pmlr-v78-frederik-ebert17a,
  title = {Self-Supervised Visual Planning with Temporal Skip Connections},
  author = {Ebert, Frederik and Finn, Chelsea and Lee, Alex X. and Levine, Sergey},
  booktitle = {Proceedings of the 1st Annual Conference on Robot Learning},
  pages = {344--356},
  year = {2017},
  editor = {Levine, Sergey and Vanhoucke, Vincent and Goldberg, Ken},
  volume = {78},
  series = {Proceedings of Machine Learning Research},
  month = {13--15 Nov},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v78/frederik-ebert17a/frederik-ebert17a.pdf},
  url = {https://proceedings.mlr.press/v78/frederik-ebert17a.html},
  abstract = {In order to autonomously learn wide repertoires of complex skills, robots must be able to learn from their own autonomously collected data, without human supervision. One learning signal that is always available for autonomously collected data is prediction. If a robot can learn to predict the future, it can use this predictive model to take actions to produce desired outcomes, such as moving an object to a particular location. However, in complex open-world scenarios, designing a representation for prediction is difficult. In this work, we instead aim to enable self-supervised robot learning through direct video prediction: instead of attempting to design a good representation, we directly predict what the robot will see next, and then use this model to achieve desired goals. A key challenge in video prediction for robotic manipulation is handling complex spatial arrangements such as occlusions. To that end, we introduce a video prediction model that can keep track of objects through occlusion by incorporating temporal skip-connections. Together with a novel planning criterion and action space formulation, we demonstrate that this model substantially outperforms prior work on video prediction-based control. Our results show manipulation of objects not seen during training, handling multiple objects, and pushing objects around obstructions. These results represent a significant advance in the range and complexity of skills that can be performed entirely with self-supervised robot learning.}
}
Endnote
%0 Conference Paper
%T Self-Supervised Visual Planning with Temporal Skip Connections
%A Frederik Ebert
%A Chelsea Finn
%A Alex X. Lee
%A Sergey Levine
%B Proceedings of the 1st Annual Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Sergey Levine
%E Vincent Vanhoucke
%E Ken Goldberg
%F pmlr-v78-frederik-ebert17a
%I PMLR
%P 344--356
%U https://proceedings.mlr.press/v78/frederik-ebert17a.html
%V 78
%X In order to autonomously learn wide repertoires of complex skills, robots must be able to learn from their own autonomously collected data, without human supervision. One learning signal that is always available for autonomously collected data is prediction. If a robot can learn to predict the future, it can use this predictive model to take actions to produce desired outcomes, such as moving an object to a particular location. However, in complex open-world scenarios, designing a representation for prediction is difficult. In this work, we instead aim to enable self-supervised robot learning through direct video prediction: instead of attempting to design a good representation, we directly predict what the robot will see next, and then use this model to achieve desired goals. A key challenge in video prediction for robotic manipulation is handling complex spatial arrangements such as occlusions. To that end, we introduce a video prediction model that can keep track of objects through occlusion by incorporating temporal skip-connections. Together with a novel planning criterion and action space formulation, we demonstrate that this model substantially outperforms prior work on video prediction-based control. Our results show manipulation of objects not seen during training, handling multiple objects, and pushing objects around obstructions. These results represent a significant advance in the range and complexity of skills that can be performed entirely with self-supervised robot learning.
APA
Ebert, F., Finn, C., Lee, A.X. & Levine, S. (2017). Self-Supervised Visual Planning with Temporal Skip Connections. Proceedings of the 1st Annual Conference on Robot Learning, in Proceedings of Machine Learning Research 78:344-356. Available from https://proceedings.mlr.press/v78/frederik-ebert17a.html.
