Robot Learning with Sensorimotor Pre-training

Ilija Radosavovic, Baifeng Shi, Letian Fu, Ken Goldberg, Trevor Darrell, Jitendra Malik
Proceedings of The 7th Conference on Robot Learning, PMLR 229:683-693, 2023.

Abstract

We present a self-supervised sensorimotor pre-training approach for robotics. Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens. Given a sequence of camera images, proprioceptive robot states, and actions, we encode the sequence into tokens, mask out a subset, and train a model to predict the missing content from the rest. We hypothesize that if a robot can predict the masked-out content it will have acquired a good model of the physical world that can enable it to act. RPT is designed to operate on latent visual representations which makes prediction tractable, enables scaling to larger models, and allows fast inference on a real robot. To evaluate our approach, we collected a dataset of 20,000 real-world trajectories over 9 months using a combination of motion planning and grasping algorithms. We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
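The masking scheme described in the abstract can be illustrated with a minimal sketch. This is a hypothetical toy implementation (names and the zero-placeholder choice are assumptions, not taken from the paper): a sequence of sensorimotor tokens is built, a random subset is masked out, and the original values at the masked positions become the prediction targets that a model like RPT would be trained to recover.

```python
import numpy as np

def mask_sensorimotor_tokens(tokens, mask_ratio=0.5, rng=None):
    """Randomly mask a subset of sensorimotor tokens (illustrative sketch).

    tokens: (T, D) array, one row per token (e.g. an image latent,
    proprioceptive state, or action at some timestep). Returns the
    masked input sequence, a boolean mask (True = masked), and the
    original values at the masked slots, which serve as prediction
    targets for the model.
    """
    rng = rng or np.random.default_rng(0)
    T = tokens.shape[0]
    n_mask = int(round(mask_ratio * T))
    masked_idx = rng.choice(T, size=n_mask, replace=False)
    mask = np.zeros(T, dtype=bool)
    mask[masked_idx] = True
    inputs = tokens.copy()
    inputs[mask] = 0.0          # placeholder for masked tokens (assumption)
    targets = tokens[mask]      # model predicts these from the visible rest
    return inputs, mask, targets

# Toy sequence: 6 timesteps x 3 token types (latent, proprio, action),
# flattened to 18 one-dimensional tokens.
seq = np.arange(18, dtype=float).reshape(18, 1)
inputs, mask, targets = mask_sensorimotor_tokens(seq, mask_ratio=0.5)
```

The actual RPT model operates on latent visual representations and uses its own tokenization and masking details; this sketch only conveys the masked-prediction training signal.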

Cite this Paper

BibTeX
@InProceedings{pmlr-v229-radosavovic23a,
  title     = {Robot Learning with Sensorimotor Pre-training},
  author    = {Radosavovic, Ilija and Shi, Baifeng and Fu, Letian and Goldberg, Ken and Darrell, Trevor and Malik, Jitendra},
  booktitle = {Proceedings of The 7th Conference on Robot Learning},
  pages     = {683--693},
  year      = {2023},
  editor    = {Tan, Jie and Toussaint, Marc and Darvish, Kourosh},
  volume    = {229},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v229/radosavovic23a/radosavovic23a.pdf},
  url       = {https://proceedings.mlr.press/v229/radosavovic23a.html},
  abstract  = {We present a self-supervised sensorimotor pre-training approach for robotics. Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens. Given a sequence of camera images, proprioceptive robot states, and actions, we encode the sequence into tokens, mask out a subset, and train a model to predict the missing content from the rest. We hypothesize that if a robot can predict the masked-out content it will have acquired a good model of the physical world that can enable it to act. RPT is designed to operate on latent visual representations which makes prediction tractable, enables scaling to larger models, and allows fast inference on a real robot. To evaluate our approach, we collected a dataset of 20,000 real-world trajectories over 9 months using a combination of motion planning and grasping algorithms. We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.}
}
Endnote
%0 Conference Paper
%T Robot Learning with Sensorimotor Pre-training
%A Ilija Radosavovic
%A Baifeng Shi
%A Letian Fu
%A Ken Goldberg
%A Trevor Darrell
%A Jitendra Malik
%B Proceedings of The 7th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Jie Tan
%E Marc Toussaint
%E Kourosh Darvish
%F pmlr-v229-radosavovic23a
%I PMLR
%P 683--693
%U https://proceedings.mlr.press/v229/radosavovic23a.html
%V 229
%X We present a self-supervised sensorimotor pre-training approach for robotics. Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens. Given a sequence of camera images, proprioceptive robot states, and actions, we encode the sequence into tokens, mask out a subset, and train a model to predict the missing content from the rest. We hypothesize that if a robot can predict the masked-out content it will have acquired a good model of the physical world that can enable it to act. RPT is designed to operate on latent visual representations which makes prediction tractable, enables scaling to larger models, and allows fast inference on a real robot. To evaluate our approach, we collected a dataset of 20,000 real-world trajectories over 9 months using a combination of motion planning and grasping algorithms. We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
APA
Radosavovic, I., Shi, B., Fu, L., Goldberg, K., Darrell, T. & Malik, J. (2023). Robot Learning with Sensorimotor Pre-training. Proceedings of The 7th Conference on Robot Learning, in Proceedings of Machine Learning Research 229:683-693. Available from https://proceedings.mlr.press/v229/radosavovic23a.html.

Related Material