The Unsurprising Effectiveness of Pre-Trained Vision Models for Control

Simone Parisi, Aravind Rajeswaran, Senthil Purushwalkam, Abhinav Gupta
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:17359-17371, 2022.

Abstract

Recent years have seen the emergence of pre-trained representations as a powerful abstraction for AI applications in computer vision, natural language, and speech. However, policy learning for control is still dominated by a tabula-rasa learning paradigm, with visuo-motor policies often trained from scratch using data from deployment environments. In this context, we revisit and study the role of pre-trained visual representations for control, and in particular representations trained on large-scale computer vision datasets. Through extensive empirical evaluation in diverse control domains (Habitat, DeepMind Control, Adroit, Franka Kitchen), we isolate and study the importance of different representation training methods, data augmentations, and feature hierarchies. Overall, we find that pre-trained visual representations can be competitive or even better than ground-truth state representations to train control policies. This is in spite of using only out-of-domain data from standard vision datasets, without any in-domain data from the deployment environments.
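The recipe the abstract studies is conceptually simple: freeze a vision model pre-trained on standard image datasets, use its features as the observation for a policy, and train only the policy on the control task. The sketch below illustrates that pipeline, assuming a PyTorch/torchvision setup with a frozen ImageNet-pretrained ResNet-50 and a small MLP policy trained by behavior cloning on (image, action) pairs; the encoder choice, feature layer, action dimension, and loss here are illustrative assumptions rather than the paper's exact experimental configuration.

import torch
import torch.nn as nn
import torchvision

# Frozen pre-trained encoder: ImageNet ResNet-50 with the classification head removed,
# so it outputs a 2048-d pooled feature vector per image (an assumption for this sketch).
backbone = torchvision.models.resnet50(weights="IMAGENET1K_V1")
encoder = nn.Sequential(*list(backbone.children())[:-1])  # drop the final fc layer
encoder.eval()
for p in encoder.parameters():
    p.requires_grad = False  # the representation stays fixed; only the policy is trained

# Small MLP policy head mapping frozen features to continuous actions.
action_dim = 7  # hypothetical action dimension for an arm-like task
policy = nn.Sequential(nn.Linear(2048, 256), nn.ReLU(), nn.Linear(256, action_dim))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def embed(images):
    # images: (B, 3, 224, 224) tensors, already normalized for ImageNet models
    with torch.no_grad():
        return encoder(images).flatten(1)  # (B, 2048)

# One behavior-cloning step on dummy data standing in for demonstration (image, action) pairs.
images = torch.randn(8, 3, 224, 224)
expert_actions = torch.randn(8, action_dim)
pred_actions = policy(embed(images))
loss = nn.functional.mse_loss(pred_actions, expert_actions)
optimizer.zero_grad()
loss.backward()
optimizer.step()

Training the policy with reinforcement learning instead of behavior cloning, or concatenating features from several layers of the encoder (the feature hierarchies the abstract refers to), fits the same structure by changing only what feeds the policy and how its loss is computed.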

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-parisi22a,
  title     = {The Unsurprising Effectiveness of Pre-Trained Vision Models for Control},
  author    = {Parisi, Simone and Rajeswaran, Aravind and Purushwalkam, Senthil and Gupta, Abhinav},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {17359--17371},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/parisi22a/parisi22a.pdf},
  url       = {https://proceedings.mlr.press/v162/parisi22a.html}
}
Endnote
%0 Conference Paper
%T The Unsurprising Effectiveness of Pre-Trained Vision Models for Control
%A Simone Parisi
%A Aravind Rajeswaran
%A Senthil Purushwalkam
%A Abhinav Gupta
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-parisi22a
%I PMLR
%P 17359--17371
%U https://proceedings.mlr.press/v162/parisi22a.html
%V 162
APA
Parisi, S., Rajeswaran, A., Purushwalkam, S., & Gupta, A. (2022). The Unsurprising Effectiveness of Pre-Trained Vision Models for Control. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:17359-17371. Available from https://proceedings.mlr.press/v162/parisi22a.html.