An Unbiased Look at Datasets for Visuo-Motor Pre-Training

Sudeep Dasari, Mohan Kumar Srirama, Unnat Jain, Abhinav Gupta
Proceedings of The 7th Conference on Robot Learning, PMLR 229:1183-1198, 2023.

Abstract

Visual representation learning holds great promise for robotics, but is severely hampered by the scarcity and homogeneity of robotics datasets. Recent works address this problem by pre-training visual representations on large-scale but out-of-domain data (e.g., videos of egocentric interactions) and then transferring them to target robotics tasks. While the field is heavily focused on developing better pre-training algorithms, we find that dataset choice is just as important to this paradigm’s success. After all, the representation can only learn the structures or priors present in the pre-training dataset. To this end, we flip the focus on algorithms, and instead conduct a dataset-centric analysis of robotic pre-training. Our findings call into question some common wisdom in the field. We observe that traditional vision datasets (like ImageNet, Kinetics and 100 Days of Hands) are surprisingly competitive options for visuo-motor representation learning, and that the pre-training dataset’s image distribution matters more than its size. Finally, we show that common simulation benchmarks are not a reliable proxy for real-world performance and that simple regularization strategies can dramatically improve real-world policy learning.

Cite this Paper


BibTeX
@InProceedings{pmlr-v229-dasari23a,
  title     = {An Unbiased Look at Datasets for Visuo-Motor Pre-Training},
  author    = {Dasari, Sudeep and Srirama, Mohan Kumar and Jain, Unnat and Gupta, Abhinav},
  booktitle = {Proceedings of The 7th Conference on Robot Learning},
  pages     = {1183--1198},
  year      = {2023},
  editor    = {Tan, Jie and Toussaint, Marc and Darvish, Kourosh},
  volume    = {229},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v229/dasari23a/dasari23a.pdf},
  url       = {https://proceedings.mlr.press/v229/dasari23a.html},
  abstract  = {Visual representation learning holds great promise for robotics, but is severely hampered by the scarcity and homogeneity of robotics datasets. Recent works address this problem by pre-training visual representations on large-scale but out-of-domain data (e.g., videos of egocentric interactions) and then transferring them to target robotics tasks. While the field is heavily focused on developing better pre-training algorithms, we find that dataset choice is just as important to this paradigm’s success. After all, the representation can only learn the structures or priors present in the pre-training dataset. To this end, we flip the focus on algorithms, and instead conduct a dataset-centric analysis of robotic pre-training. Our findings call into question some common wisdom in the field. We observe that traditional vision datasets (like ImageNet, Kinetics and 100 Days of Hands) are surprisingly competitive options for visuo-motor representation learning, and that the pre-training dataset’s image distribution matters more than its size. Finally, we show that common simulation benchmarks are not a reliable proxy for real-world performance and that simple regularization strategies can dramatically improve real-world policy learning.}
}
Endnote
%0 Conference Paper
%T An Unbiased Look at Datasets for Visuo-Motor Pre-Training
%A Sudeep Dasari
%A Mohan Kumar Srirama
%A Unnat Jain
%A Abhinav Gupta
%B Proceedings of The 7th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Jie Tan
%E Marc Toussaint
%E Kourosh Darvish
%F pmlr-v229-dasari23a
%I PMLR
%P 1183--1198
%U https://proceedings.mlr.press/v229/dasari23a.html
%V 229
%X Visual representation learning holds great promise for robotics, but is severely hampered by the scarcity and homogeneity of robotics datasets. Recent works address this problem by pre-training visual representations on large-scale but out-of-domain data (e.g., videos of egocentric interactions) and then transferring them to target robotics tasks. While the field is heavily focused on developing better pre-training algorithms, we find that dataset choice is just as important to this paradigm’s success. After all, the representation can only learn the structures or priors present in the pre-training dataset. To this end, we flip the focus on algorithms, and instead conduct a dataset-centric analysis of robotic pre-training. Our findings call into question some common wisdom in the field. We observe that traditional vision datasets (like ImageNet, Kinetics and 100 Days of Hands) are surprisingly competitive options for visuo-motor representation learning, and that the pre-training dataset’s image distribution matters more than its size. Finally, we show that common simulation benchmarks are not a reliable proxy for real-world performance and that simple regularization strategies can dramatically improve real-world policy learning.
APA
Dasari, S., Srirama, M. K., Jain, U. &amp; Gupta, A. (2023). An Unbiased Look at Datasets for Visuo-Motor Pre-Training. Proceedings of The 7th Conference on Robot Learning, in Proceedings of Machine Learning Research 229:1183-1198. Available from https://proceedings.mlr.press/v229/dasari23a.html.