RoVi-Aug: Robot and Viewpoint Augmentation for Cross-Embodiment Robot Learning

Lawrence Yunliang Chen, Chenfeng Xu, Karthik Dharmarajan, Richard Cheng, Kurt Keutzer, Masayoshi Tomizuka, Quan Vuong, Ken Goldberg
Proceedings of The 8th Conference on Robot Learning, PMLR 270:209-233, 2025.

Abstract

Scaling up robot learning requires large and diverse datasets, and how to efficiently reuse collected data and transfer policies to new embodiments remains an open question. Emerging research such as the Open-X Embodiment (OXE) project has shown promise in leveraging skills by combining datasets that include different robots. However, imbalances in the distribution of robot types and camera angles in many datasets make policies prone to overfitting. To mitigate this issue, we propose RoVi-Aug, which leverages state-of-the-art image-to-image generative models to augment robot data by synthesizing demonstrations with different robots and camera views. Through extensive physical experiments, we show that training on robot- and viewpoint-augmented data enables zero-shot deployment on an unseen robot with significantly different camera angles. Compared to test-time adaptation algorithms such as Mirage, RoVi-Aug requires no extra processing at test time, does not assume known camera angles, and allows policy fine-tuning. Moreover, by co-training on both the original and augmented robot datasets, RoVi-Aug can learn multi-robot and multi-task policies, enabling more efficient transfer between robots and skills and improving success rates by up to 30%.
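
To make the co-training recipe above concrete, here is a minimal sketch of the data flow, assuming augmentation happens once at training time. The helpers `robot_to_robot` and `novel_view` are stand-ins for the learned image-to-image generative models (identity stubs here so the example runs), and the demo format and mixing ratio are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical stand-ins for the learned image-to-image generative models.
# In RoVi-Aug these would repaint the source robot as a target robot
# (robot augmentation) and re-render the scene from a different camera
# angle (viewpoint augmentation); identity stubs keep this sketch runnable.
def robot_to_robot(frame: np.ndarray) -> np.ndarray:
    """Hypothetical: replace the source robot with a target robot in `frame`."""
    return frame.copy()

def novel_view(frame: np.ndarray) -> np.ndarray:
    """Hypothetical: synthesize the same scene from a perturbed camera angle."""
    return frame.copy()

def augment_trajectory(frames, actions):
    """Apply robot and viewpoint augmentation to every frame of one demo."""
    aug_frames = [novel_view(robot_to_robot(f)) for f in frames]
    # Action labels are reused unchanged in this sketch.
    return aug_frames, actions

def build_cotraining_set(demos, aug_ratio=1.0):
    """Return the original demos plus augmented copies for co-training."""
    dataset = list(demos)
    n_aug = int(aug_ratio * len(demos))
    for frames, actions in demos[:n_aug]:
        dataset.append(augment_trajectory(frames, actions))
    return dataset

if __name__ == "__main__":
    # Toy data: 3 trajectories of 10 random 64x64 RGB frames with 7-DoF actions.
    demos = [([np.random.rand(64, 64, 3) for _ in range(10)],
              [np.zeros(7) for _ in range(10)]) for _ in range(3)]
    cotrain = build_cotraining_set(demos)
    print(f"{len(demos)} original demos -> {len(cotrain)} co-training demos")
```

Because the augmentation is applied when the training set is built and a standard policy is then trained on the mixture, nothing extra is required at deployment, consistent with the abstract's claim of no test-time processing.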

Cite this Paper


BibTeX
@InProceedings{pmlr-v270-chen25a,
  title     = {RoVi-Aug: Robot and Viewpoint Augmentation for Cross-Embodiment Robot Learning},
  author    = {Chen, Lawrence Yunliang and Xu, Chenfeng and Dharmarajan, Karthik and Cheng, Richard and Keutzer, Kurt and Tomizuka, Masayoshi and Vuong, Quan and Goldberg, Ken},
  booktitle = {Proceedings of The 8th Conference on Robot Learning},
  pages     = {209--233},
  year      = {2025},
  editor    = {Agrawal, Pulkit and Kroemer, Oliver and Burgard, Wolfram},
  volume    = {270},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Nov},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v270/main/assets/chen25a/chen25a.pdf},
  url       = {https://proceedings.mlr.press/v270/chen25a.html}
}
APA
Chen, L.Y., Xu, C., Dharmarajan, K., Cheng, R., Keutzer, K., Tomizuka, M., Vuong, Q. & Goldberg, K. (2025). RoVi-Aug: Robot and Viewpoint Augmentation for Cross-Embodiment Robot Learning. Proceedings of The 8th Conference on Robot Learning, in Proceedings of Machine Learning Research 270:209-233. Available from https://proceedings.mlr.press/v270/chen25a.html.