Learning Predictive Representations for Deformable Objects Using Contrastive Estimation

Wilson Yan, Ashwin Vangipuram, Pieter Abbeel, Lerrel Pinto
Proceedings of the 2020 Conference on Robot Learning, PMLR 155:564-574, 2021.

Abstract

Using visual model-based learning for deformable object manipulation is challenging due to difficulties in learning plannable visual representations along with complex dynamic models. In this work, we propose a new learning framework that jointly optimizes both the visual representation model and the dynamics model using contrastive estimation. Using simulation data collected by randomly perturbing deformable objects on a table, we learn latent dynamics models for these objects in an offline fashion. Then, using the learned models, we use simple model-based planning to solve challenging deformable object manipulation tasks such as spreading ropes and cloths. Experimentally, we show substantial improvements in performance over standard model-based learning techniques across our rope and cloth manipulation suite. Finally, we transfer our visual manipulation policies trained on data purely collected in simulation to a real PR2 robot through domain randomization.
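As a rough illustration of what the abstract's "jointly optimizes both the visual representation model and the dynamics model using contrastive estimation" could look like in code, here is a minimal PyTorch sketch. It assumes an InfoNCE-style contrastive objective in which a latent forward model's prediction of the next state is matched against the encoder's embedding of the true next observation, with the other batch elements serving as negatives. All names, network sizes, and hyperparameters (Encoder, LatentDynamics, contrastive_loss, the temperature, action dimension, etc.) are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: jointly training an image encoder and a latent
# forward-dynamics model with an InfoNCE-style contrastive loss.
# Shapes, architectures, and hyperparameters are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder(nn.Module):
    """Maps an RGB observation to a latent vector z."""
    def __init__(self, z_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, 4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, z_dim),
        )

    def forward(self, obs):
        return self.net(obs)


class LatentDynamics(nn.Module):
    """Predicts the next latent from the current latent and the action."""
    def __init__(self, z_dim=128, a_dim=4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(z_dim + a_dim, 256), nn.ReLU(),
            nn.Linear(256, z_dim),
        )

    def forward(self, z, a):
        return self.mlp(torch.cat([z, a], dim=-1))


def contrastive_loss(z_pred, z_next, temperature=0.1):
    """InfoNCE-style loss: each predicted next latent should score highest
    against its own true next latent; other batch rows act as negatives."""
    z_pred = F.normalize(z_pred, dim=-1)
    z_next = F.normalize(z_next, dim=-1)
    logits = z_pred @ z_next.t() / temperature           # (B, B) similarity matrix
    labels = torch.arange(z_pred.size(0), device=z_pred.device)
    return F.cross_entropy(logits, labels)


# One joint update on a batch of (obs, action, next_obs) transitions, e.g.
# collected offline by randomly perturbing the object, as in the abstract.
encoder, dynamics = Encoder(), LatentDynamics()
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(dynamics.parameters()), lr=1e-4
)

obs = torch.randn(32, 3, 64, 64)        # placeholder batch of observations
action = torch.randn(32, 4)             # placeholder pick-and-place actions
next_obs = torch.randn(32, 3, 64, 64)   # placeholder next observations

z, z_next = encoder(obs), encoder(next_obs)
loss = contrastive_loss(dynamics(z, action), z_next)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

At test time, one would roll candidate action sequences through the learned latent dynamics and select the sequence whose predicted latent lies closest to the encoding of the goal image; the abstract describes this only as "simple model-based planning", so any specific planner (random shooting, CEM, and so on) is an assumption of this sketch.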

Cite this Paper


BibTeX
@InProceedings{pmlr-v155-yan21a,
  title     = {Learning Predictive Representations for Deformable Objects Using Contrastive Estimation},
  author    = {Yan, Wilson and Vangipuram, Ashwin and Abbeel, Pieter and Pinto, Lerrel},
  booktitle = {Proceedings of the 2020 Conference on Robot Learning},
  pages     = {564--574},
  year      = {2021},
  editor    = {Kober, Jens and Ramos, Fabio and Tomlin, Claire},
  volume    = {155},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--18 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v155/yan21a/yan21a.pdf},
  url       = {https://proceedings.mlr.press/v155/yan21a.html},
  abstract  = {Using visual model-based learning for deformable object manipulation is challenging due to difficulties in learning plannable visual representations along with complex dynamic models. In this work, we propose a new learning framework that jointly optimizes both the visual representation model and the dynamics model using contrastive estimation. Using simulation data collected by randomly perturbing deformable objects on a table, we learn latent dynamics models for these objects in an offline fashion. Then, using the learned models, we use simple model-based planning to solve challenging deformable object manipulation tasks such as spreading ropes and cloths. Experimentally, we show substantial improvements in performance over standard model-based learning techniques across our rope and cloth manipulation suite. Finally, we transfer our visual manipulation policies trained on data purely collected in simulation to a real PR2 robot through domain randomization.}
}
Endnote
%0 Conference Paper
%T Learning Predictive Representations for Deformable Objects Using Contrastive Estimation
%A Wilson Yan
%A Ashwin Vangipuram
%A Pieter Abbeel
%A Lerrel Pinto
%B Proceedings of the 2020 Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Jens Kober
%E Fabio Ramos
%E Claire Tomlin
%F pmlr-v155-yan21a
%I PMLR
%P 564--574
%U https://proceedings.mlr.press/v155/yan21a.html
%V 155
%X Using visual model-based learning for deformable object manipulation is challenging due to difficulties in learning plannable visual representations along with complex dynamic models. In this work, we propose a new learning framework that jointly optimizes both the visual representation model and the dynamics model using contrastive estimation. Using simulation data collected by randomly perturbing deformable objects on a table, we learn latent dynamics models for these objects in an offline fashion. Then, using the learned models, we use simple model-based planning to solve challenging deformable object manipulation tasks such as spreading ropes and cloths. Experimentally, we show substantial improvements in performance over standard model-based learning techniques across our rope and cloth manipulation suite. Finally, we transfer our visual manipulation policies trained on data purely collected in simulation to a real PR2 robot through domain randomization.
APA
Yan, W., Vangipuram, A., Abbeel, P., & Pinto, L. (2021). Learning Predictive Representations for Deformable Objects Using Contrastive Estimation. Proceedings of the 2020 Conference on Robot Learning, in Proceedings of Machine Learning Research 155:564-574. Available from https://proceedings.mlr.press/v155/yan21a.html.