Unsupervised Learning of Visual 3D Keypoints for Control

Boyuan Chen, Pieter Abbeel, Deepak Pathak
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:1539-1549, 2021.

Abstract

Learning sensorimotor control policies from high-dimensional images crucially relies on the quality of the underlying visual representations. Prior works show that structured latent spaces such as visual keypoints often outperform unstructured representations for robotic control. However, most of these representations, whether structured or unstructured, are learned in a 2D space even though the control tasks are usually performed in a 3D environment. In this work, we propose a framework to learn such a 3D geometric structure directly from images in an end-to-end unsupervised manner. The input images are embedded into latent 3D keypoints via a differentiable encoder which is trained to optimize both a multi-view consistency loss and the downstream task objective. These discovered 3D keypoints tend to meaningfully capture robot joints as well as object movements in a consistent manner across both time and 3D space. The proposed approach outperforms prior state-of-the-art methods across a variety of reinforcement learning benchmarks. Code and videos at https://buoyancy99.github.io/unsup-3d-keypoints/.
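To make the multi-view consistency idea concrete, below is a minimal sketch of a loss of the kind the abstract describes: keypoints predicted in each camera frame are mapped into a shared world frame, and each view's deviation from the cross-view consensus is penalized. This is not the authors' implementation; the function name, tensor shapes, and consensus-averaging scheme are assumptions made for illustration.

    # A minimal sketch (assumed, not the paper's code) of a multi-view
    # consistency loss over latent 3D keypoints.
    import torch

    def multiview_consistency_loss(keypoints_cam: torch.Tensor,
                                   cam_to_world: torch.Tensor) -> torch.Tensor:
        """keypoints_cam: (V, K, 3) keypoints predicted in each of V camera frames.
        cam_to_world:  (V, 4, 4) homogeneous camera-to-world transforms."""
        # Lift to homogeneous coordinates and map every view into the world frame.
        ones = torch.ones_like(keypoints_cam[..., :1])
        homog = torch.cat([keypoints_cam, ones], dim=-1)                   # (V, K, 4)
        world = torch.einsum('vij,vkj->vki', cam_to_world, homog)[..., :3]
        # Penalize each view's squared deviation from the cross-view consensus.
        consensus = world.mean(dim=0, keepdim=True)                        # (1, K, 3)
        return ((world - consensus) ** 2).sum(dim=-1).mean()

Per the abstract, a term of this form is optimized jointly with the downstream reinforcement learning objective, so the discovered keypoints stay both geometrically consistent across views and relevant to the control task.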

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-chen21b,
  title     = {Unsupervised Learning of Visual 3D Keypoints for Control},
  author    = {Chen, Boyuan and Abbeel, Pieter and Pathak, Deepak},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {1539--1549},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/chen21b/chen21b.pdf},
  url       = {https://proceedings.mlr.press/v139/chen21b.html},
  abstract  = {Learning sensorimotor control policies from high-dimensional images crucially relies on the quality of the underlying visual representations. Prior works show that structured latent spaces such as visual keypoints often outperform unstructured representations for robotic control. However, most of these representations, whether structured or unstructured, are learned in a 2D space even though the control tasks are usually performed in a 3D environment. In this work, we propose a framework to learn such a 3D geometric structure directly from images in an end-to-end unsupervised manner. The input images are embedded into latent 3D keypoints via a differentiable encoder which is trained to optimize both a multi-view consistency loss and the downstream task objective. These discovered 3D keypoints tend to meaningfully capture robot joints as well as object movements in a consistent manner across both time and 3D space. The proposed approach outperforms prior state-of-the-art methods across a variety of reinforcement learning benchmarks. Code and videos at https://buoyancy99.github.io/unsup-3d-keypoints/.}
}
Endnote
%0 Conference Paper
%T Unsupervised Learning of Visual 3D Keypoints for Control
%A Boyuan Chen
%A Pieter Abbeel
%A Deepak Pathak
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-chen21b
%I PMLR
%P 1539--1549
%U https://proceedings.mlr.press/v139/chen21b.html
%V 139
%X Learning sensorimotor control policies from high-dimensional images crucially relies on the quality of the underlying visual representations. Prior works show that structured latent spaces such as visual keypoints often outperform unstructured representations for robotic control. However, most of these representations, whether structured or unstructured, are learned in a 2D space even though the control tasks are usually performed in a 3D environment. In this work, we propose a framework to learn such a 3D geometric structure directly from images in an end-to-end unsupervised manner. The input images are embedded into latent 3D keypoints via a differentiable encoder which is trained to optimize both a multi-view consistency loss and the downstream task objective. These discovered 3D keypoints tend to meaningfully capture robot joints as well as object movements in a consistent manner across both time and 3D space. The proposed approach outperforms prior state-of-the-art methods across a variety of reinforcement learning benchmarks. Code and videos at https://buoyancy99.github.io/unsup-3d-keypoints/.
APA
Chen, B., Abbeel, P. & Pathak, D. (2021). Unsupervised Learning of Visual 3D Keypoints for Control. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:1539-1549. Available from https://proceedings.mlr.press/v139/chen21b.html.