Generative Category-Level Shape and Pose Estimation with Semantic Primitives

Guanglin Li, Yifeng Li, Zhichao Ye, Qihang Zhang, Tao Kong, Zhaopeng Cui, Guofeng Zhang
Proceedings of The 6th Conference on Robot Learning, PMLR 205:1390-1400, 2023.

Abstract

Empowering autonomous agents with 3D understanding of daily objects is a grand challenge in robotics applications. When exploring an unknown environment, existing methods for object pose estimation are still not satisfactory due to the diversity of object shapes. In this paper, we propose a novel framework for category-level object shape and pose estimation from a single RGB-D image. To handle the intra-category variation, we adopt a semantic primitive representation that encodes diverse shapes into a unified latent space, which is the key to establishing reliable correspondences between observed point clouds and estimated shapes. Then, by using a SIM(3)-invariant shape descriptor, we gracefully decouple the shape and pose of an object, thus supporting latent shape optimization of target objects in arbitrary poses. Extensive experiments show that the proposed method achieves SOTA pose estimation performance and better generalization on real-world datasets. Code and video are available at https://zju3dv.github.io/gCasp.

Cite this Paper


BibTeX
@InProceedings{pmlr-v205-li23d,
  title     = {Generative Category-Level Shape and Pose Estimation with Semantic Primitives},
  author    = {Li, Guanglin and Li, Yifeng and Ye, Zhichao and Zhang, Qihang and Kong, Tao and Cui, Zhaopeng and Zhang, Guofeng},
  booktitle = {Proceedings of The 6th Conference on Robot Learning},
  pages     = {1390--1400},
  year      = {2023},
  editor    = {Liu, Karen and Kulic, Dana and Ichnowski, Jeff},
  volume    = {205},
  series    = {Proceedings of Machine Learning Research},
  month     = {14--18 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v205/li23d/li23d.pdf},
  url       = {https://proceedings.mlr.press/v205/li23d.html},
  abstract  = {Empowering autonomous agents with 3D understanding for daily objects is a grand challenge in robotics applications. When exploring in an unknown environment, existing methods for object pose estimation are still not satisfactory due to the diversity of object shapes. In this paper, we propose a novel framework for category-level object shape and pose estimation from a single RGB-D image. To handle the intra-category variation, we adopt a semantic primitive representation that encodes diverse shapes into a unified latent space, which is the key to establish reliable correspondences between observed point clouds and estimated shapes. Then, by using a SIM(3)-invariant shape descriptor, we gracefully decouple the shape and pose of an object, thus supporting latent shape optimization of target objects in arbitrary poses. Extensive experiments show that the proposed method achieves SOTA pose estimation performance and better generalization in the real-world dataset. Code and video are available at https://zju3dv.github.io/gCasp.}
}
Endnote
%0 Conference Paper
%T Generative Category-Level Shape and Pose Estimation with Semantic Primitives
%A Guanglin Li
%A Yifeng Li
%A Zhichao Ye
%A Qihang Zhang
%A Tao Kong
%A Zhaopeng Cui
%A Guofeng Zhang
%B Proceedings of The 6th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Karen Liu
%E Dana Kulic
%E Jeff Ichnowski
%F pmlr-v205-li23d
%I PMLR
%P 1390--1400
%U https://proceedings.mlr.press/v205/li23d.html
%V 205
%X Empowering autonomous agents with 3D understanding for daily objects is a grand challenge in robotics applications. When exploring in an unknown environment, existing methods for object pose estimation are still not satisfactory due to the diversity of object shapes. In this paper, we propose a novel framework for category-level object shape and pose estimation from a single RGB-D image. To handle the intra-category variation, we adopt a semantic primitive representation that encodes diverse shapes into a unified latent space, which is the key to establish reliable correspondences between observed point clouds and estimated shapes. Then, by using a SIM(3)-invariant shape descriptor, we gracefully decouple the shape and pose of an object, thus supporting latent shape optimization of target objects in arbitrary poses. Extensive experiments show that the proposed method achieves SOTA pose estimation performance and better generalization in the real-world dataset. Code and video are available at https://zju3dv.github.io/gCasp.
APA
Li, G., Li, Y., Ye, Z., Zhang, Q., Kong, T., Cui, Z. & Zhang, G. (2023). Generative Category-Level Shape and Pose Estimation with Semantic Primitives. Proceedings of The 6th Conference on Robot Learning, in Proceedings of Machine Learning Research 205:1390-1400. Available from https://proceedings.mlr.press/v205/li23d.html.