PicoPose: Progressive Pixel-to-Pixel Correspondence Learning for Novel Object Pose Estimation

Lihua Liu, Jiehong Lin, ZhenXin Liu, Kui Jia
Proceedings of The 9th Conference on Robot Learning, PMLR 305:4295-4312, 2025.

Abstract

RGB-based novel object pose estimation is critical for rapid deployment in robotic applications, yet zero-shot generalization remains a key challenge. In this paper, we introduce PicoPose, a novel framework designed to tackle this task using a three-stage pixel-to-pixel correspondence learning process. Firstly, PicoPose matches features from the RGB observation with those from rendered object templates, identifying the best-matched template and establishing coarse correspondences. Secondly, PicoPose smooths the correspondences by globally regressing a 2D affine transformation, including in-plane rotation, scale, and 2D translation, from the coarse correspondence map. Thirdly, PicoPose applies the affine transformation to the feature map of the best-matched template and learns correspondence offsets within local regions to achieve fine-grained correspondences. By progressively refining the correspondences, PicoPose significantly improves the accuracy of object poses computed via PnP/RANSAC. PicoPose achieves state-of-the-art performance on the seven core datasets of the BOP benchmark, demonstrating exceptional generalization to novel objects. Our code and models will be made publicly available.
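The first stage described above (matching observation features against rendered templates to pick the best template and build coarse correspondences) can be sketched with cosine-similarity feature matching. This is an illustrative reconstruction, not the authors' implementation: the function name, shapes, and scoring rule are assumptions.

```python
import numpy as np

def coarse_correspondences(obs_feat, template_feats):
    """Stage-1 sketch: match observation features against rendered
    template feature maps; return the index of the best-matched
    template and a coarse per-pixel correspondence map."""
    def norm_flat(f):
        # Flatten spatial dims to (H*W, C) and L2-normalise rows
        # so dot products become cosine similarities.
        x = f.reshape(-1, f.shape[-1]).astype(np.float64)
        return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)

    obs = norm_flat(obs_feat)                      # (N_obs, C)
    best_idx, best_score, best_match = -1, -np.inf, None
    for i, tmpl in enumerate(template_feats):
        t = norm_flat(tmpl)                        # (N_tmpl, C)
        sim = obs @ t.T                            # cosine-similarity matrix
        match = sim.argmax(axis=1)                 # coarse correspondence per obs pixel
        score = sim.max(axis=1).mean()             # template-level matching score
        if score > best_score:
            best_idx, best_score, best_match = i, score, match
    return best_idx, best_match
```

In the paper's pipeline, these coarse correspondences are then smoothed by regressing a global 2D affine transformation (stage two) and refined with local correspondence offsets (stage three) before PnP/RANSAC recovers the pose.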

Cite this Paper


BibTeX
@InProceedings{pmlr-v305-liu25g,
  title     = {PicoPose: Progressive Pixel-to-Pixel Correspondence Learning for Novel Object Pose Estimation},
  author    = {Liu, Lihua and Lin, Jiehong and Liu, ZhenXin and Jia, Kui},
  booktitle = {Proceedings of The 9th Conference on Robot Learning},
  pages     = {4295--4312},
  year      = {2025},
  editor    = {Lim, Joseph and Song, Shuran and Park, Hae-Won},
  volume    = {305},
  series    = {Proceedings of Machine Learning Research},
  month     = {27--30 Sep},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v305/main/assets/liu25g/liu25g.pdf},
  url       = {https://proceedings.mlr.press/v305/liu25g.html},
  abstract  = {RGB-based novel object pose estimation is critical for rapid deployment in robotic applications, yet zero-shot generalization remains a key challenge. In this paper, we introduce PicoPose, a novel framework designed to tackle this task using a three-stage pixel-to-pixel correspondence learning process. Firstly, PicoPose matches features from the RGB observation with those from rendered object templates, identifying the best-matched template and establishing coarse correspondences. Secondly, PicoPose smooths the correspondences by globally regressing a 2D affine transformation, including in-plane rotation, scale, and 2D translation, from the coarse correspondence map. Thirdly, PicoPose applies the affine transformation to the feature map of the best-matched template and learns correspondence offsets within local regions to achieve fine-grained correspondences. By progressively refining the correspondences, PicoPose significantly improves the accuracy of object poses computed via PnP/RANSAC. PicoPose achieves state-of-the-art performance on the seven core datasets of the BOP benchmark, demonstrating exceptional generalization to novel objects. Our code and models will be made publicly available.}
}
Endnote
%0 Conference Paper
%T PicoPose: Progressive Pixel-to-Pixel Correspondence Learning for Novel Object Pose Estimation
%A Lihua Liu
%A Jiehong Lin
%A ZhenXin Liu
%A Kui Jia
%B Proceedings of The 9th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Joseph Lim
%E Shuran Song
%E Hae-Won Park
%F pmlr-v305-liu25g
%I PMLR
%P 4295--4312
%U https://proceedings.mlr.press/v305/liu25g.html
%V 305
%X RGB-based novel object pose estimation is critical for rapid deployment in robotic applications, yet zero-shot generalization remains a key challenge. In this paper, we introduce PicoPose, a novel framework designed to tackle this task using a three-stage pixel-to-pixel correspondence learning process. Firstly, PicoPose matches features from the RGB observation with those from rendered object templates, identifying the best-matched template and establishing coarse correspondences. Secondly, PicoPose smooths the correspondences by globally regressing a 2D affine transformation, including in-plane rotation, scale, and 2D translation, from the coarse correspondence map. Thirdly, PicoPose applies the affine transformation to the feature map of the best-matched template and learns correspondence offsets within local regions to achieve fine-grained correspondences. By progressively refining the correspondences, PicoPose significantly improves the accuracy of object poses computed via PnP/RANSAC. PicoPose achieves state-of-the-art performance on the seven core datasets of the BOP benchmark, demonstrating exceptional generalization to novel objects. Our code and models will be made publicly available.
APA
Liu, L., Lin, J., Liu, Z. & Jia, K. (2025). PicoPose: Progressive Pixel-to-Pixel Correspondence Learning for Novel Object Pose Estimation. Proceedings of The 9th Conference on Robot Learning, in Proceedings of Machine Learning Research 305:4295-4312. Available from https://proceedings.mlr.press/v305/liu25g.html.