UniTac2Pose: A Unified Approach Learned in Simulation for Category-level Visuotactile In-hand Pose Estimation

Mingdong Wu, Long Yang, Jin Liu, Weiyao Huang, Lehong Wu, Zelin Chen, Daolin Ma, Hao Dong
Proceedings of The 9th Conference on Robot Learning, PMLR 305:4367-4384, 2025.

Abstract

Accurate estimation of the in-hand pose of an object based on its CAD model is crucial in both industrial applications and everyday tasks—ranging from positioning workpieces and assembling components to seamlessly inserting devices like USB connectors. While existing methods often rely on regression, feature matching, or registration techniques, achieving high precision and generalizability to unseen CAD models remains a significant challenge. In this paper, we propose a novel three-stage framework for in-hand pose estimation. The first stage involves sampling and pre-ranking pose candidates, followed by iterative refinement of these candidates in the second stage. In the final stage, post-ranking is applied to identify the most likely pose candidates. These stages are governed by a unified energy-based diffusion model, which is trained solely on simulated data. This energy model simultaneously generates gradients to refine pose estimates and produces an energy scalar that quantifies the quality of the pose estimates. Additionally, inspired by the computer vision domain, we incorporate a render-compare architecture within the energy-based score network to significantly enhance sim-to-real performance, as demonstrated by our ablation studies. Extensive experimental evaluations show that our method outperforms conventional baselines based on regression, matching, and registration techniques, while also exhibiting strong generalization to previously unseen CAD models. Moreover, our approach integrates tactile object pose estimation, pose tracking, and uncertainty estimation into a unified system, enabling robust performance across a variety of real-world conditions.
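The three-stage pipeline the abstract describes (sample and pre-rank candidates, refine them with gradients from the energy model, then post-rank) can be sketched as follows. This is a minimal illustration only: a toy quadratic energy stands in for the learned energy-based diffusion model, and the function names, pose parameterization, and hyperparameters are assumptions for the sketch, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(pose):
    # Stand-in for the learned energy model: lower energy = better pose.
    # A simple quadratic around a hypothetical "true" in-hand pose.
    true_pose = np.array([0.3, -0.2, 0.1])
    return float(np.sum((pose - true_pose) ** 2))

def energy_grad(pose, eps=1e-4):
    # Central finite-difference gradient; the real score network
    # produces this gradient directly.
    g = np.zeros_like(pose)
    for i in range(len(pose)):
        d = np.zeros_like(pose)
        d[i] = eps
        g[i] = (energy(pose + d) - energy(pose - d)) / (2 * eps)
    return g

def estimate_pose(n_candidates=64, n_keep=8, n_steps=50, lr=0.1):
    # Stage 1: sample pose candidates, pre-rank by energy, keep the best few.
    candidates = rng.uniform(-1, 1, size=(n_candidates, 3))
    order = np.argsort([energy(p) for p in candidates])
    candidates = candidates[order][:n_keep]
    # Stage 2: iteratively refine each kept candidate along the
    # negative energy gradient.
    for _ in range(n_steps):
        candidates = np.array([p - lr * energy_grad(p) for p in candidates])
    # Stage 3: post-rank the refined candidates; the energy scalar
    # doubles as an uncertainty/quality measure.
    return min(candidates, key=energy)

best = estimate_pose()
```

The same energy scalar that ranks candidates in stages 1 and 3 also supplies the refinement gradients in stage 2, which is the sense in which one unified model governs all three stages.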

Cite this Paper


BibTeX
@InProceedings{pmlr-v305-wu25d,
  title     = {UniTac2Pose: A Unified Approach Learned in Simulation for Category-level Visuotactile In-hand Pose Estimation},
  author    = {Wu, Mingdong and Yang, Long and Liu, Jin and Huang, Weiyao and Wu, Lehong and Chen, Zelin and Ma, Daolin and Dong, Hao},
  booktitle = {Proceedings of The 9th Conference on Robot Learning},
  pages     = {4367--4384},
  year      = {2025},
  editor    = {Lim, Joseph and Song, Shuran and Park, Hae-Won},
  volume    = {305},
  series    = {Proceedings of Machine Learning Research},
  month     = {27--30 Sep},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v305/main/assets/wu25d/wu25d.pdf},
  url       = {https://proceedings.mlr.press/v305/wu25d.html},
  abstract  = {Accurate estimation of the in-hand pose of an object based on its CAD model is crucial in both industrial applications and everyday tasks—ranging from positioning workpieces and assembling components to seamlessly inserting devices like USB connectors. While existing methods often rely on regression, feature matching, or registration techniques, achieving high precision and generalizability to unseen CAD models remains a significant challenge. In this paper, we propose a novel three-stage framework for in-hand pose estimation. The first stage involves sampling and pre-ranking pose candidates, followed by iterative refinement of these candidates in the second stage. In the final stage, post-ranking is applied to identify the most likely pose candidates. These stages are governed by a unified energy-based diffusion model, which is trained solely on simulated data. This energy model simultaneously generates gradients to refine pose estimates and produces an energy scalar that quantifies the quality of the pose estimates. Additionally, inspired by the computer vision domain, we incorporate a render-compare architecture within the energy-based score network to significantly enhance sim-to-real performance, as demonstrated by our ablation studies. Extensive experimental evaluations show that our method outperforms conventional baselines based on regression, matching, and registration techniques, while also exhibiting strong generalization to previously unseen CAD models. Moreover, our approach integrates tactile object pose estimation, pose tracking, and uncertainty estimation into a unified system, enabling robust performance across a variety of real-world conditions.}
}
Endnote
%0 Conference Paper
%T UniTac2Pose: A Unified Approach Learned in Simulation for Category-level Visuotactile In-hand Pose Estimation
%A Mingdong Wu
%A Long Yang
%A Jin Liu
%A Weiyao Huang
%A Lehong Wu
%A Zelin Chen
%A Daolin Ma
%A Hao Dong
%B Proceedings of The 9th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Joseph Lim
%E Shuran Song
%E Hae-Won Park
%F pmlr-v305-wu25d
%I PMLR
%P 4367--4384
%U https://proceedings.mlr.press/v305/wu25d.html
%V 305
%X Accurate estimation of the in-hand pose of an object based on its CAD model is crucial in both industrial applications and everyday tasks—ranging from positioning workpieces and assembling components to seamlessly inserting devices like USB connectors. While existing methods often rely on regression, feature matching, or registration techniques, achieving high precision and generalizability to unseen CAD models remains a significant challenge. In this paper, we propose a novel three-stage framework for in-hand pose estimation. The first stage involves sampling and pre-ranking pose candidates, followed by iterative refinement of these candidates in the second stage. In the final stage, post-ranking is applied to identify the most likely pose candidates. These stages are governed by a unified energy-based diffusion model, which is trained solely on simulated data. This energy model simultaneously generates gradients to refine pose estimates and produces an energy scalar that quantifies the quality of the pose estimates. Additionally, inspired by the computer vision domain, we incorporate a render-compare architecture within the energy-based score network to significantly enhance sim-to-real performance, as demonstrated by our ablation studies. Extensive experimental evaluations show that our method outperforms conventional baselines based on regression, matching, and registration techniques, while also exhibiting strong generalization to previously unseen CAD models. Moreover, our approach integrates tactile object pose estimation, pose tracking, and uncertainty estimation into a unified system, enabling robust performance across a variety of real-world conditions.
APA
Wu, M., Yang, L., Liu, J., Huang, W., Wu, L., Chen, Z., Ma, D. & Dong, H. (2025). UniTac2Pose: A Unified Approach Learned in Simulation for Category-level Visuotactile In-hand Pose Estimation. Proceedings of The 9th Conference on Robot Learning, in Proceedings of Machine Learning Research 305:4367-4384. Available from https://proceedings.mlr.press/v305/wu25d.html.
