T²SQNet: A Recognition Model for Manipulating Partially Observed Transparent Tableware Objects

Young Hun Kim, Seungyeon Kim, Yonghyeon Lee, Frank C. Park
Proceedings of The 8th Conference on Robot Learning, PMLR 270:3622-3655, 2025.

Abstract

Recognizing and manipulating transparent tableware from partial-view RGB image observations is made challenging by the difficulty of obtaining reliable depth measurements of transparent objects. In this paper we present the Transparent Tableware SuperQuadric Network (T²SQNet), a neural network model that leverages a family of newly extended deformable superquadrics to produce low-dimensional, instance-wise, and accurate 3D geometric representations of transparent objects from partial views. As a byproduct and contribution of independent interest, we also present TablewareNet, a publicly available toolset of seven parametrized shapes based on our extended deformable superquadrics, which can be used to generate new datasets of tableware objects of diverse shapes and sizes. Experiments with T²SQNet trained on TablewareNet show that T²SQNet outperforms existing methods in recognizing transparent objects, in some cases by significant margins, and can be effectively used in robotic applications such as decluttering and target retrieval.
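
As background for readers unfamiliar with the shape family the abstract refers to, the sketch below gives the standard superquadric inside-outside function (Barr's formulation). It illustrates why a superquadric is a low-dimensional representation; the abstract does not specify how the paper's extended deformable superquadrics modify this baseline, so the extension parameters are not shown here.

% Standard superquadric inside-outside function (background only; the paper's
% extended deformable superquadrics add further deformation parameters not
% detailed in this abstract). Scales a_1, a_2, a_3 and shape exponents
% \epsilon_1, \epsilon_2 give a five-parameter shape, plus a rigid-body pose.
\[
  F(x, y, z) =
  \left(
    \left(\frac{x}{a_1}\right)^{\tfrac{2}{\epsilon_2}} +
    \left(\frac{y}{a_2}\right)^{\tfrac{2}{\epsilon_2}}
  \right)^{\tfrac{\epsilon_2}{\epsilon_1}} +
  \left(\frac{z}{a_3}\right)^{\tfrac{2}{\epsilon_1}},
  \qquad
  F < 1 \ \text{inside}, \quad F = 1 \ \text{on the surface}, \quad F > 1 \ \text{outside}.
\]

Varying the exponents morphs the shape continuously between box-like, cylindrical, and ellipsoidal forms, which is what makes the family a compact fit for tableware geometry.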

Cite this Paper


BibTeX
@InProceedings{pmlr-v270-kim25d,
  title     = {T$^2$SQNet: A Recognition Model for Manipulating Partially Observed Transparent Tableware Objects},
  author    = {Kim, Young Hun and Kim, Seungyeon and Lee, Yonghyeon and Park, Frank C.},
  booktitle = {Proceedings of The 8th Conference on Robot Learning},
  pages     = {3622--3655},
  year      = {2025},
  editor    = {Agrawal, Pulkit and Kroemer, Oliver and Burgard, Wolfram},
  volume    = {270},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Nov},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v270/main/assets/kim25d/kim25d.pdf},
  url       = {https://proceedings.mlr.press/v270/kim25d.html},
  abstract  = {Recognizing and manipulating transparent tableware from partial view RGB image observations is made challenging by the difficulty in obtaining reliable depth measurements of transparent objects. In this paper we present the Transparent Tableware SuperQuadric Network (T$^2$SQNet), a neural network model that leverages a family of newly extended deformable superquadrics to produce low-dimensional, instance-wise and accurate 3D geometric representations of transparent objects from partial views. As a byproduct and contribution of independent interest, we also present TablewareNet, a publicly available toolset of seven parametrized shapes based on our extended deformable superquadrics, that can be used to generate new datasets of tableware objects of diverse shapes and sizes. Experiments with T$^2$SQNet trained with TablewareNet show that T$^2$SQNet outperforms existing methods in recognizing transparent objects, in some cases by significant margins, and can be effectively used in robotic applications like decluttering and target retrieval.}
}
Endnote
%0 Conference Paper
%T T$^2$SQNet: A Recognition Model for Manipulating Partially Observed Transparent Tableware Objects
%A Young Hun Kim
%A Seungyeon Kim
%A Yonghyeon Lee
%A Frank C. Park
%B Proceedings of The 8th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Pulkit Agrawal
%E Oliver Kroemer
%E Wolfram Burgard
%F pmlr-v270-kim25d
%I PMLR
%P 3622--3655
%U https://proceedings.mlr.press/v270/kim25d.html
%V 270
%X Recognizing and manipulating transparent tableware from partial view RGB image observations is made challenging by the difficulty in obtaining reliable depth measurements of transparent objects. In this paper we present the Transparent Tableware SuperQuadric Network (T$^2$SQNet), a neural network model that leverages a family of newly extended deformable superquadrics to produce low-dimensional, instance-wise and accurate 3D geometric representations of transparent objects from partial views. As a byproduct and contribution of independent interest, we also present TablewareNet, a publicly available toolset of seven parametrized shapes based on our extended deformable superquadrics, that can be used to generate new datasets of tableware objects of diverse shapes and sizes. Experiments with T$^2$SQNet trained with TablewareNet show that T$^2$SQNet outperforms existing methods in recognizing transparent objects, in some cases by significant margins, and can be effectively used in robotic applications like decluttering and target retrieval.
APA
Kim, Y.H., Kim, S., Lee, Y. & Park, F.C. (2025). T$^2$SQNet: A Recognition Model for Manipulating Partially Observed Transparent Tableware Objects. Proceedings of The 8th Conference on Robot Learning, in Proceedings of Machine Learning Research 270:3622-3655. Available from https://proceedings.mlr.press/v270/kim25d.html.
