PoET: Pose Estimation Transformer for Single-View, Multi-Object 6D Pose Estimation

Thomas Georg Jantos, Mohamed Amin Hamdad, Wolfgang Granig, Stephan Weiss, Jan Steinbrener
Proceedings of The 6th Conference on Robot Learning, PMLR 205:1060-1070, 2023.

Abstract

Accurate 6D object pose estimation is an important task for a variety of robotic applications such as grasping or localization. It is a challenging task due to object symmetries, clutter and occlusion, but it becomes more challenging when additional information, such as depth and 3D models, is not provided. We present a transformer-based approach that takes an RGB image as input and predicts a 6D pose for each object in the image. Besides the image, our network does not require any additional information such as depth maps or 3D object models. First, the image is passed through an object detector to generate feature maps and to detect objects. Then, the feature maps are fed into a transformer with the detected bounding boxes as additional information. Afterwards, the output object queries are processed by a separate translation and rotation head. We achieve state-of-the-art results for RGB-only approaches on the challenging YCB-V dataset. We illustrate the suitability of the resulting model as a pose sensor for a 6-DoF state estimation task. Code is available at https://github.com/aau-cns/poet.
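The pipeline the abstract describes has three stages: an object detector yields feature maps and bounding boxes, a transformer converts these into one object query per detection, and separate heads regress translation and rotation. The following minimal PyTorch sketch illustrates the last stage only; the class name, the embedding width of 256, and the 6-dimensional rotation parameterization are illustrative assumptions rather than the authors' implementation, which is available in the linked repository.

import torch
import torch.nn as nn

class PoseHeads(nn.Module):
    """Separate MLP heads mapping each transformer object query to a
    3D translation and a rotation representation (hypothetical sketch;
    dimensions are assumptions, not the paper's exact choices)."""
    def __init__(self, d_model: int = 256, rot_dim: int = 6):
        super().__init__()
        self.translation = nn.Sequential(
            nn.Linear(d_model, d_model), nn.ReLU(),
            nn.Linear(d_model, 3))
        self.rotation = nn.Sequential(
            nn.Linear(d_model, d_model), nn.ReLU(),
            nn.Linear(d_model, rot_dim))

    def forward(self, queries: torch.Tensor):
        # queries: (num_objects, d_model), one embedding per detected
        # object, as produced by the transformer from the detector's
        # feature maps and bounding boxes.
        return self.translation(queries), self.rotation(queries)

# Toy usage: pretend the transformer produced queries for 4 detections.
queries = torch.randn(4, 256)
t, r = PoseHeads()(queries)
print(t.shape, r.shape)  # torch.Size([4, 3]) torch.Size([4, 6])

Regressing translation and rotation through separate heads lets each branch specialize, a common design choice in pose regression networks.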

Cite this Paper


BibTeX
@InProceedings{pmlr-v205-jantos23a,
  title     = {PoET: Pose Estimation Transformer for Single-View, Multi-Object 6D Pose Estimation},
  author    = {Jantos, Thomas Georg and Hamdad, Mohamed Amin and Granig, Wolfgang and Weiss, Stephan and Steinbrener, Jan},
  booktitle = {Proceedings of The 6th Conference on Robot Learning},
  pages     = {1060--1070},
  year      = {2023},
  editor    = {Liu, Karen and Kulic, Dana and Ichnowski, Jeff},
  volume    = {205},
  series    = {Proceedings of Machine Learning Research},
  month     = {14--18 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v205/jantos23a/jantos23a.pdf},
  url       = {https://proceedings.mlr.press/v205/jantos23a.html}
}
Endnote
%0 Conference Paper
%T PoET: Pose Estimation Transformer for Single-View, Multi-Object 6D Pose Estimation
%A Thomas Georg Jantos
%A Mohamed Amin Hamdad
%A Wolfgang Granig
%A Stephan Weiss
%A Jan Steinbrener
%B Proceedings of The 6th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Karen Liu
%E Dana Kulic
%E Jeff Ichnowski
%F pmlr-v205-jantos23a
%I PMLR
%P 1060--1070
%U https://proceedings.mlr.press/v205/jantos23a.html
%V 205
APA
Jantos, T.G., Hamdad, M.A., Granig, W., Weiss, S. & Steinbrener, J. (2023). PoET: Pose Estimation Transformer for Single-View, Multi-Object 6D Pose Estimation. Proceedings of The 6th Conference on Robot Learning, in Proceedings of Machine Learning Research 205:1060-1070. Available from https://proceedings.mlr.press/v205/jantos23a.html.
