Learning Dense Visual Descriptors using Image Augmentations for Robot Manipulation Tasks

Christian Graf, David B. Adrian, Joshua Weil, Miroslav Gabriel, Philipp Schillinger, Markus Spies, Heiko Neumann, Andras Gabor Kupcsik
Proceedings of The 6th Conference on Robot Learning, PMLR 205:871-880, 2023.

Abstract

We propose a self-supervised training approach for learning view-invariant dense visual descriptors using image augmentations. Unlike existing works, which often require complex datasets, such as registered RGBD sequences, we train on an unordered set of RGB images. This allows for learning from a single camera view, e.g., in an existing robotic cell with a fixed-mounted camera. We create synthetic views and dense pixel correspondences using data augmentations. We find that our descriptors are competitive with existing methods, despite the simpler data recording and setup requirements. We show that training on synthetic correspondences provides descriptor consistency across a broad range of camera views. We compare against training with geometric correspondences from multiple views and provide ablation studies. We also show a robotic bin-picking experiment using descriptors learned from a fixed-mounted camera for defining grasp preferences.
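
The key mechanism described in the abstract is that a known image augmentation immediately yields ground-truth dense pixel correspondences between the original image and the synthetic view. The sketch below is a minimal, hypothetical illustration of that idea, assuming NumPy and OpenCV and a simple random affine augmentation; the function names, parameters, and the affine-only warp are assumptions for illustration and do not reproduce the authors' actual augmentation pipeline, which may include additional transforms such as color jitter or perspective warps.

```python
import numpy as np
import cv2


def random_affine(h, w, max_rot_deg=30.0, max_scale=0.2, max_trans=0.1):
    """Sample a random affine augmentation as a 3x3 homogeneous matrix (illustrative)."""
    angle = np.random.uniform(-max_rot_deg, max_rot_deg)
    scale = 1.0 + np.random.uniform(-max_scale, max_scale)
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, scale)  # 2x3 rotation+scale
    M[0, 2] += np.random.uniform(-max_trans, max_trans) * w        # random translation
    M[1, 2] += np.random.uniform(-max_trans, max_trans) * h
    return np.vstack([M, [0.0, 0.0, 1.0]])                          # lift to 3x3


def synthetic_correspondences(image, n_pairs=1000):
    """Warp `image` with a random affine transform and return pixel correspondences
    (u, v) in the original image paired with (u', v') in the synthetic view."""
    h, w = image.shape[:2]
    T = random_affine(h, w)
    warped = cv2.warpPerspective(image, T, (w, h))

    # Sample source pixels and push them through the known transform.
    us = np.random.randint(0, w, size=n_pairs)
    vs = np.random.randint(0, h, size=n_pairs)
    pts = np.stack([us, vs, np.ones(n_pairs)], axis=0)   # 3 x N homogeneous coords
    pts_warped = T @ pts
    pts_warped = pts_warped[:2] / pts_warped[2:3]

    # Keep only correspondences that land inside the warped image.
    valid = (
        (pts_warped[0] >= 0) & (pts_warped[0] < w) &
        (pts_warped[1] >= 0) & (pts_warped[1] < h)
    )
    src = np.stack([us, vs], axis=1)[valid]
    dst = pts_warped.T[valid]
    return warped, src, dst
```

In such a setup, the returned (src, dst) pairs could serve as positive matches for a pixelwise contrastive loss between descriptors of the original and augmented images, which is the role that geometric correspondences from registered RGBD sequences play in prior work.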

Cite this Paper


BibTeX
@InProceedings{pmlr-v205-graf23a,
  title     = {Learning Dense Visual Descriptors using Image Augmentations for Robot Manipulation Tasks},
  author    = {Graf, Christian and Adrian, David B. and Weil, Joshua and Gabriel, Miroslav and Schillinger, Philipp and Spies, Markus and Neumann, Heiko and Kupcsik, Andras Gabor},
  booktitle = {Proceedings of The 6th Conference on Robot Learning},
  pages     = {871--880},
  year      = {2023},
  editor    = {Liu, Karen and Kulic, Dana and Ichnowski, Jeff},
  volume    = {205},
  series    = {Proceedings of Machine Learning Research},
  month     = {14--18 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v205/graf23a/graf23a.pdf},
  url       = {https://proceedings.mlr.press/v205/graf23a.html},
  abstract  = {We propose a self-supervised training approach for learning view-invariant dense visual descriptors using image augmentations. Unlike existing works, which often require complex datasets, such as registered RGBD sequences, we train on an unordered set of RGB images. This allows for learning from a single camera view, e.g., in an existing robotic cell with a fix-mounted camera. We create synthetic views and dense pixel correspondences using data augmentations. We find our descriptors are competitive to the existing methods, despite the simpler data recording and setup requirements. We show that training on synthetic correspondences provides descriptor consistency across a broad range of camera views. We compare against training with geometric correspondence from multiple views and provide ablation studies. We also show a robotic bin-picking experiment using descriptors learned from a fix-mounted camera for defining grasp preferences.}
}
Endnote
%0 Conference Paper
%T Learning Dense Visual Descriptors using Image Augmentations for Robot Manipulation Tasks
%A Christian Graf
%A David B. Adrian
%A Joshua Weil
%A Miroslav Gabriel
%A Philipp Schillinger
%A Markus Spies
%A Heiko Neumann
%A Andras Gabor Kupcsik
%B Proceedings of The 6th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Karen Liu
%E Dana Kulic
%E Jeff Ichnowski
%F pmlr-v205-graf23a
%I PMLR
%P 871--880
%U https://proceedings.mlr.press/v205/graf23a.html
%V 205
%X We propose a self-supervised training approach for learning view-invariant dense visual descriptors using image augmentations. Unlike existing works, which often require complex datasets, such as registered RGBD sequences, we train on an unordered set of RGB images. This allows for learning from a single camera view, e.g., in an existing robotic cell with a fix-mounted camera. We create synthetic views and dense pixel correspondences using data augmentations. We find our descriptors are competitive to the existing methods, despite the simpler data recording and setup requirements. We show that training on synthetic correspondences provides descriptor consistency across a broad range of camera views. We compare against training with geometric correspondence from multiple views and provide ablation studies. We also show a robotic bin-picking experiment using descriptors learned from a fix-mounted camera for defining grasp preferences.
APA
Graf, C., Adrian, D.B., Weil, J., Gabriel, M., Schillinger, P., Spies, M., Neumann, H. & Kupcsik, A.G. (2023). Learning Dense Visual Descriptors using Image Augmentations for Robot Manipulation Tasks. Proceedings of The 6th Conference on Robot Learning, in Proceedings of Machine Learning Research 205:871-880. Available from https://proceedings.mlr.press/v205/graf23a.html.
