Fast robust peg-in-hole insertion with continuous visual servoing

Rasmus Haugaard, Jeppe Langaa, Christoffer Sloth, Anders Buch
Proceedings of the 2020 Conference on Robot Learning, PMLR 155:1696-1705, 2021.

Abstract

This paper demonstrates a visual servoing method which is robust towards uncertainties related to system calibration and grasping, while significantly reducing the peg-in-hole time compared to classical methods and recent attempts based on deep learning. The proposed visual servoing method is based on peg and hole point estimates from a deep neural network in a multi-cam setup, where the model is trained on purely synthetic data. Empirical results show that the learnt model generalizes to the real world, allowing for higher success rates and lower cycle times than existing approaches.

Cite this Paper


BibTeX
@InProceedings{pmlr-v155-haugaard21a,
  title     = {Fast robust peg-in-hole insertion with continuous visual servoing},
  author    = {Haugaard, Rasmus and Langaa, Jeppe and Sloth, Christoffer and Buch, Anders},
  booktitle = {Proceedings of the 2020 Conference on Robot Learning},
  pages     = {1696--1705},
  year      = {2021},
  editor    = {Kober, Jens and Ramos, Fabio and Tomlin, Claire},
  volume    = {155},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--18 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v155/haugaard21a/haugaard21a.pdf},
  url       = {https://proceedings.mlr.press/v155/haugaard21a.html},
  abstract  = {This paper demonstrates a visual servoing method which is robust towards uncertainties related to system calibration and grasping, while significantly reducing the peg-in-hole time compared to classical methods and recent attempts based on deep learning. The proposed visual servoing method is based on peg and hole point estimates from a deep neural network in a multi-cam setup, where the model is trained on purely synthetic data. Empirical results show that the learnt model generalizes to the real world, allowing for higher success rates and lower cycle times than existing approaches.}
}
Endnote
%0 Conference Paper
%T Fast robust peg-in-hole insertion with continuous visual servoing
%A Rasmus Haugaard
%A Jeppe Langaa
%A Christoffer Sloth
%A Anders Buch
%B Proceedings of the 2020 Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Jens Kober
%E Fabio Ramos
%E Claire Tomlin
%F pmlr-v155-haugaard21a
%I PMLR
%P 1696--1705
%U https://proceedings.mlr.press/v155/haugaard21a.html
%V 155
%X This paper demonstrates a visual servoing method which is robust towards uncertainties related to system calibration and grasping, while significantly reducing the peg-in-hole time compared to classical methods and recent attempts based on deep learning. The proposed visual servoing method is based on peg and hole point estimates from a deep neural network in a multi-cam setup, where the model is trained on purely synthetic data. Empirical results show that the learnt model generalizes to the real world, allowing for higher success rates and lower cycle times than existing approaches.
APA
Haugaard, R., Langaa, J., Sloth, C. & Buch, A. (2021). Fast robust peg-in-hole insertion with continuous visual servoing. Proceedings of the 2020 Conference on Robot Learning, in Proceedings of Machine Learning Research 155:1696-1705. Available from https://proceedings.mlr.press/v155/haugaard21a.html.

Related Material