Inferring geometric constraints in human demonstrations

Guru Subramani, Michael Zinn, Michael Gleicher
Proceedings of The 2nd Conference on Robot Learning, PMLR 87:223-236, 2018.

Abstract

This paper presents an approach for inferring geometric constraints in human demonstrations. In our method, geometric constraint models are built to create representations of kinematic constraints such as fixed point, axial rotation, prismatic motion, planar motion and others across multiple degrees of freedom. Our method infers geometric constraints using both kinematic and force/torque information. The approach first fits all the constraint models using kinematic information and evaluates them individually using position, force and moment criteria. Our approach does not require information about the constraint type or contact geometry; it can determine both simultaneously. We present experimental evaluations using instrumented tongs that show how constraints can be robustly inferred in recordings of human demonstrations.
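As an illustration of the kind of constraint-model fitting the abstract describes, here is a minimal sketch (not the paper's implementation; all names are hypothetical) of fitting one of the listed constraint types, a fixed point, to recorded kinematic data. A fixed-point constraint requires a point `r` in the moving body's frame to coincide with a fixed world point `c` in every sample, which yields a linear least-squares problem; the residual can then serve as a position criterion for ranking candidate models.

```python
import numpy as np

def fit_fixed_point(Rs, ps):
    """Least-squares fit of a fixed-point (ball-joint) constraint.

    The constraint requires R_t @ r + p_t = c for every sample t,
    where (R_t, p_t) is the recorded pose of the moving body.
    Stacking these equations gives a linear system in (r, c).
    Returns (r, c, rms_residual); the residual acts as the kinematic
    fit criterion for comparing candidate constraint models.
    """
    n = len(Rs)
    A = np.zeros((3 * n, 6))
    b = np.zeros(3 * n)
    for t, (R, p) in enumerate(zip(Rs, ps)):
        A[3 * t:3 * t + 3, :3] = R           # coefficient of r (body-frame point)
        A[3 * t:3 * t + 3, 3:] = -np.eye(3)  # coefficient of c (world-frame point)
        b[3 * t:3 * t + 3] = -p
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    r, c = x[:3], x[3:]
    rms = np.sqrt(np.mean((A @ x - b) ** 2))
    return r, c, rms
```

In the paper's framework each candidate constraint model (axial rotation, prismatic, planar, ...) would be fit in an analogous way and then evaluated with position, force, and moment criteria; this sketch covers only the kinematic fitting step for a single model.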

Cite this Paper


BibTeX
@InProceedings{pmlr-v87-subramani18a,
  title     = {Inferring geometric constraints in human demonstrations},
  author    = {Subramani, Guru and Zinn, Michael and Gleicher, Michael},
  booktitle = {Proceedings of The 2nd Conference on Robot Learning},
  pages     = {223--236},
  year      = {2018},
  editor    = {Billard, Aude and Dragan, Anca and Peters, Jan and Morimoto, Jun},
  volume    = {87},
  series    = {Proceedings of Machine Learning Research},
  month     = {29--31 Oct},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v87/subramani18a/subramani18a.pdf},
  url       = {https://proceedings.mlr.press/v87/subramani18a.html},
  abstract  = {This paper presents an approach for inferring geometric constraints in human demonstrations. In our method, geometric constraint models are built to create representations of kinematic constraints such as fixed point, axial rotation, prismatic motion, planar motion and others across multiple degrees of freedom. Our method infers geometric constraints using both kinematic and force/torque information. The approach first fits all the constraint models using kinematic information and evaluates them individually using position, force and moment criteria. Our approach does not require information about the constraint type or contact geometry; it can determine both simultaneously. We present experimental evaluations using instrumented tongs that show how constraints can be robustly inferred in recordings of human demonstrations.}
}
Endnote
%0 Conference Paper
%T Inferring geometric constraints in human demonstrations
%A Guru Subramani
%A Michael Zinn
%A Michael Gleicher
%B Proceedings of The 2nd Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Aude Billard
%E Anca Dragan
%E Jan Peters
%E Jun Morimoto
%F pmlr-v87-subramani18a
%I PMLR
%P 223--236
%U https://proceedings.mlr.press/v87/subramani18a.html
%V 87
%X This paper presents an approach for inferring geometric constraints in human demonstrations. In our method, geometric constraint models are built to create representations of kinematic constraints such as fixed point, axial rotation, prismatic motion, planar motion and others across multiple degrees of freedom. Our method infers geometric constraints using both kinematic and force/torque information. The approach first fits all the constraint models using kinematic information and evaluates them individually using position, force and moment criteria. Our approach does not require information about the constraint type or contact geometry; it can determine both simultaneously. We present experimental evaluations using instrumented tongs that show how constraints can be robustly inferred in recordings of human demonstrations.
APA
Subramani, G., Zinn, M., & Gleicher, M. (2018). Inferring geometric constraints in human demonstrations. Proceedings of The 2nd Conference on Robot Learning, in Proceedings of Machine Learning Research 87:223-236. Available from https://proceedings.mlr.press/v87/subramani18a.html.