Learning to Generalize Kinematic Models to Novel Objects

Ben Abbatematteo, Stefanie Tellex, George Konidaris
Proceedings of the Conference on Robot Learning, PMLR 100:1289-1299, 2020.

Abstract

Robots operating in human environments must be capable of interacting with a wide variety of articulated objects such as cabinets, refrigerators, and drawers. Existing approaches require human demonstration or minutes of interaction to fit kinematic models to each novel object from scratch. We present a framework for estimating the kinematic model and configuration of previously unseen articulated objects, conditioned upon object type, from as little as a single observation. We train our system in simulation with a novel dataset of synthetic articulated objects; at runtime, our model can predict the shape and kinematic model of an object from depth sensor data. We demonstrate that our approach enables a MOVO robot to view an object with its RGB-D sensor, estimate its motion model, and use that estimate to interact with the object.
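To make the abstract's pipeline concrete, here is a minimal, hypothetical sketch of the kind of estimator it describes: a network that maps a single depth observation, conditioned on an object-type label, to an articulation estimate (joint type, axis, origin, and current configuration). This is not the authors' architecture; all layer sizes, the number of object types, and the output parameterization below are illustrative assumptions.

# Illustrative sketch only (not the paper's actual model): depth image +
# object-type label -> joint type logits, unit joint axis, joint origin,
# and scalar configuration. All dimensions here are assumptions.
import torch
import torch.nn as nn

class KinematicModelEstimator(nn.Module):
    def __init__(self, num_object_types: int = 8):
        super().__init__()
        # Convolutional encoder for a 1-channel depth image (e.g. 128x128).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Object-type conditioning via a learned embedding.
        self.type_embed = nn.Embedding(num_object_types, 32)
        self.trunk = nn.Sequential(nn.Linear(128 + 32, 256), nn.ReLU())
        # Output heads: joint type (revolute vs. prismatic), 3-D axis,
        # 3-D origin, and current configuration (angle or displacement).
        self.joint_type = nn.Linear(256, 2)
        self.axis = nn.Linear(256, 3)
        self.origin = nn.Linear(256, 3)
        self.configuration = nn.Linear(256, 1)

    def forward(self, depth: torch.Tensor, obj_type: torch.Tensor):
        h = torch.cat([self.encoder(depth), self.type_embed(obj_type)], dim=-1)
        h = self.trunk(h)
        axis = nn.functional.normalize(self.axis(h), dim=-1)  # unit-norm axis
        return self.joint_type(h), axis, self.origin(h), self.configuration(h)

# Example: one 128x128 depth image of an object labeled as type 3 (e.g. "drawer").
model = KinematicModelEstimator()
logits, axis, origin, q = model(torch.randn(1, 1, 128, 128), torch.tensor([3]))

In the paper's setting, such an estimator would be trained entirely on the synthetic articulated-object dataset and then queried once at runtime on real depth data to produce the motion model the robot uses for interaction.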

Cite this Paper


BibTeX
@InProceedings{pmlr-v100-abbatematteo20a,
  title     = {Learning to Generalize Kinematic Models to Novel Objects},
  author    = {Abbatematteo, Ben and Tellex, Stefanie and Konidaris, George},
  booktitle = {Proceedings of the Conference on Robot Learning},
  pages     = {1289--1299},
  year      = {2020},
  editor    = {Kaelbling, Leslie Pack and Kragic, Danica and Sugiura, Komei},
  volume    = {100},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Oct--01 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v100/abbatematteo20a/abbatematteo20a.pdf},
  url       = {https://proceedings.mlr.press/v100/abbatematteo20a.html},
  abstract  = {Robots operating in human environments must be capable of interacting with a wide variety of articulated objects such as cabinets, refrigerators, and drawers. Existing approaches require human demonstration or minutes of interaction to fit kinematic models to each novel object from scratch. We present a framework for estimating the kinematic model and configuration of previously unseen articulated objects, conditioned upon object type, from as little as a single observation. We train our system in simulation with a novel dataset of synthetic articulated objects; at runtime, our model can predict the shape and kinematic model of an object from depth sensor data. We demonstrate that our approach enables a MOVO robot to view an object with its RGB-D sensor, estimate its motion model, and use that estimate to interact with the object.}
}
Endnote
%0 Conference Paper
%T Learning to Generalize Kinematic Models to Novel Objects
%A Ben Abbatematteo
%A Stefanie Tellex
%A George Konidaris
%B Proceedings of the Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Leslie Pack Kaelbling
%E Danica Kragic
%E Komei Sugiura
%F pmlr-v100-abbatematteo20a
%I PMLR
%P 1289--1299
%U https://proceedings.mlr.press/v100/abbatematteo20a.html
%V 100
%X Robots operating in human environments must be capable of interacting with a wide variety of articulated objects such as cabinets, refrigerators, and drawers. Existing approaches require human demonstration or minutes of interaction to fit kinematic models to each novel object from scratch. We present a framework for estimating the kinematic model and configuration of previously unseen articulated objects, conditioned upon object type, from as little as a single observation. We train our system in simulation with a novel dataset of synthetic articulated objects; at runtime, our model can predict the shape and kinematic model of an object from depth sensor data. We demonstrate that our approach enables a MOVO robot to view an object with its RGB-D sensor, estimate its motion model, and use that estimate to interact with the object.
APA
Abbatematteo, B., Tellex, S. & Konidaris, G. (2020). Learning to Generalize Kinematic Models to Novel Objects. Proceedings of the Conference on Robot Learning, in Proceedings of Machine Learning Research 100:1289-1299. Available from https://proceedings.mlr.press/v100/abbatematteo20a.html.