Language-guided Semantic Mapping and Mobile Manipulation in Partially Observable Environments

Siddharth Patki, Ethan Fahnestock, Thomas M. Howard, Matthew R. Walter
Proceedings of the Conference on Robot Learning, PMLR 100:1201-1210, 2020.

Abstract

Recent advances in data-driven models for grounded language understanding have enabled robots to interpret increasingly complex instructions. Two fundamental limitations of these methods are that most require a full model of the environment to be known a priori, and they attempt to reason over a world representation that is flat and unnecessarily detailed, which limits scalability. Recent semantic mapping methods address partial observability by exploiting language as a sensor to infer a distribution over topological, metric and semantic properties of the environment. However, maintaining a distribution over highly detailed maps that can support grounding of diverse instructions is computationally expensive and hinders real-time human-robot collaboration. We propose a novel framework that learns to adapt perception according to the task in order to maintain compact distributions over semantic maps. Experiments with a mobile manipulator demonstrate more efficient instruction following in a priori unknown environments.

Cite this Paper

BibTeX
@InProceedings{pmlr-v100-patki20a,
  title     = {Language-guided Semantic Mapping and Mobile Manipulation in Partially Observable Environments},
  author    = {Patki, Siddharth and Fahnestock, Ethan and Howard, Thomas M. and Walter, Matthew R.},
  booktitle = {Proceedings of the Conference on Robot Learning},
  pages     = {1201--1210},
  year      = {2020},
  editor    = {Kaelbling, Leslie Pack and Kragic, Danica and Sugiura, Komei},
  volume    = {100},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Oct--01 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v100/patki20a/patki20a.pdf},
  url       = {https://proceedings.mlr.press/v100/patki20a.html},
  abstract  = {Recent advances in data-driven models for grounded language understanding have enabled robots to interpret increasingly complex instructions. Two fundamental limitations of these methods are that most require a full model of the environment to be known a priori, and they attempt to reason over a world representation that is flat and unnecessarily detailed, which limits scalability. Recent semantic mapping methods address partial observability by exploiting language as a sensor to infer a distribution over topological, metric and semantic properties of the environment. However, maintaining a distribution over highly detailed maps that can support grounding of diverse instructions is computationally expensive and hinders real-time human-robot collaboration. We propose a novel framework that learns to adapt perception according to the task in order to maintain compact distributions over semantic maps. Experiments with a mobile manipulator demonstrate more efficient instruction following in a priori unknown environments.}
}
Endnote
%0 Conference Paper
%T Language-guided Semantic Mapping and Mobile Manipulation in Partially Observable Environments
%A Siddharth Patki
%A Ethan Fahnestock
%A Thomas M. Howard
%A Matthew R. Walter
%B Proceedings of the Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Leslie Pack Kaelbling
%E Danica Kragic
%E Komei Sugiura
%F pmlr-v100-patki20a
%I PMLR
%P 1201--1210
%U https://proceedings.mlr.press/v100/patki20a.html
%V 100
%X Recent advances in data-driven models for grounded language understanding have enabled robots to interpret increasingly complex instructions. Two fundamental limitations of these methods are that most require a full model of the environment to be known a priori, and they attempt to reason over a world representation that is flat and unnecessarily detailed, which limits scalability. Recent semantic mapping methods address partial observability by exploiting language as a sensor to infer a distribution over topological, metric and semantic properties of the environment. However, maintaining a distribution over highly detailed maps that can support grounding of diverse instructions is computationally expensive and hinders real-time human-robot collaboration. We propose a novel framework that learns to adapt perception according to the task in order to maintain compact distributions over semantic maps. Experiments with a mobile manipulator demonstrate more efficient instruction following in a priori unknown environments.
APA
Patki, S., Fahnestock, E., Howard, T.M. & Walter, M.R. (2020). Language-guided Semantic Mapping and Mobile Manipulation in Partially Observable Environments. Proceedings of the Conference on Robot Learning, in Proceedings of Machine Learning Research 100:1201-1210. Available from https://proceedings.mlr.press/v100/patki20a.html.