Learning What Information to Give in Partially Observed Domains

Rohan Chitnis, Leslie Pack Kaelbling, Tomas Lozano-Perez
Proceedings of The 2nd Conference on Robot Learning, PMLR 87:724-733, 2018.

Abstract

In many robotic applications, an autonomous agent must act within and explore a partially observed environment that is unobserved by its human team-mate. We consider such a setting in which the agent can, while acting, transmit declarative information to the human that helps them understand aspects of this unseen environment. In this work, we address the algorithmic question of how the agent should plan out what actions to take and what information to transmit. Naturally, one would expect the human to have preferences, which we model information-theoretically by scoring transmitted information based on the change it induces in weighted entropy of the human’s belief state. We formulate this setting as a belief MDP and give a tractable algorithm for solving it approximately. Then, we give an algorithm that allows the agent to learn the human’s preferences online, through exploration. We validate our approach experimentally in simulated discrete and continuous partially observed search-and-recover domains. Visit http://tinyurl.com/chitnis-corl-18 for a supplementary video.
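The abstract's notion of scoring transmitted information by the change it induces in the weighted entropy of the human's belief state can be illustrated with a small sketch. This is not the paper's implementation; it assumes a standard weighted-entropy definition, H_w(b) = -Σ_s w(s) b(s) log b(s), over a discrete belief, with hypothetical state names and weights chosen for illustration.

```python
import math

def weighted_entropy(belief, weights):
    """Weighted entropy H_w(b) = -sum_s w(s) * b(s) * log b(s).

    `belief` maps each state to its probability; `weights` encodes how
    much the human cares about resolving uncertainty over each state.
    (Assumed definition, not taken verbatim from the paper.)
    """
    return -sum(weights[s] * p * math.log(p)
                for s, p in belief.items() if p > 0)

def information_score(belief_before, belief_after, weights):
    """Score a transmitted statement by the reduction in weighted
    entropy it induces in the human's belief state."""
    return (weighted_entropy(belief_before, weights)
            - weighted_entropy(belief_after, weights))

# Hypothetical example: two possible object locations; the human
# cares twice as much about location "A" as about "B".
weights = {"A": 2.0, "B": 1.0}
prior = {"A": 0.5, "B": 0.5}
posterior = {"A": 1.0, "B": 0.0}  # after being told the object is at A
score = information_score(prior, posterior, weights)
```

Under this definition, a statement that resolves uncertainty over a highly weighted state scores higher than one resolving the same amount of unweighted uncertainty elsewhere, which matches the abstract's description of modeling the human's preferences information-theoretically.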

Cite this Paper


BibTeX
@InProceedings{pmlr-v87-chitnis18a,
  title     = {Learning What Information to Give in Partially Observed Domains},
  author    = {Chitnis, Rohan and Kaelbling, Leslie Pack and Lozano-Perez, Tomas},
  booktitle = {Proceedings of The 2nd Conference on Robot Learning},
  pages     = {724--733},
  year      = {2018},
  editor    = {Billard, Aude and Dragan, Anca and Peters, Jan and Morimoto, Jun},
  volume    = {87},
  series    = {Proceedings of Machine Learning Research},
  month     = {29--31 Oct},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v87/chitnis18a/chitnis18a.pdf},
  url       = {https://proceedings.mlr.press/v87/chitnis18a.html},
  abstract  = {In many robotic applications, an autonomous agent must act within and explore a partially observed environment that is unobserved by its human team-mate. We consider such a setting in which the agent can, while acting, transmit declarative information to the human that helps them understand aspects of this unseen environment. In this work, we address the algorithmic question of how the agent should plan out what actions to take and what information to transmit. Naturally, one would expect the human to have preferences, which we model information-theoretically by scoring transmitted information based on the change it induces in weighted entropy of the human’s belief state. We formulate this setting as a belief MDP and give a tractable algorithm for solving it approximately. Then, we give an algorithm that allows the agent to learn the human’s preferences online, through exploration. We validate our approach experimentally in simulated discrete and continuous partially observed search-and-recover domains. Visit http://tinyurl.com/chitnis-corl-18 for a supplementary video.}
}
Endnote
%0 Conference Paper
%T Learning What Information to Give in Partially Observed Domains
%A Rohan Chitnis
%A Leslie Pack Kaelbling
%A Tomas Lozano-Perez
%B Proceedings of The 2nd Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Aude Billard
%E Anca Dragan
%E Jan Peters
%E Jun Morimoto
%F pmlr-v87-chitnis18a
%I PMLR
%P 724--733
%U https://proceedings.mlr.press/v87/chitnis18a.html
%V 87
%X In many robotic applications, an autonomous agent must act within and explore a partially observed environment that is unobserved by its human team-mate. We consider such a setting in which the agent can, while acting, transmit declarative information to the human that helps them understand aspects of this unseen environment. In this work, we address the algorithmic question of how the agent should plan out what actions to take and what information to transmit. Naturally, one would expect the human to have preferences, which we model information-theoretically by scoring transmitted information based on the change it induces in weighted entropy of the human’s belief state. We formulate this setting as a belief MDP and give a tractable algorithm for solving it approximately. Then, we give an algorithm that allows the agent to learn the human’s preferences online, through exploration. We validate our approach experimentally in simulated discrete and continuous partially observed search-and-recover domains. Visit http://tinyurl.com/chitnis-corl-18 for a supplementary video.
APA
Chitnis, R., Kaelbling, L.P. & Lozano-Perez, T. (2018). Learning What Information to Give in Partially Observed Domains. Proceedings of The 2nd Conference on Robot Learning, in Proceedings of Machine Learning Research 87:724-733. Available from https://proceedings.mlr.press/v87/chitnis18a.html.

Related Material