Discovering Generalizable Spatial Goal Representations via Graph-based Active Reward Learning

Aviv Netanyahu, Tianmin Shu, Joshua Tenenbaum, Pulkit Agrawal
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:16480-16495, 2022.

Abstract

In this work, we consider one-shot imitation learning for object rearrangement tasks, where an AI agent needs to watch a single expert demonstration and learn to perform the same task in different environments. To achieve a strong generalization, the AI agent must infer the spatial goal specification for the task. However, there can be multiple goal specifications that fit the given demonstration. To address this, we propose a reward learning approach, Graph-based Equivalence Mappings (GEM), that can discover spatial goal representations that are aligned with the intended goal specification, enabling successful generalization in unseen environments. Specifically, GEM represents a spatial goal specification by a reward function conditioned on i) a graph indicating important spatial relationships between objects and ii) state equivalence mappings for each edge in the graph indicating invariant properties of the corresponding relationship. GEM combines inverse reinforcement learning and active reward learning to efficiently improve the reward function by utilizing the graph structure and domain randomization enabled by the equivalence mappings. We conducted experiments with simulated oracles and with human subjects. The results show that GEM can drastically improve the generalizability of the learned goal representations over strong baselines.
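To make the abstract's central idea concrete, here is a minimal, hypothetical sketch (not code from the paper; all names and the specific distance relation are illustrative): a spatial goal is represented as a graph whose edges carry relation rewards, and each edge has equivalence mappings, state transformations that leave its reward unchanged, which is what enables the domain randomization the abstract mentions.

```python
import math

def distance_reward(state, a, b, target=1.0):
    """Illustrative edge reward: peaks when objects a and b
    sit at the target distance from each other."""
    xa, ya = state[a]
    xb, yb = state[b]
    d = math.hypot(xa - xb, ya - yb)
    return -abs(d - target)

def rotate_pair(state, a, b, angle):
    """Illustrative equivalence mapping: rotate object b around
    object a. A pure distance relation is invariant under it."""
    xa, ya = state[a]
    xb, yb = state[b]
    dx, dy = xb - xa, yb - ya
    new = dict(state)
    new[b] = (xa + dx * math.cos(angle) - dy * math.sin(angle),
              ya + dx * math.sin(angle) + dy * math.cos(angle))
    return new

# Goal graph: a single edge ("cup", "plate") with a distance relation.
edges = [("cup", "plate", distance_reward)]

def goal_reward(state):
    """Total reward: sum of relation rewards over the goal graph."""
    return sum(rel(state, a, b) for a, b, rel in edges)

state = {"cup": (0.0, 0.0), "plate": (1.0, 0.0)}
r0 = goal_reward(state)
# Applying the equivalence mapping leaves the reward unchanged,
# so randomized variants of a state can be generated for free.
r1 = goal_reward(rotate_pair(state, "cup", "plate", math.pi / 3))
assert abs(r0 - r1) < 1e-9
```

In this toy version, querying an oracle about whether a transformed state is still a goal state would correspond to testing whether a candidate mapping (like the rotation above) really is an equivalence for the intended relation.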

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-netanyahu22a,
  title     = {Discovering Generalizable Spatial Goal Representations via Graph-based Active Reward Learning},
  author    = {Netanyahu, Aviv and Shu, Tianmin and Tenenbaum, Joshua and Agrawal, Pulkit},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {16480--16495},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/netanyahu22a/netanyahu22a.pdf},
  url       = {https://proceedings.mlr.press/v162/netanyahu22a.html},
  abstract  = {In this work, we consider one-shot imitation learning for object rearrangement tasks, where an AI agent needs to watch a single expert demonstration and learn to perform the same task in different environments. To achieve a strong generalization, the AI agent must infer the spatial goal specification for the task. However, there can be multiple goal specifications that fit the given demonstration. To address this, we propose a reward learning approach, Graph-based Equivalence Mappings (GEM), that can discover spatial goal representations that are aligned with the intended goal specification, enabling successful generalization in unseen environments. Specifically, GEM represents a spatial goal specification by a reward function conditioned on i) a graph indicating important spatial relationships between objects and ii) state equivalence mappings for each edge in the graph indicating invariant properties of the corresponding relationship. GEM combines inverse reinforcement learning and active reward learning to efficiently improve the reward function by utilizing the graph structure and domain randomization enabled by the equivalence mappings. We conducted experiments with simulated oracles and with human subjects. The results show that GEM can drastically improve the generalizability of the learned goal representations over strong baselines.}
}
Endnote
%0 Conference Paper
%T Discovering Generalizable Spatial Goal Representations via Graph-based Active Reward Learning
%A Aviv Netanyahu
%A Tianmin Shu
%A Joshua Tenenbaum
%A Pulkit Agrawal
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-netanyahu22a
%I PMLR
%P 16480--16495
%U https://proceedings.mlr.press/v162/netanyahu22a.html
%V 162
%X In this work, we consider one-shot imitation learning for object rearrangement tasks, where an AI agent needs to watch a single expert demonstration and learn to perform the same task in different environments. To achieve a strong generalization, the AI agent must infer the spatial goal specification for the task. However, there can be multiple goal specifications that fit the given demonstration. To address this, we propose a reward learning approach, Graph-based Equivalence Mappings (GEM), that can discover spatial goal representations that are aligned with the intended goal specification, enabling successful generalization in unseen environments. Specifically, GEM represents a spatial goal specification by a reward function conditioned on i) a graph indicating important spatial relationships between objects and ii) state equivalence mappings for each edge in the graph indicating invariant properties of the corresponding relationship. GEM combines inverse reinforcement learning and active reward learning to efficiently improve the reward function by utilizing the graph structure and domain randomization enabled by the equivalence mappings. We conducted experiments with simulated oracles and with human subjects. The results show that GEM can drastically improve the generalizability of the learned goal representations over strong baselines.
APA
Netanyahu, A., Shu, T., Tenenbaum, J. & Agrawal, P. (2022). Discovering Generalizable Spatial Goal Representations via Graph-based Active Reward Learning. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:16480-16495. Available from https://proceedings.mlr.press/v162/netanyahu22a.html.