AnyMorph: Learning Transferable Policies By Inferring Agent Morphology

Brandon Trabucco, Mariano Phielipp, Glen Berseth
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:21677-21691, 2022.

Abstract

The prototypical approach to reinforcement learning involves training policies tailored to a particular agent from scratch for every new morphology. Recent work aims to eliminate the re-training of policies by investigating whether a morphology-agnostic policy, trained on a diverse set of agents with similar task objectives, can be transferred to new agents with unseen morphologies without re-training. This is a challenging problem that required previous approaches to use hand-designed descriptions of the new agent’s morphology. Instead of hand-designing this description, we propose a data-driven method that learns a representation of morphology directly from the reinforcement learning objective. Ours is the first reinforcement learning algorithm that can train a policy to generalize to new agent morphologies without requiring a description of the agent’s morphology in advance. We evaluate our approach on the standard benchmark for agent-agnostic control, and improve over the current state of the art in zero-shot generalization to new agents. Importantly, our method attains good performance without an explicit description of morphology.
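To make the core idea concrete, below is a minimal illustrative sketch (in PyTorch), not the authors' exact architecture: instead of a hand-designed morphology description, each sensor and actuator gets a learned identity embedding, and a shared encoder maps the resulting tokens to per-actuator actions, so the morphology representation is trained jointly with the policy under the reinforcement learning objective. The class name, embedding/encoder sizes, and the single-scalar-per-sensor observation format are assumptions made for this example.

# Illustrative sketch only: a morphology-agnostic policy with learned per-sensor
# and per-actuator embeddings in place of a hand-designed morphology description.
import torch
import torch.nn as nn

class MorphologyAgnosticPolicy(nn.Module):
    def __init__(self, num_sensor_ids, num_actuator_ids, embed_dim=64):
        super().__init__()
        # Learned identity embeddings are trained with the policy, standing in
        # for an explicit, hand-designed description of the agent's morphology.
        self.sensor_embed = nn.Embedding(num_sensor_ids, embed_dim)
        self.actuator_embed = nn.Embedding(num_actuator_ids, embed_dim)
        self.obs_proj = nn.Linear(1, embed_dim)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.action_head = nn.Linear(embed_dim, 1)

    def forward(self, obs, sensor_ids, actuator_ids):
        # obs: (batch, num_sensors, 1) scalar sensor readings; the id tensors
        # select the learned embeddings for this particular agent's parts.
        sensor_tokens = self.obs_proj(obs) + self.sensor_embed(sensor_ids)
        actuator_tokens = self.actuator_embed(actuator_ids)
        tokens = torch.cat([sensor_tokens, actuator_tokens], dim=1)
        hidden = self.encoder(tokens)
        # One action is read out from each trailing actuator token.
        actions = self.action_head(hidden[:, -actuator_ids.shape[1]:, :]).squeeze(-1)
        return torch.tanh(actions)

# Usage: a hypothetical agent with 5 sensors and 3 actuators, batch of 2.
policy = MorphologyAgnosticPolicy(num_sensor_ids=100, num_actuator_ids=100)
obs = torch.randn(2, 5, 1)
sensor_ids = torch.arange(5).repeat(2, 1)
actuator_ids = torch.arange(3).repeat(2, 1)
actions = policy(obs, sensor_ids, actuator_ids)  # shape: (2, 3)

Because the same weights process any number of sensor and actuator tokens, the same policy can, in principle, be applied to agents with different morphologies, with the embeddings (rather than a hand-designed description) carrying the morphology-specific information.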

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-trabucco22b,
  title     = {{A}ny{M}orph: Learning Transferable Policies By Inferring Agent Morphology},
  author    = {Trabucco, Brandon and Phielipp, Mariano and Berseth, Glen},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {21677--21691},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/trabucco22b/trabucco22b.pdf},
  url       = {https://proceedings.mlr.press/v162/trabucco22b.html}
}
Endnote
%0 Conference Paper
%T AnyMorph: Learning Transferable Policies By Inferring Agent Morphology
%A Brandon Trabucco
%A Mariano Phielipp
%A Glen Berseth
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-trabucco22b
%I PMLR
%P 21677--21691
%U https://proceedings.mlr.press/v162/trabucco22b.html
%V 162
APA
Trabucco, B., Phielipp, M. & Berseth, G. (2022). AnyMorph: Learning Transferable Policies By Inferring Agent Morphology. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:21677-21691. Available from https://proceedings.mlr.press/v162/trabucco22b.html.