Mutual Alignment Transfer Learning

Markus Wulfmeier, Ingmar Posner, Pieter Abbeel
Proceedings of the 1st Annual Conference on Robot Learning, PMLR 78:281-290, 2017.

Abstract

Training robots for operation in the real world is a complex, time-consuming and potentially expensive task. Despite significant success of reinforcement learning in games and simulations, research in real robot applications has not been able to match similar progress. While sample complexity can be reduced by training policies in simulation, these can perform sub-optimally on the real platform given imperfect calibration of model dynamics. We present an approach - supplemental to fine-tuning on the real robot - to further benefit from parallel access to a simulator during training. The developed approach harnesses auxiliary rewards to guide the exploration of the real-world agent based on the proficiency of the agent in simulation and vice versa. In this context, we demonstrate empirically that the reciprocal alignment for both agents provides further benefit as the agent in simulation can adjust to optimize its behaviour for states commonly visited by the real-world agent.
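The reciprocal auxiliary rewards described above can be illustrated with a minimal sketch: a discriminator scores whether a visited state came from the simulation agent, and each agent is rewarded for reaching states attributed to the other. The function name, scalar interface, and exact reward form below are illustrative assumptions, not the paper's implementation (which trains the discriminator jointly with both policies).

```python
import math

def auxiliary_rewards(d_score, eps=1e-8):
    """Hypothetical shaping rewards for mutual alignment.

    d_score in (0, 1): a discriminator's estimate that a state was
    visited by the simulation agent. Rewarding each agent for states
    the discriminator attributes to the *other* agent pushes the two
    state-visitation distributions toward each other.
    """
    r_real = math.log(d_score + eps)       # real agent: seek sim-like states
    r_sim = math.log(1.0 - d_score + eps)  # sim agent: match real-visited states
    return r_real, r_sim

# Each agent would then optimize its task reward plus a weighted
# auxiliary term, e.g. r_total = r_task + lam * r_aux.
```

Under this sketch, a state the discriminator confidently attributes to simulation yields a mild penalty for the real agent but a strong one for the simulation agent, nudging the latter toward states the real platform actually visits.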

Cite this Paper


BibTeX
@InProceedings{pmlr-v78-wulfmeier17a,
  title     = {Mutual Alignment Transfer Learning},
  author    = {Wulfmeier, Markus and Posner, Ingmar and Abbeel, Pieter},
  booktitle = {Proceedings of the 1st Annual Conference on Robot Learning},
  pages     = {281--290},
  year      = {2017},
  editor    = {Levine, Sergey and Vanhoucke, Vincent and Goldberg, Ken},
  volume    = {78},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v78/wulfmeier17a/wulfmeier17a.pdf},
  url       = {https://proceedings.mlr.press/v78/wulfmeier17a.html},
  abstract  = {Training robots for operation in the real world is a complex, time-consuming and potentially expensive task. Despite significant success of reinforcement learning in games and simulations, research in real robot applications has not been able to match similar progress. While sample complexity can be reduced by training policies in simulation, these can perform sub-optimally on the real platform given imperfect calibration of model dynamics. We present an approach - supplemental to fine-tuning on the real robot - to further benefit from parallel access to a simulator during training. The developed approach harnesses auxiliary rewards to guide the exploration of the real-world agent based on the proficiency of the agent in simulation and vice versa. In this context, we demonstrate empirically that the reciprocal alignment for both agents provides further benefit as the agent in simulation can adjust to optimize its behaviour for states commonly visited by the real-world agent.}
}
Endnote
%0 Conference Paper
%T Mutual Alignment Transfer Learning
%A Markus Wulfmeier
%A Ingmar Posner
%A Pieter Abbeel
%B Proceedings of the 1st Annual Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Sergey Levine
%E Vincent Vanhoucke
%E Ken Goldberg
%F pmlr-v78-wulfmeier17a
%I PMLR
%P 281--290
%U https://proceedings.mlr.press/v78/wulfmeier17a.html
%V 78
%X Training robots for operation in the real world is a complex, time-consuming and potentially expensive task. Despite significant success of reinforcement learning in games and simulations, research in real robot applications has not been able to match similar progress. While sample complexity can be reduced by training policies in simulation, these can perform sub-optimally on the real platform given imperfect calibration of model dynamics. We present an approach - supplemental to fine-tuning on the real robot - to further benefit from parallel access to a simulator during training. The developed approach harnesses auxiliary rewards to guide the exploration of the real-world agent based on the proficiency of the agent in simulation and vice versa. In this context, we demonstrate empirically that the reciprocal alignment for both agents provides further benefit as the agent in simulation can adjust to optimize its behaviour for states commonly visited by the real-world agent.
APA
Wulfmeier, M., Posner, I. & Abbeel, P. (2017). Mutual Alignment Transfer Learning. Proceedings of the 1st Annual Conference on Robot Learning, in Proceedings of Machine Learning Research 78:281-290. Available from https://proceedings.mlr.press/v78/wulfmeier17a.html.