DARLA: Improving Zero-Shot Transfer in Reinforcement Learning

Irina Higgins, Arka Pal, Andrei Rusu, Loic Matthey, Christopher Burgess, Alexander Pritzel, Matthew Botvinick, Charles Blundell, Alexander Lerchner
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:1480-1490, 2017.

Abstract

Domain adaptation is an important open problem in deep reinforcement learning (RL). In many scenarios of interest data is hard to obtain, so agents may learn a source policy in a setting where data is readily available, with the hope that it generalises well to the target domain. We propose a new multi-stage RL agent, DARLA (DisentAngled Representation Learning Agent), which learns to see before learning to act. DARLA’s vision is based on learning a disentangled representation of the observed environment. Once DARLA can see, it is able to acquire source policies that are robust to many domain shifts – even with no access to the target domain. DARLA significantly outperforms conventional baselines in zero-shot domain adaptation scenarios, an effect that holds across a variety of RL environments (Jaco arm, DeepMind Lab) and base RL algorithms (DQN, A3C and EC).
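
The abstract compresses a three-stage recipe: first learn a disentangled vision module on the source domain, then train a policy on top of its frozen latents, then deploy both unchanged on the target domain. Below is a minimal PyTorch-style sketch of that pipeline, under stated assumptions: the architecture, the 64x64 RGB input size, and all class and parameter names are illustrative rather than the authors' code, and a plain pixel reconstruction loss stands in for the perceptual (denoising-autoencoder) similarity loss used in the paper.

# Hypothetical sketch of DARLA's multi-stage pipeline; names are illustrative.
# Stage 1 trains a beta-VAE to "learn to see"; Stage 2 trains a policy head on
# the frozen latents ("learn to act"); Stage 3 reuses both unchanged on the
# target domain (zero-shot transfer).
import torch
import torch.nn as nn
import torch.nn.functional as F

class BetaVAE(nn.Module):
    """beta > 1 pressures the latents toward a disentangled representation."""
    def __init__(self, latent_dim=32, beta=4.0):
        super().__init__()
        self.beta = beta
        self.enc = nn.Sequential(                     # assumes 3x64x64 inputs
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),   # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 2 * latent_dim), # mean and log-variance
        )
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1),
        )

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterise
        return self.dec(z), mu, logvar

    def loss(self, x):
        recon, mu, logvar = self(x)
        # Pixel-space reconstruction here is a simplification of the paper's
        # perceptual (DAE feature-space) reconstruction target.
        rec = F.mse_loss(recon, x, reduction="sum") / x.size(0)
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(1).mean()
        return rec + self.beta * kl

class LatentPolicy(nn.Module):
    """Stage 2: the policy sees only the frozen disentangled latents."""
    def __init__(self, vae, n_actions, latent_dim=32):
        super().__init__()
        self.vae = vae
        for p in self.vae.parameters():
            p.requires_grad = False  # vision module stays fixed
        self.head = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                  nn.Linear(128, n_actions))

    def forward(self, obs):
        mu, _ = self.vae.enc(obs).chunk(2, dim=1)  # use the latent means
        return self.head(mu)  # action logits for the base RL algorithm

# Usage sketch: train BetaVAE on unlabelled source-domain frames, then plug
# LatentPolicy into any base RL algorithm (the paper uses DQN, A3C and EC).
# Zero-shot transfer means neither module is updated on the target domain.

Because the policy conditions only on factors the vision module has already disentangled, a domain shift that perturbs irrelevant visual factors need not perturb the policy's effective input, which is the intuition behind the zero-shot robustness the abstract reports.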

Cite this Paper


BibTeX
@InProceedings{pmlr-v70-higgins17a,
  title     = {{DARLA}: Improving Zero-Shot Transfer in Reinforcement Learning},
  author    = {Irina Higgins and Arka Pal and Andrei Rusu and Loic Matthey and Christopher Burgess and Alexander Pritzel and Matthew Botvinick and Charles Blundell and Alexander Lerchner},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {1480--1490},
  year      = {2017},
  editor    = {Precup, Doina and Teh, Yee Whye},
  volume    = {70},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--11 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v70/higgins17a/higgins17a.pdf},
  url       = {https://proceedings.mlr.press/v70/higgins17a.html},
  abstract  = {Domain adaptation is an important open problem in deep reinforcement learning (RL). In many scenarios of interest data is hard to obtain, so agents may learn a source policy in a setting where data is readily available, with the hope that it generalises well to the target domain. We propose a new multi-stage RL agent, DARLA (DisentAngled Representation Learning Agent), which learns to see before learning to act. DARLA’s vision is based on learning a disentangled representation of the observed environment. Once DARLA can see, it is able to acquire source policies that are robust to many domain shifts – even with no access to the target domain. DARLA significantly outperforms conventional baselines in zero-shot domain adaptation scenarios, an effect that holds across a variety of RL environments (Jaco arm, DeepMind Lab) and base RL algorithms (DQN, A3C and EC).}
}
Endnote
%0 Conference Paper
%T DARLA: Improving Zero-Shot Transfer in Reinforcement Learning
%A Irina Higgins
%A Arka Pal
%A Andrei Rusu
%A Loic Matthey
%A Christopher Burgess
%A Alexander Pritzel
%A Matthew Botvinick
%A Charles Blundell
%A Alexander Lerchner
%B Proceedings of the 34th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Doina Precup
%E Yee Whye Teh
%F pmlr-v70-higgins17a
%I PMLR
%P 1480--1490
%U https://proceedings.mlr.press/v70/higgins17a.html
%V 70
%X Domain adaptation is an important open problem in deep reinforcement learning (RL). In many scenarios of interest data is hard to obtain, so agents may learn a source policy in a setting where data is readily available, with the hope that it generalises well to the target domain. We propose a new multi-stage RL agent, DARLA (DisentAngled Representation Learning Agent), which learns to see before learning to act. DARLA’s vision is based on learning a disentangled representation of the observed environment. Once DARLA can see, it is able to acquire source policies that are robust to many domain shifts – even with no access to the target domain. DARLA significantly outperforms conventional baselines in zero-shot domain adaptation scenarios, an effect that holds across a variety of RL environments (Jaco arm, DeepMind Lab) and base RL algorithms (DQN, A3C and EC).
APA
Higgins, I., Pal, A., Rusu, A., Matthey, L., Burgess, C., Pritzel, A., Botvinick, M., Blundell, C. & Lerchner, A. (2017). DARLA: Improving Zero-Shot Transfer in Reinforcement Learning. Proceedings of the 34th International Conference on Machine Learning, in Proceedings of Machine Learning Research 70:1480-1490. Available from https://proceedings.mlr.press/v70/higgins17a.html.
