Online Prototype Alignment for Few-shot Policy Transfer

Qi Yi, Rui Zhang, Shaohui Peng, Jiaming Guo, Yunkai Gao, Kaizhao Yuan, Ruizhi Chen, Siming Lan, Xing Hu, Zidong Du, Xishan Zhang, Qi Guo, Yunji Chen
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:39968-39983, 2023.

Abstract

Domain adaptation in reinforcement learning (RL) mainly deals with changes in observations when a policy is transferred to a new environment. Many traditional approaches to domain adaptation in RL learn a mapping function between the source and target domains, either explicitly or implicitly. However, they typically require access to abundant data from the target domain. Moreover, they often rely on visual cues to learn the mapping function and may fail when the source domain looks quite different from the target domain. To address these problems, in this paper, we propose a novel framework, Online Prototype Alignment (OPA), which learns the mapping function based on the functional similarity of elements and achieves few-shot policy transfer within only a few episodes. The key insight of OPA is to introduce an exploration mechanism that interacts with the unseen elements of the target domain in an efficient and purposeful manner, and then connects them with the seen elements of the source domain according to their functionalities (instead of visual cues). Experimental results show that when the target domain looks visually different from the source domain, OPA achieves better transfer performance with far fewer samples from the target domain, outperforming prior methods.
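The core idea of the abstract (matching unseen target-domain elements to known source-domain prototypes by their observed interaction effects rather than their appearance) can be illustrated with a deliberately simplified sketch. The element names, the two-component "effect signature", and the distance function below are illustrative assumptions, not the authors' implementation:

```python
# Illustrative sketch: align target-domain elements to source-domain
# prototypes by functional similarity (observed interaction effects),
# not by visual appearance. All names and signatures are hypothetical.

# Functional signatures gathered in the SOURCE domain:
# (reward_on_touch, blocks_movement) per element prototype.
source_prototypes = {
    "wall": (0.0, True),    # no reward, impassable
    "coin": (1.0, False),   # reward, passable
    "lava": (-1.0, False),  # penalty, passable
}

def align(target_effects, prototypes):
    """Map each unseen target element to the source prototype whose
    interaction effects are closest (squared-error distance)."""
    mapping = {}
    for elem, (reward, blocked) in target_effects.items():
        mapping[elem] = min(
            prototypes,
            key=lambda p: (prototypes[p][0] - reward) ** 2
                          + (prototypes[p][1] != blocked),
        )
    return mapping

# Effects measured by probing the TARGET domain for a few episodes;
# the elements look different (new sprites) but behave the same.
target_effects = {
    "blue_block": (0.0, True),
    "red_gem": (1.0, False),
    "green_goo": (-1.0, False),
}

print(align(target_effects, source_prototypes))
# {'blue_block': 'wall', 'red_gem': 'coin', 'green_goo': 'lava'}
```

Once such a mapping is found, the source-domain policy can be reused on the target domain by relabeling observations through it, which is what makes the transfer few-shot: only enough target-domain interaction is needed to estimate each element's effects.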

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-yi23b,
  title     = {Online Prototype Alignment for Few-shot Policy Transfer},
  author    = {Yi, Qi and Zhang, Rui and Peng, Shaohui and Guo, Jiaming and Gao, Yunkai and Yuan, Kaizhao and Chen, Ruizhi and Lan, Siming and Hu, Xing and Du, Zidong and Zhang, Xishan and Guo, Qi and Chen, Yunji},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {39968--39983},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/yi23b/yi23b.pdf},
  url       = {https://proceedings.mlr.press/v202/yi23b.html},
  abstract  = {Domain adaptation in reinforcement learning (RL) mainly deals with changes in observations when a policy is transferred to a new environment. Many traditional approaches to domain adaptation in RL learn a mapping function between the source and target domains, either explicitly or implicitly. However, they typically require access to abundant data from the target domain. Moreover, they often rely on visual cues to learn the mapping function and may fail when the source domain looks quite different from the target domain. To address these problems, in this paper, we propose a novel framework, Online Prototype Alignment (OPA), which learns the mapping function based on the functional similarity of elements and achieves few-shot policy transfer within only a few episodes. The key insight of OPA is to introduce an exploration mechanism that interacts with the unseen elements of the target domain in an efficient and purposeful manner, and then connects them with the seen elements of the source domain according to their functionalities (instead of visual cues). Experimental results show that when the target domain looks visually different from the source domain, OPA achieves better transfer performance with far fewer samples from the target domain, outperforming prior methods.}
}
Endnote
%0 Conference Paper
%T Online Prototype Alignment for Few-shot Policy Transfer
%A Qi Yi
%A Rui Zhang
%A Shaohui Peng
%A Jiaming Guo
%A Yunkai Gao
%A Kaizhao Yuan
%A Ruizhi Chen
%A Siming Lan
%A Xing Hu
%A Zidong Du
%A Xishan Zhang
%A Qi Guo
%A Yunji Chen
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-yi23b
%I PMLR
%P 39968--39983
%U https://proceedings.mlr.press/v202/yi23b.html
%V 202
%X Domain adaptation in reinforcement learning (RL) mainly deals with changes in observations when a policy is transferred to a new environment. Many traditional approaches to domain adaptation in RL learn a mapping function between the source and target domains, either explicitly or implicitly. However, they typically require access to abundant data from the target domain. Moreover, they often rely on visual cues to learn the mapping function and may fail when the source domain looks quite different from the target domain. To address these problems, in this paper, we propose a novel framework, Online Prototype Alignment (OPA), which learns the mapping function based on the functional similarity of elements and achieves few-shot policy transfer within only a few episodes. The key insight of OPA is to introduce an exploration mechanism that interacts with the unseen elements of the target domain in an efficient and purposeful manner, and then connects them with the seen elements of the source domain according to their functionalities (instead of visual cues). Experimental results show that when the target domain looks visually different from the source domain, OPA achieves better transfer performance with far fewer samples from the target domain, outperforming prior methods.
APA
Yi, Q., Zhang, R., Peng, S., Guo, J., Gao, Y., Yuan, K., Chen, R., Lan, S., Hu, X., Du, Z., Zhang, X., Guo, Q., & Chen, Y. (2023). Online Prototype Alignment for Few-shot Policy Transfer. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:39968-39983. Available from https://proceedings.mlr.press/v202/yi23b.html.