Behavior Contrastive Learning for Unsupervised Skill Discovery

Rushuai Yang, Chenjia Bai, Hongyi Guo, Siyuan Li, Bin Zhao, Zhen Wang, Peng Liu, Xuelong Li
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:39183-39204, 2023.

Abstract

In reinforcement learning, unsupervised skill discovery aims to learn diverse skills without extrinsic rewards. Previous methods discover skills by maximizing the mutual information (MI) between states and skills. However, such an MI objective tends to learn simple and static skills and may hinder exploration. In this paper, we propose a novel unsupervised skill discovery method through contrastive learning among behaviors, which makes the agent produce similar behaviors for the same skill and diverse behaviors for different skills. Under mild assumptions, our objective maximizes the MI between different behaviors based on the same skill, which serves as an upper bound of the previous MI objective. Meanwhile, our method implicitly increases the state entropy to obtain better state coverage. We evaluate our method on challenging mazes and continuous control tasks. The results show that our method generates diverse and far-reaching skills, and also obtains competitive performance in downstream tasks compared to the state-of-the-art methods.
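The abstract describes a contrastive objective that pulls together behaviors generated by the same skill and pushes apart behaviors from different skills. As a rough illustration (not the paper's actual implementation), this family of objectives is often realized as an InfoNCE-style loss over behavior embeddings, where the positive pair is two behaviors conditioned on the same skill and other batch entries serve as negatives. The function below is a minimal sketch under that assumption; the embedding networks, sampling scheme, and temperature are placeholders:

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE-style contrastive loss (illustrative sketch).

    anchors, positives: (N, D) arrays of L2-normalized behavior
    embeddings. Row i of `positives` is a behavior sampled from the
    same skill as row i of `anchors`; all other rows act as negatives.
    """
    # Scaled cosine similarities between every anchor and every positive.
    logits = anchors @ positives.T / temperature              # (N, N)
    # Log-softmax over each anchor's row of candidates.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Diagonal entries are the matching (same-skill) pairs.
    return -np.mean(np.diag(log_probs))
```

Minimizing this loss increases agreement between same-skill behaviors relative to different-skill ones, which is the intuition behind maximizing MI between behaviors conditioned on the same skill.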

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-yang23a,
  title     = {Behavior Contrastive Learning for Unsupervised Skill Discovery},
  author    = {Yang, Rushuai and Bai, Chenjia and Guo, Hongyi and Li, Siyuan and Zhao, Bin and Wang, Zhen and Liu, Peng and Li, Xuelong},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {39183--39204},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/yang23a/yang23a.pdf},
  url       = {https://proceedings.mlr.press/v202/yang23a.html},
  abstract  = {In reinforcement learning, unsupervised skill discovery aims to learn diverse skills without extrinsic rewards. Previous methods discover skills by maximizing the mutual information (MI) between states and skills. However, such an MI objective tends to learn simple and static skills and may hinder exploration. In this paper, we propose a novel unsupervised skill discovery method through contrastive learning among behaviors, which makes the agent produce similar behaviors for the same skill and diverse behaviors for different skills. Under mild assumptions, our objective maximizes the MI between different behaviors based on the same skill, which serves as an upper bound of the previous MI objective. Meanwhile, our method implicitly increases the state entropy to obtain better state coverage. We evaluate our method on challenging mazes and continuous control tasks. The results show that our method generates diverse and far-reaching skills, and also obtains competitive performance in downstream tasks compared to the state-of-the-art methods.}
}
Endnote
%0 Conference Paper
%T Behavior Contrastive Learning for Unsupervised Skill Discovery
%A Rushuai Yang
%A Chenjia Bai
%A Hongyi Guo
%A Siyuan Li
%A Bin Zhao
%A Zhen Wang
%A Peng Liu
%A Xuelong Li
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-yang23a
%I PMLR
%P 39183--39204
%U https://proceedings.mlr.press/v202/yang23a.html
%V 202
%X In reinforcement learning, unsupervised skill discovery aims to learn diverse skills without extrinsic rewards. Previous methods discover skills by maximizing the mutual information (MI) between states and skills. However, such an MI objective tends to learn simple and static skills and may hinder exploration. In this paper, we propose a novel unsupervised skill discovery method through contrastive learning among behaviors, which makes the agent produce similar behaviors for the same skill and diverse behaviors for different skills. Under mild assumptions, our objective maximizes the MI between different behaviors based on the same skill, which serves as an upper bound of the previous MI objective. Meanwhile, our method implicitly increases the state entropy to obtain better state coverage. We evaluate our method on challenging mazes and continuous control tasks. The results show that our method generates diverse and far-reaching skills, and also obtains competitive performance in downstream tasks compared to the state-of-the-art methods.
APA
Yang, R., Bai, C., Guo, H., Li, S., Zhao, B., Wang, Z., Liu, P., & Li, X. (2023). Behavior Contrastive Learning for Unsupervised Skill Discovery. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:39183-39204. Available from https://proceedings.mlr.press/v202/yang23a.html.