Discovering Multiple Solutions from a Single Task in Offline Reinforcement Learning

Takayuki Osa, Tatsuya Harada
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:38864-38884, 2024.

Abstract

Recent studies on online reinforcement learning (RL) have demonstrated the advantages of learning multiple behaviors from a single task, as in the case of few-shot adaptation to a new environment. Although this approach is expected to yield similar benefits in offline RL, appropriate methods for learning multiple solutions have not been fully investigated in previous studies. In this study, we therefore address the problem of finding multiple solutions from a single task in offline RL. We propose algorithms that can learn multiple solutions in offline RL, and empirically investigate their performance. Our experimental results show that the proposed algorithm learns multiple qualitatively and quantitatively distinctive solutions in offline RL.
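
The abstract does not describe the algorithm itself, so the following is only a hedged illustration of the general idea of representing multiple behaviors for a single task, using a latent-conditioned policy (a common construction in this line of work, not necessarily the paper's method). All class and variable names are hypothetical.

# Minimal sketch (not the paper's algorithm): a policy conditioned on a latent
# code z, so that different values of z can encode different solutions to the
# same task. In offline RL, such a policy would be trained from a fixed dataset.
import torch
import torch.nn as nn

class LatentConditionedPolicy(nn.Module):
    """Maps (state, latent code z) to an action."""
    def __init__(self, state_dim: int, latent_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),
        )

    def forward(self, state: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        # Concatenate the state with the latent code and predict an action.
        return self.net(torch.cat([state, z], dim=-1))

# Usage: the same network produces distinct behaviors for the same state
# when queried with distinct latent codes.
policy = LatentConditionedPolicy(state_dim=17, latent_dim=2, action_dim=6)
state = torch.randn(1, 17)
for _ in range(3):
    z = torch.nn.functional.normalize(torch.randn(1, 2), dim=-1)  # hypothetical "solution" code
    action = policy(state, z)
    print(action.shape)  # torch.Size([1, 6])
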

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-osa24a,
  title     = {Discovering Multiple Solutions from a Single Task in Offline Reinforcement Learning},
  author    = {Osa, Takayuki and Harada, Tatsuya},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {38864--38884},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/osa24a/osa24a.pdf},
  url       = {https://proceedings.mlr.press/v235/osa24a.html},
  abstract  = {Recent studies on online reinforcement learning (RL) have demonstrated the advantages of learning multiple behaviors from a single task, as in the case of few-shot adaptation to a new environment. Although this approach is expected to yield similar benefits in offline RL, appropriate methods for learning multiple solutions have not been fully investigated in previous studies. In this study, we therefore addressed the problem of finding multiple solutions from a single task in offline RL. We propose algorithms that can learn multiple solutions in offline RL, and empirically investigate their performance. Our experimental results show that the proposed algorithm learns multiple qualitatively and quantitatively distinctive solutions in offline RL.}
}
Endnote
%0 Conference Paper
%T Discovering Multiple Solutions from a Single Task in Offline Reinforcement Learning
%A Takayuki Osa
%A Tatsuya Harada
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-osa24a
%I PMLR
%P 38864--38884
%U https://proceedings.mlr.press/v235/osa24a.html
%V 235
%X Recent studies on online reinforcement learning (RL) have demonstrated the advantages of learning multiple behaviors from a single task, as in the case of few-shot adaptation to a new environment. Although this approach is expected to yield similar benefits in offline RL, appropriate methods for learning multiple solutions have not been fully investigated in previous studies. In this study, we therefore addressed the problem of finding multiple solutions from a single task in offline RL. We propose algorithms that can learn multiple solutions in offline RL, and empirically investigate their performance. Our experimental results show that the proposed algorithm learns multiple qualitatively and quantitatively distinctive solutions in offline RL.
APA
Osa, T. & Harada, T. (2024). Discovering Multiple Solutions from a Single Task in Offline Reinforcement Learning. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:38864-38884. Available from https://proceedings.mlr.press/v235/osa24a.html.