Uncertainty-Aware Reward-Free Exploration with General Function Approximation

Junkai Zhang, Weitong Zhang, Dongruo Zhou, Quanquan Gu
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:60414-60445, 2024.

Abstract

Mastering multiple tasks through exploration and learning in an environment poses a significant challenge in reinforcement learning (RL). Unsupervised RL has been introduced to address this challenge by training policies with intrinsic rewards rather than extrinsic rewards. However, current intrinsic reward designs and unsupervised RL algorithms often overlook the heterogeneous nature of collected samples, thereby diminishing their sample efficiency. To overcome this limitation, we propose a reward-free RL algorithm called GFA-RFE. The key idea behind our algorithm is an uncertainty-aware intrinsic reward for exploring the environment, combined with an uncertainty-weighted learning process that handles the heterogeneous uncertainty of different samples. Theoretically, we show that to find an $\epsilon$-optimal policy, GFA-RFE needs to collect $\tilde{O} (H^2 \log N_{\mathcal{F}} (\epsilon) \text{dim} (\mathcal{F}) / \epsilon^2 )$ episodes, where $\mathcal{F}$ is the value function class with covering number $N_{\mathcal{F}} (\epsilon)$ and generalized eluder dimension $\text{dim} (\mathcal{F})$. This sample complexity improves upon that of all existing reward-free RL algorithms. We further implement and evaluate GFA-RFE across various domains and tasks in the DeepMind Control Suite. Experimental results show that GFA-RFE outperforms or matches the performance of state-of-the-art unsupervised RL algorithms.
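
As an informal illustration of the two ingredients named in the abstract, and not the paper's actual GFA-RFE algorithm, the sketch below uses a small ensemble of randomly initialized linear value models as a hypothetical stand-in for the function class $\mathcal{F}$. Ensemble disagreement plays the role of an uncertainty estimate, which is used both as an intrinsic exploration reward and as an inverse-variance weight in the regression step. All names, the linear model choice, and the sigma_min floor are illustrative assumptions.

    import numpy as np

    # Minimal sketch (not the paper's implementation): an ensemble of linear
    # value models stands in for a general function class F. Disagreement
    # across the ensemble is treated as an uncertainty estimate, which serves
    # both as an intrinsic reward for exploration and as a per-sample weight
    # (inverse variance) in the regression step.

    rng = np.random.default_rng(0)
    STATE_DIM, N_MODELS = 8, 5

    # Ensemble of linear value models f_i(s) = w_i^T s (hypothetical stand-in for F).
    weights = [rng.normal(size=STATE_DIM) for _ in range(N_MODELS)]

    def uncertainty(state):
        """Ensemble disagreement (std of predictions) as an uncertainty proxy."""
        preds = np.array([w @ state for w in weights])
        return preds.std()

    def intrinsic_reward(state):
        """Uncertainty-aware intrinsic reward: explore where the models disagree."""
        return uncertainty(state)

    def weighted_regression(states, targets, sigma_min=0.1):
        """Uncertainty-weighted least squares: down-weight highly uncertain samples."""
        sigmas = np.array([max(uncertainty(s), sigma_min) for s in states])
        W = np.diag(1.0 / sigmas**2)                # weight = inverse variance
        X = np.stack(states)
        # Closed-form weighted least-squares fit for a linear model.
        return np.linalg.solve(X.T @ W @ X + 1e-6 * np.eye(STATE_DIM),
                               X.T @ W @ targets)

    # Toy usage: score a random state and refit one ensemble member.
    states = [rng.normal(size=STATE_DIM) for _ in range(32)]
    targets = rng.normal(size=32)                   # placeholder regression targets
    print("intrinsic reward of first state:", intrinsic_reward(states[0]))
    weights[0] = weighted_regression(states, targets)

Down-weighting high-uncertainty samples in the fit is one way to read the abstract's "uncertainty-weighted learning process to handle heterogeneous uncertainty"; in practice the uncertainty estimator would be derived from the function class actually used, not from a fixed linear ensemble.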

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-zhang24ci,
  title     = {Uncertainty-Aware Reward-Free Exploration with General Function Approximation},
  author    = {Zhang, Junkai and Zhang, Weitong and Zhou, Dongruo and Gu, Quanquan},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {60414--60445},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhang24ci/zhang24ci.pdf},
  url       = {https://proceedings.mlr.press/v235/zhang24ci.html}
}
APA
Zhang, J., Zhang, W., Zhou, D. & Gu, Q. (2024). Uncertainty-Aware Reward-Free Exploration with General Function Approximation. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:60414-60445. Available from https://proceedings.mlr.press/v235/zhang24ci.html.
