Principled Exploration via Optimistic Bootstrapping and Backward Induction

Chenjia Bai, Lingxiao Wang, Lei Han, Jianye Hao, Animesh Garg, Peng Liu, Zhaoran Wang
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:577-587, 2021.

Abstract

One principled approach for provably efficient exploration is to incorporate the upper confidence bound (UCB) into the value function as a bonus. However, UCB is tailored to linear and tabular settings and is incompatible with Deep Reinforcement Learning (DRL). In this paper, we propose a principled exploration method for DRL through Optimistic Bootstrapping and Backward Induction (OB2I). OB2I constructs a general-purpose UCB-bonus through non-parametric bootstrap in DRL. The UCB-bonus estimates the epistemic uncertainty of state-action pairs for optimistic exploration. We build theoretical connections between the proposed UCB-bonus and LSVI-UCB in the linear setting. We propagate future uncertainty in a time-consistent manner through an episodic backward update, which exploits the theoretical advantage and empirically improves sample efficiency. Our experiments on an MNIST maze and the Atari suite suggest that OB2I outperforms several state-of-the-art exploration approaches.
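To make the abstract's two ingredients concrete, below is a minimal Python sketch, not the authors' implementation: a UCB-style bonus taken as the disagreement of a bootstrapped ensemble of Q-heads, and a backward sweep over an episode so that bonuses at later steps are folded into earlier targets. For reference, the analogous LSVI-UCB bonus in the linear setting is beta * sqrt(phi(s,a)^T Lambda^{-1} phi(s,a)), where phi is the feature map and Lambda the regularized Gram matrix; the ensemble standard deviation plays that role when no features are available. The names `q_heads`, `ucb_bonus`, and `backward_targets` are hypothetical, and the backward pass is a simplification of the paper's episodic backward update.

```python
import numpy as np

# Minimal sketch of the ideas in the abstract, not the authors' code.
# `q_heads` is a hypothetical list of K callables, each mapping a state
# to a vector of Q-values (one entry per action), e.g. K bootstrapped
# network heads trained on resampled data.

def ucb_bonus(q_heads, state, beta=1.0):
    """UCB-style bonus: the ensemble's standard deviation serves as an
    estimate of epistemic uncertainty for each action."""
    qs = np.stack([q(state) for q in q_heads])  # shape (K, num_actions)
    return beta * qs.std(axis=0)

def optimistic_action(q_heads, state, beta=1.0):
    """Act greedily with respect to mean Q plus the bonus (optimism in
    the face of uncertainty)."""
    qs = np.stack([q(state) for q in q_heads])
    return int(np.argmax(qs.mean(axis=0) + beta * qs.std(axis=0)))

def backward_targets(episode, gamma=0.99):
    """Episodic backward sweep. `episode` holds transitions of the form
    (state, action, reward_plus_bonus); sweeping from the last step to
    the first folds uncertainty bonuses from later steps into earlier
    targets in a single pass, so future uncertainty propagates backward
    in a time-consistent way. (Simplified: the paper combines such
    targets with bootstrapped value estimates.)"""
    targets, value = [], 0.0
    for state, action, reward_plus_bonus in reversed(episode):
        value = reward_plus_bonus + gamma * value
        targets.append((state, action, value))
    return list(reversed(targets))
```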

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-bai21d,
  title     = {Principled Exploration via Optimistic Bootstrapping and Backward Induction},
  author    = {Bai, Chenjia and Wang, Lingxiao and Han, Lei and Hao, Jianye and Garg, Animesh and Liu, Peng and Wang, Zhaoran},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {577--587},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/bai21d/bai21d.pdf},
  url       = {https://proceedings.mlr.press/v139/bai21d.html},
  abstract  = {One principled approach for provably efficient exploration is to incorporate the upper confidence bound (UCB) into the value function as a bonus. However, UCB is tailored to linear and tabular settings and is incompatible with Deep Reinforcement Learning (DRL). In this paper, we propose a principled exploration method for DRL through Optimistic Bootstrapping and Backward Induction (OB2I). OB2I constructs a general-purpose UCB-bonus through non-parametric bootstrap in DRL. The UCB-bonus estimates the epistemic uncertainty of state-action pairs for optimistic exploration. We build theoretical connections between the proposed UCB-bonus and LSVI-UCB in the linear setting. We propagate future uncertainty in a time-consistent manner through an episodic backward update, which exploits the theoretical advantage and empirically improves sample efficiency. Our experiments on an MNIST maze and the Atari suite suggest that OB2I outperforms several state-of-the-art exploration approaches.}
}
Endnote
%0 Conference Paper
%T Principled Exploration via Optimistic Bootstrapping and Backward Induction
%A Chenjia Bai
%A Lingxiao Wang
%A Lei Han
%A Jianye Hao
%A Animesh Garg
%A Peng Liu
%A Zhaoran Wang
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-bai21d
%I PMLR
%P 577--587
%U https://proceedings.mlr.press/v139/bai21d.html
%V 139
%X One principled approach for provably efficient exploration is to incorporate the upper confidence bound (UCB) into the value function as a bonus. However, UCB is tailored to linear and tabular settings and is incompatible with Deep Reinforcement Learning (DRL). In this paper, we propose a principled exploration method for DRL through Optimistic Bootstrapping and Backward Induction (OB2I). OB2I constructs a general-purpose UCB-bonus through non-parametric bootstrap in DRL. The UCB-bonus estimates the epistemic uncertainty of state-action pairs for optimistic exploration. We build theoretical connections between the proposed UCB-bonus and LSVI-UCB in the linear setting. We propagate future uncertainty in a time-consistent manner through an episodic backward update, which exploits the theoretical advantage and empirically improves sample efficiency. Our experiments on an MNIST maze and the Atari suite suggest that OB2I outperforms several state-of-the-art exploration approaches.
APA
Bai, C., Wang, L., Han, L., Hao, J., Garg, A., Liu, P. & Wang, Z. (2021). Principled Exploration via Optimistic Bootstrapping and Backward Induction. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:577-587. Available from https://proceedings.mlr.press/v139/bai21d.html.
