Generalization and Exploration via Randomized Value Functions

Ian Osband, Benjamin Van Roy, Zheng Wen
Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:2377-2386, 2016.

Abstract

We propose randomized least-squares value iteration (RLSVI) – a new reinforcement learning algorithm designed to explore and generalize efficiently via linearly parameterized value functions. We explain why versions of least-squares value iteration that use Boltzmann or epsilon-greedy exploration can be highly inefficient, and we present computational results that demonstrate dramatic efficiency gains enjoyed by RLSVI. Further, we establish an upper bound on the expected regret of RLSVI that demonstrates near-optimality in a tabula rasa learning context. More broadly, our results suggest that randomized value functions offer a promising approach to tackling a critical challenge in reinforcement learning: synthesizing efficient exploration and effective generalization.
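To make the idea in the abstract concrete, here is a minimal sketch of one round of episodic RLSVI, assuming a finite-horizon MDP with a linear feature map. This is not the authors' code: the function name, buffer layout, and default parameters are illustrative, and it follows the standard presentation of RLSVI (regularized least squares on bootstrapped targets, then a Gaussian posterior draw of the weights).

```python
import numpy as np

def rlsvi_episode(buffers, d, H, sigma=1.0, lam=1.0):
    """Sketch of one randomized least-squares value iteration round.

    buffers[h] : list of transitions (phi, r, phi_next_all) observed at step h,
                 where phi is the d-dim feature vector of the chosen (s, a) and
                 phi_next_all stacks features of all actions at the next state
                 (None if the next state is terminal).
    Returns sampled weight vectors theta[h], one per step, to act greedily on.
    """
    theta = [np.zeros(d) for _ in range(H + 1)]  # theta[H] = 0 at the horizon
    for h in reversed(range(H)):
        data = buffers[h]
        if not data:
            theta[h] = np.random.randn(d) / np.sqrt(lam)  # draw from the prior
            continue
        Phi = np.stack([phi for phi, _, _ in data])       # (n, d) design matrix
        # Bootstrapped regression targets: r + max_a phi(s', a)^T theta[h+1]
        y = np.array([r + (nxt @ theta[h + 1]).max() if nxt is not None else r
                      for _, r, nxt in data])
        # Posterior of a Bayesian linear regression (noise sigma^2, prior lam)
        cov = np.linalg.inv(Phi.T @ Phi / sigma**2 + lam * np.eye(d))
        mean = cov @ Phi.T @ y / sigma**2
        # The key step: sample the value-function weights rather than
        # using the least-squares point estimate.
        theta[h] = np.random.multivariate_normal(mean, cov)
    return theta

# Acting in the episode: at step h in state s, choose
# a = argmax_a phi(s, a) @ theta[h].
```

Acting greedily with respect to the sampled theta[h] is what drives exploration here: randomness enters through the value function itself rather than through dithering on individual actions, which is the contrast the abstract draws with Boltzmann and epsilon-greedy exploration.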

Cite this Paper


BibTeX
@InProceedings{pmlr-v48-osband16,
  title     = {Generalization and Exploration via Randomized Value Functions},
  author    = {Ian Osband and Benjamin Van Roy and Zheng Wen},
  booktitle = {Proceedings of The 33rd International Conference on Machine Learning},
  pages     = {2377--2386},
  year      = {2016},
  editor    = {Maria Florina Balcan and Kilian Q. Weinberger},
  volume    = {48},
  series    = {Proceedings of Machine Learning Research},
  address   = {New York, New York, USA},
  month     = {20--22 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v48/osband16.pdf},
  url       = {http://proceedings.mlr.press/v48/osband16.html},
  abstract  = {We propose randomized least-squares value iteration (RLSVI) – a new reinforcement learning algorithm designed to explore and generalize efficiently via linearly parameterized value functions. We explain why versions of least-squares value iteration that use Boltzmann or epsilon-greedy exploration can be highly inefficient, and we present computational results that demonstrate dramatic efficiency gains enjoyed by RLSVI. Further, we establish an upper bound on the expected regret of RLSVI that demonstrates near-optimality in a tabula rasa learning context. More broadly, our results suggest that randomized value functions offer a promising approach to tackling a critical challenge in reinforcement learning: synthesizing efficient exploration and effective generalization.}
}
Endnote
%0 Conference Paper
%T Generalization and Exploration via Randomized Value Functions
%A Ian Osband
%A Benjamin Van Roy
%A Zheng Wen
%B Proceedings of The 33rd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2016
%E Maria Florina Balcan
%E Kilian Q. Weinberger
%F pmlr-v48-osband16
%I PMLR
%J Proceedings of Machine Learning Research
%P 2377--2386
%U http://proceedings.mlr.press
%V 48
%W PMLR
%X We propose randomized least-squares value iteration (RLSVI) – a new reinforcement learning algorithm designed to explore and generalize efficiently via linearly parameterized value functions. We explain why versions of least-squares value iteration that use Boltzmann or epsilon-greedy exploration can be highly inefficient, and we present computational results that demonstrate dramatic efficiency gains enjoyed by RLSVI. Further, we establish an upper bound on the expected regret of RLSVI that demonstrates near-optimality in a tabula rasa learning context. More broadly, our results suggest that randomized value functions offer a promising approach to tackling a critical challenge in reinforcement learning: synthesizing efficient exploration and effective generalization.
RIS
TY  - CPAPER
TI  - Generalization and Exploration via Randomized Value Functions
AU  - Ian Osband
AU  - Benjamin Van Roy
AU  - Zheng Wen
BT  - Proceedings of The 33rd International Conference on Machine Learning
PY  - 2016/06/11
DA  - 2016/06/11
ED  - Maria Florina Balcan
ED  - Kilian Q. Weinberger
ID  - pmlr-v48-osband16
PB  - PMLR
SP  - 2377
DP  - PMLR
EP  - 2386
L1  - http://proceedings.mlr.press/v48/osband16.pdf
UR  - http://proceedings.mlr.press/v48/osband16.html
AB  - We propose randomized least-squares value iteration (RLSVI) – a new reinforcement learning algorithm designed to explore and generalize efficiently via linearly parameterized value functions. We explain why versions of least-squares value iteration that use Boltzmann or epsilon-greedy exploration can be highly inefficient, and we present computational results that demonstrate dramatic efficiency gains enjoyed by RLSVI. Further, we establish an upper bound on the expected regret of RLSVI that demonstrates near-optimality in a tabula rasa learning context. More broadly, our results suggest that randomized value functions offer a promising approach to tackling a critical challenge in reinforcement learning: synthesizing efficient exploration and effective generalization.
ER  -
APA
Osband, I., Van Roy, B. & Wen, Z. (2016). Generalization and Exploration via Randomized Value Functions. Proceedings of The 33rd International Conference on Machine Learning, in PMLR 48:2377-2386.