PAC-inspired Option Discovery in Lifelong Reinforcement Learning

Emma Brunskill, Lihong Li
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(2):316-324, 2014.

Abstract

A key goal of AI is to create lifelong learning agents that can leverage prior experience to improve performance on later tasks. In reinforcement-learning problems, one way to summarize prior experience for future use is through options, which are temporally extended actions (subpolicies) for how to behave. Options can then be used to potentially accelerate learning in new reinforcement learning tasks. In this work, we provide the first formal analysis of the sample complexity, a measure of learning speed, of reinforcement learning with options. This analysis helps shed light on some interesting prior empirical results on when and how options may accelerate learning. We then quantify the benefit of options in reducing sample complexity of a lifelong learning agent. Finally, the new theoretical insights inspire a novel option-discovery algorithm that aims at minimizing overall sample complexity in lifelong reinforcement learning.
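For readers unfamiliar with the terminology in the abstract: the "options" referred to are the temporally extended actions of Sutton, Precup and Singh (1999), each consisting of an initiation set, a sub-policy over primitive actions, and a termination condition; "sample complexity" is used in the PAC-MDP sense (Kakade, 2003), roughly the number of timesteps t on which the agent's current policy is more than \epsilon worse than optimal, i.e. the size of the set \{ t : V^{A_t}(s_t) < V^*(s_t) - \epsilon \}. The sketch below illustrates only the standard options interface, not the paper's option-discovery algorithm; the Option class, the run_option helper, and the env.step(action) -> (state, reward, done) interface are assumptions made for the example.

    import random
    from dataclasses import dataclass
    from typing import Callable, Set

    @dataclass
    class Option:
        """A temporally extended action: where it can start, how it acts, when it stops."""
        initiation_set: Set[int]                  # states in which the option may be invoked
        policy: Callable[[int], int]              # sub-policy mapping state -> primitive action
        termination_prob: Callable[[int], float]  # beta(s): probability of terminating in state s

    def run_option(env, state, option, max_steps=1000):
        """Execute an option as a single temporally extended action.

        Returns the state reached, the total reward collected, and the number of
        primitive steps consumed (the quantity counted by sample complexity)."""
        assert state in option.initiation_set
        total_reward, steps = 0.0, 0
        while steps < max_steps:
            action = option.policy(state)
            state, reward, done = env.step(action)  # assumed environment interface
            total_reward += reward
            steps += 1
            if done or random.random() < option.termination_prob(state):
                break
        return state, total_reward, steps

An agent learning with such options chooses among them (and possibly the primitive actions) at decision points; reusing options discovered in earlier tasks is how the paper proposes to reduce the samples needed to learn a new task.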

Cite this Paper


BibTeX
@InProceedings{pmlr-v32-brunskill14,
  title     = {PAC-inspired Option Discovery in Lifelong Reinforcement Learning},
  author    = {Brunskill, Emma and Li, Lihong},
  booktitle = {Proceedings of the 31st International Conference on Machine Learning},
  pages     = {316--324},
  year      = {2014},
  editor    = {Xing, Eric P. and Jebara, Tony},
  volume    = {32},
  number    = {2},
  series    = {Proceedings of Machine Learning Research},
  address   = {Beijing, China},
  month     = {22--24 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v32/brunskill14.pdf},
  url       = {https://proceedings.mlr.press/v32/brunskill14.html}
}
APA
Brunskill, E. & Li, L. (2014). PAC-inspired Option Discovery in Lifelong Reinforcement Learning. Proceedings of the 31st International Conference on Machine Learning, in Proceedings of Machine Learning Research 32(2):316-324. Available from https://proceedings.mlr.press/v32/brunskill14.html.

Related Material

Download PDF: http://proceedings.mlr.press/v32/brunskill14.pdf