TempoRL: Learning When to Act

André Biedenkapp, Raghu Rajan, Frank Hutter, Marius Lindauer
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:914-924, 2021.

Abstract

Reinforcement learning is a powerful approach to learning behaviour through interactions with an environment. However, behaviours are usually learned in a purely reactive fashion, where an appropriate action is selected based on an observation. In this form, it is challenging to learn when it is necessary to execute new decisions. This makes learning inefficient, especially in environments that need various degrees of fine and coarse control. To address this, we propose a proactive setting in which the agent not only selects an action in a state but also for how long to commit to that action. Our TempoRL approach introduces skip connections between states and learns a skip-policy for repeating the same action along these skips. We demonstrate the effectiveness of TempoRL on a variety of traditional and deep RL environments, showing that our approach is capable of learning successful policies up to an order of magnitude faster than vanilla Q-learning.
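
For intuition, here is a minimal tabular sketch of the idea the abstract describes, assuming a hypothetical toy chain environment: alongside the usual action-value function Q (what to do), the agent learns a skip-value function (how long to keep doing it), and every executed skip also yields shorter sub-skip transitions ("skip connections") that train the skip values with n-step returns. The environment, names (env_step, SQ, MAX_SKIP), and hyperparameters below are our own illustrative assumptions, not the authors' reference implementation.

import numpy as np

# Tabular TempoRL-style skip Q-learning on a hypothetical toy chain
# (all names and hyperparameters here are illustrative assumptions).
N_STATES, N_ACTIONS, MAX_SKIP = 12, 2, 4        # skip lengths 1..MAX_SKIP
GAMMA, ALPHA, EPS = 0.99, 0.5, 0.1

Q = np.zeros((N_STATES, N_ACTIONS))             # action-values: *what* to do
SQ = np.zeros((N_STATES, N_ACTIONS, MAX_SKIP))  # skip-values: *how long* to do it

def env_step(s, a):
    """Deterministic chain: action 1 moves right, action 0 moves left;
    reward 1 only upon reaching the rightmost state."""
    s2 = min(max(s + (1 if a == 1 else -1), 0), N_STATES - 1)
    done = s2 == N_STATES - 1
    return s2, float(done), done

rng = np.random.default_rng(0)
for _ in range(300):                            # episodes
    s, done, steps = 0, False, 0
    while not done and steps < 200:
        # proactive decision: pick an action AND how long to commit to it
        a = rng.integers(N_ACTIONS) if rng.random() < EPS else int(Q[s].argmax())
        j = rng.integers(MAX_SKIP) if rng.random() < EPS else int(SQ[s, a].argmax())
        states, rewards = [s], []
        for _ in range(j + 1):                  # repeat `a` for j+1 steps
            s2, r, done = env_step(s, a)
            # ordinary one-step Q-learning update at every intermediate state
            target = r + (0.0 if done else GAMMA * Q[s2].max())
            Q[s, a] += ALPHA * (target - Q[s, a])
            states.append(s2)
            rewards.append(r)
            s = s2
            steps += 1
            if done:
                break
        # "skip connections": every sub-sequence of the executed skip is an
        # n-step transition that trains the skip-value function SQ
        for i in range(len(rewards)):
            G, disc = 0.0, 1.0
            for k in range(i, len(rewards)):
                G += disc * rewards[k]
                disc *= GAMMA
                terminal = done and k == len(rewards) - 1
                boot = 0.0 if terminal else disc * Q[states[k + 1]].max()
                SQ[states[i], a, k - i] += ALPHA * (G + boot - SQ[states[i], a, k - i])

Under these assumptions, long skips let reward from the goal propagate back over many states per decision, which is the mechanism behind the reported speed-up over vanilla one-step Q-learning.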

Cite this Paper

BibTeX
@InProceedings{pmlr-v139-biedenkapp21a,
  title     = {TempoRL: Learning When to Act},
  author    = {Biedenkapp, Andr{\'e} and Rajan, Raghu and Hutter, Frank and Lindauer, Marius},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {914--924},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/biedenkapp21a/biedenkapp21a.pdf},
  url       = {https://proceedings.mlr.press/v139/biedenkapp21a.html},
  abstract  = {Reinforcement learning is a powerful approach to learning behaviour through interactions with an environment. However, behaviours are usually learned in a purely reactive fashion, where an appropriate action is selected based on an observation. In this form, it is challenging to learn when it is necessary to execute new decisions. This makes learning inefficient, especially in environments that need various degrees of fine and coarse control. To address this, we propose a proactive setting in which the agent not only selects an action in a state but also for how long to commit to that action. Our TempoRL approach introduces skip connections between states and learns a skip-policy for repeating the same action along these skips. We demonstrate the effectiveness of TempoRL on a variety of traditional and deep RL environments, showing that our approach is capable of learning successful policies up to an order of magnitude faster than vanilla Q-learning.}
}
Endnote
%0 Conference Paper
%T TempoRL: Learning When to Act
%A André Biedenkapp
%A Raghu Rajan
%A Frank Hutter
%A Marius Lindauer
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-biedenkapp21a
%I PMLR
%P 914--924
%U https://proceedings.mlr.press/v139/biedenkapp21a.html
%V 139
%X Reinforcement learning is a powerful approach to learning behaviour through interactions with an environment. However, behaviours are usually learned in a purely reactive fashion, where an appropriate action is selected based on an observation. In this form, it is challenging to learn when it is necessary to execute new decisions. This makes learning inefficient, especially in environments that need various degrees of fine and coarse control. To address this, we propose a proactive setting in which the agent not only selects an action in a state but also for how long to commit to that action. Our TempoRL approach introduces skip connections between states and learns a skip-policy for repeating the same action along these skips. We demonstrate the effectiveness of TempoRL on a variety of traditional and deep RL environments, showing that our approach is capable of learning successful policies up to an order of magnitude faster than vanilla Q-learning.
APA
Biedenkapp, A., Rajan, R., Hutter, F. & Lindauer, M. (2021). TempoRL: Learning When to Act. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:914-924. Available from https://proceedings.mlr.press/v139/biedenkapp21a.html.
