Dead-ends and Secure Exploration in Reinforcement Learning

Mehdi Fatemi, Shikhar Sharma, Harm Van Seijen, Samira Ebrahimi Kahou
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:1873-1881, 2019.

Abstract

Many interesting applications of reinforcement learning (RL) involve MDPs that include numerous “dead-end” states. Upon reaching a dead-end state, the agent continues to interact with the environment in a dead-end trajectory before reaching an undesired terminal state, regardless of which actions are chosen. The situation is even worse when the existence of many dead-end states is coupled with positive rewards that are distant from any initial state (we term this the Bridge Effect). Hence, conventional exploration techniques often incur prohibitively many training steps before convergence. To deal with the bridge effect, we propose a condition for exploration, called security. We next establish formal results that translate the security condition into the learning problem of an auxiliary value function. This new value function is used to cap “any” given exploration policy and is guaranteed to make it secure. As a special case, we use this theory to introduce secure random-walk. We next extend our results to the deep RL setting by identifying and addressing two main challenges that arise. Finally, we empirically compare secure random-walk with standard benchmarks in two sets of experiments, including the Atari game of Montezuma’s Revenge.
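
As a rough illustration of the capping idea described above, the sketch below caps a uniform random-walk policy with a per-action auxiliary value (assumed here to lie in [-1, 0], with values near -1 meaning the action almost surely enters a dead-end) and redistributes the removed probability mass to actions with remaining slack. The function name, the value range, and the exact capping and redistribution rule are illustrative assumptions based only on the abstract, not the paper's formulation.

    import numpy as np

    def secure_random_walk_probs(q_dead):
        # q_dead[a]: assumed auxiliary value in [-1, 0]; values near -1 mean
        # action a almost surely leads into a dead-end trajectory (illustrative only).
        caps = np.clip(1.0 + np.asarray(q_dead, dtype=float), 0.0, 1.0)
        n = len(caps)
        if caps.sum() <= 0.0:
            # Every action looks like a certain dead-end: fall back to uniform.
            return np.full(n, 1.0 / n)
        if caps.sum() <= 1.0:
            # Caps leave no slack for a full distribution: normalize the caps.
            return caps / caps.sum()
        # Start from a uniform random walk, enforce the per-action caps, then
        # move the removed probability mass to actions that still have slack.
        probs = np.minimum(np.full(n, 1.0 / n), caps)
        slack = caps - probs
        deficit = 1.0 - probs.sum()
        if deficit > 0.0:
            probs = probs + slack * (deficit / slack.sum())
        return probs

    # Example: the third action looks like a near-certain dead-end, so its
    # exploration probability is suppressed well below the uniform 0.25.
    print(secure_random_walk_probs([-0.1, -0.2, -0.95, 0.0]))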

Cite this Paper

BibTeX
@InProceedings{pmlr-v97-fatemi19a,
  title     = {Dead-ends and Secure Exploration in Reinforcement Learning},
  author    = {Fatemi, Mehdi and Sharma, Shikhar and Van Seijen, Harm and Kahou, Samira Ebrahimi},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {1873--1881},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/fatemi19a/fatemi19a.pdf},
  url       = {https://proceedings.mlr.press/v97/fatemi19a.html},
  abstract  = {Many interesting applications of reinforcement learning (RL) involve MDPs that include numerous “dead-end” states. Upon reaching a dead-end state, the agent continues to interact with the environment in a dead-end trajectory before reaching an undesired terminal state, regardless of which actions are chosen. The situation is even worse when the existence of many dead-end states is coupled with positive rewards that are distant from any initial state (we term this the Bridge Effect). Hence, conventional exploration techniques often incur prohibitively many training steps before convergence. To deal with the bridge effect, we propose a condition for exploration, called security. We next establish formal results that translate the security condition into the learning problem of an auxiliary value function. This new value function is used to cap “any” given exploration policy and is guaranteed to make it secure. As a special case, we use this theory to introduce secure random-walk. We next extend our results to the deep RL setting by identifying and addressing two main challenges that arise. Finally, we empirically compare secure random-walk with standard benchmarks in two sets of experiments, including the Atari game of Montezuma’s Revenge.}
}
Endnote
%0 Conference Paper
%T Dead-ends and Secure Exploration in Reinforcement Learning
%A Mehdi Fatemi
%A Shikhar Sharma
%A Harm Van Seijen
%A Samira Ebrahimi Kahou
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-fatemi19a
%I PMLR
%P 1873--1881
%U https://proceedings.mlr.press/v97/fatemi19a.html
%V 97
%X Many interesting applications of reinforcement learning (RL) involve MDPs that include numerous “dead-end” states. Upon reaching a dead-end state, the agent continues to interact with the environment in a dead-end trajectory before reaching an undesired terminal state, regardless of which actions are chosen. The situation is even worse when the existence of many dead-end states is coupled with positive rewards that are distant from any initial state (we term this the Bridge Effect). Hence, conventional exploration techniques often incur prohibitively many training steps before convergence. To deal with the bridge effect, we propose a condition for exploration, called security. We next establish formal results that translate the security condition into the learning problem of an auxiliary value function. This new value function is used to cap “any” given exploration policy and is guaranteed to make it secure. As a special case, we use this theory to introduce secure random-walk. We next extend our results to the deep RL setting by identifying and addressing two main challenges that arise. Finally, we empirically compare secure random-walk with standard benchmarks in two sets of experiments, including the Atari game of Montezuma’s Revenge.
APA
Fatemi, M., Sharma, S., Van Seijen, H. & Kahou, S. E. (2019). Dead-ends and Secure Exploration in Reinforcement Learning. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:1873-1881. Available from https://proceedings.mlr.press/v97/fatemi19a.html.