Discovering and Removing Exogenous State Variables and Rewards for Reinforcement Learning

Thomas Dietterich, George Trimponias, Zhitang Chen
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:1262-1270, 2018.

Abstract

Exogenous state variables and rewards can slow down reinforcement learning by injecting uncontrolled variation into the reward signal. We formalize exogenous state variables and rewards and identify conditions under which an MDP with exogenous state can be decomposed into an exogenous Markov Reward Process involving only the exogenous state+reward and an endogenous Markov Decision Process defined with respect to only the endogenous rewards. We also derive a variance-covariance condition under which Monte Carlo policy evaluation on the endogenous MDP is accelerated compared to using the full MDP. Similar speedups are likely to carry over to all RL algorithms. We develop two algorithms for discovering the exogenous variables and test them on several MDPs. Results show that the algorithms are practical and can significantly speed up reinforcement learning.
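To make the decomposition described above concrete, here is a minimal sketch in assumed notation (the symbols x, e, R_end, and R_exo, and the particular factorization shown, are illustrative; the paper states the precise conditions): the state is split into an endogenous part x and an exogenous part e whose dynamics the agent's action cannot influence, and the reward splits additively.

\[
  s = (x, e), \qquad
  P(x', e' \mid x, e, a) = P(x' \mid x, a)\, P(e' \mid e), \qquad
  R(x, e, a) = R_{\mathrm{end}}(x, a) + R_{\mathrm{exo}}(e)
\]

Under a factorization of this form, the pair (e, R_exo) is a Markov reward process that no policy can affect, while (x, R_end) is a smaller endogenous MDP; for policies that depend only on x, the value functions add, V(x, e) = V_end(x) + V_exo(e), so working with the endogenous MDP alone discards the uncontrolled exogenous reward variation that the abstract identifies as the source of slow learning.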

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-dietterich18a,
  title     = {Discovering and Removing Exogenous State Variables and Rewards for Reinforcement Learning},
  author    = {Dietterich, Thomas and Trimponias, George and Chen, Zhitang},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {1262--1270},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/dietterich18a/dietterich18a.pdf},
  url       = {https://proceedings.mlr.press/v80/dietterich18a.html}
}
Endnote
%0 Conference Paper
%T Discovering and Removing Exogenous State Variables and Rewards for Reinforcement Learning
%A Thomas Dietterich
%A George Trimponias
%A Zhitang Chen
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-dietterich18a
%I PMLR
%P 1262--1270
%U https://proceedings.mlr.press/v80/dietterich18a.html
%V 80
APA
Dietterich, T., Trimponias, G. & Chen, Z. (2018). Discovering and Removing Exogenous State Variables and Rewards for Reinforcement Learning. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:1262-1270. Available from https://proceedings.mlr.press/v80/dietterich18a.html.
