Optimizing for the Future in Non-Stationary MDPs

Yash Chandak, Georgios Theocharous, Shiv Shankar, Martha White, Sridhar Mahadevan, Philip Thomas
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:1414-1425, 2020.

Abstract

Most reinforcement learning methods are based upon the key assumption that the transition dynamics and reward functions are fixed, that is, the underlying Markov decision process is stationary. However, in many real-world applications, this assumption is violated, and using existing algorithms may result in a performance lag. To proactively search for a good future policy, we present a policy gradient algorithm that maximizes a forecast of future performance. This forecast is obtained by fitting a curve to the counter-factual estimates of policy performance over time, without explicitly modeling the underlying non-stationarity. The resulting algorithm amounts to a non-uniform reweighting of past data, and we observe that minimizing performance over some of the data from past episodes can be beneficial when searching for a policy that maximizes future performance. We show that our algorithm, called Prognosticator, is more robust to non-stationarity than two online adaptation techniques, on three simulated problems motivated by real-world applications.
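To make the idea in the abstract concrete, here is a minimal NumPy sketch of the general recipe it describes: estimate the current policy's performance on each past episode counterfactually via importance sampling, fit a curve to those estimates over time, and ascend the gradient of the extrapolated (forecasted) performance. This is an illustrative sketch, not the paper's Prognosticator algorithm: it assumes ordinary least-squares polynomial extrapolation, full-trajectory importance sampling, and a hypothetical policy object exposing params, prob(s, a), and grad_log_prob(s, a).

import numpy as np

def forecast_weights(num_episodes, degree=1):
    # Least-squares weights w such that w @ values extrapolates the fitted
    # polynomial-in-time curve one step past the last observed episode.
    t = np.arange(1, num_episodes + 1, dtype=float)
    X = np.vander(t, degree + 1, increasing=True)                 # (k, d+1)
    x_next = np.vander(np.array([num_episodes + 1.0]),
                       degree + 1, increasing=True)               # (1, d+1)
    # forecast = x_next (X^T X)^{-1} X^T values, i.e. a fixed reweighting.
    return (x_next @ np.linalg.pinv(X)).ravel()                   # (k,)

def forecasted_performance_gradient(episodes, policy, degree=1):
    # `episodes`: list of (states, actions, rewards, behavior_probs),
    # oldest first. Per-episode performance of the *current* policy is
    # estimated counterfactually with importance sampling, then the
    # forecast (a weighted sum of those estimates) is differentiated.
    w = forecast_weights(len(episodes), degree)
    grad = np.zeros_like(policy.params)
    for weight, (states, actions, rewards, behavior_probs) in zip(w, episodes):
        rho = np.prod([policy.prob(s, a) / b
                       for s, a, b in zip(states, actions, behavior_probs)])
        G = np.sum(rewards)
        score = np.sum([policy.grad_log_prob(s, a)
                        for s, a in zip(states, actions)], axis=0)
        grad += weight * rho * G * score
    return grad  # ascend this to maximize forecasted, not average, performance

Note that the extrapolation weights w are non-uniform and some can be negative, which is one way to read the abstract's observation that minimizing performance on some past episodes can help when maximizing future performance.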

Cite this Paper

BibTeX
@InProceedings{pmlr-v119-chandak20a,
  title     = {Optimizing for the Future in Non-Stationary {MDP}s},
  author    = {Chandak, Yash and Theocharous, Georgios and Shankar, Shiv and White, Martha and Mahadevan, Sridhar and Thomas, Philip},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {1414--1425},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/chandak20a/chandak20a.pdf},
  url       = {https://proceedings.mlr.press/v119/chandak20a.html},
  abstract  = {Most reinforcement learning methods are based upon the key assumption that the transition dynamics and reward functions are fixed, that is, the underlying Markov decision process is stationary. However, in many real-world applications, this assumption is violated, and using existing algorithms may result in a performance lag. To proactively search for a good future policy, we present a policy gradient algorithm that maximizes a forecast of future performance. This forecast is obtained by fitting a curve to the counter-factual estimates of policy performance over time, without explicitly modeling the underlying non-stationarity. The resulting algorithm amounts to a non-uniform reweighting of past data, and we observe that minimizing performance over some of the data from past episodes can be beneficial when searching for a policy that maximizes future performance. We show that our algorithm, called Prognosticator, is more robust to non-stationarity than two online adaptation techniques, on three simulated problems motivated by real-world applications.}
}
Endnote
%0 Conference Paper
%T Optimizing for the Future in Non-Stationary MDPs
%A Yash Chandak
%A Georgios Theocharous
%A Shiv Shankar
%A Martha White
%A Sridhar Mahadevan
%A Philip Thomas
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-chandak20a
%I PMLR
%P 1414--1425
%U https://proceedings.mlr.press/v119/chandak20a.html
%V 119
%X Most reinforcement learning methods are based upon the key assumption that the transition dynamics and reward functions are fixed, that is, the underlying Markov decision process is stationary. However, in many real-world applications, this assumption is violated, and using existing algorithms may result in a performance lag. To proactively search for a good future policy, we present a policy gradient algorithm that maximizes a forecast of future performance. This forecast is obtained by fitting a curve to the counter-factual estimates of policy performance over time, without explicitly modeling the underlying non-stationarity. The resulting algorithm amounts to a non-uniform reweighting of past data, and we observe that minimizing performance over some of the data from past episodes can be beneficial when searching for a policy that maximizes future performance. We show that our algorithm, called Prognosticator, is more robust to non-stationarity than two online adaptation techniques, on three simulated problems motivated by real-world applications.
APA
Chandak, Y., Theocharous, G., Shankar, S., White, M., Mahadevan, S., & Thomas, P. (2020). Optimizing for the Future in Non-Stationary MDPs. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:1414-1425. Available from https://proceedings.mlr.press/v119/chandak20a.html.