Configurable Markov Decision Processes

Alberto Maria Metelli, Mirco Mutti, Marcello Restelli
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:3491-3500, 2018.

Abstract

In many real-world problems, there is the possibility to configure, to a limited extent, some environmental parameters to improve the performance of a learning agent. In this paper, we propose a novel framework, Configurable Markov Decision Processes (Conf-MDPs), to model this new type of interaction with the environment. Furthermore, we provide a new learning algorithm, Safe Policy-Model Iteration (SPMI), to jointly and adaptively optimize the policy and the environment configuration. After having introduced our approach and derived some theoretical results, we present the experimental evaluation in two explicative problems to show the benefits of the environment configurability on the performance of the learned policy.
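To make the Conf-MDP idea concrete, the sketch below shows a naive alternating scheme on a toy 2-state, 2-action problem: greedily improve the policy under the current transition model, then switch to whichever candidate environment configuration the current policy values most. This is only an illustrative sketch, not the paper's SPMI algorithm (which performs safe, bound-based updates of both policy and model); the rewards, transition kernels, and the alternating greedy loop are all assumptions made up for this example.

```python
# Hypothetical toy Conf-MDP: both the policy and the transition model are
# chosen to maximize return. NOT the authors' SPMI algorithm; a naive
# alternating greedy scheme for illustration only.
import numpy as np

n_states, n_actions, gamma = 2, 2, 0.9
# Assumed reward: only (state 0, action 1) pays off.
R = np.array([[0.0, 1.0],
              [0.0, 0.0]])

def evaluate(policy, P):
    """Exact policy evaluation: solve (I - gamma * P_pi) v = r_pi."""
    P_pi = np.einsum('sa,sat->st', policy, P)   # state-to-state kernel
    r_pi = np.einsum('sa,sa->s', policy, R)     # expected reward per state
    return np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)

# Two candidate environment configurations P[s, a, s'] (assumed values).
P_configs = [
    np.array([[[0.9, 0.1], [0.1, 0.9]],
              [[0.9, 0.1], [0.1, 0.9]]]),
    np.array([[[0.1, 0.9], [0.9, 0.1]],
              [[0.1, 0.9], [0.9, 0.1]]]),
]

policy = np.full((n_states, n_actions), 0.5)    # start uniform
config = 0
for _ in range(20):
    # Policy improvement step under the current configuration.
    P = P_configs[config]
    v = evaluate(policy, P)
    q = R + gamma * np.einsum('sat,t->sa', P, v)
    policy = np.eye(n_actions)[q.argmax(axis=1)]    # greedy one-hot policy
    # Model improvement step: pick the configuration the policy prefers.
    config = int(np.argmax([evaluate(policy, Pc).mean() for Pc in P_configs]))

print("chosen configuration:", config)
print("state values:", evaluate(policy, P_configs[config]))
```

Here the second configuration lets the agent remain in the rewarding state, so jointly optimizing the model alongside the policy yields a strictly higher value than policy optimization alone would achieve under the initial configuration, which is the benefit the paper's experiments illustrate.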

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-metelli18a,
  title     = {Configurable {M}arkov Decision Processes},
  author    = {Metelli, Alberto Maria and Mutti, Mirco and Restelli, Marcello},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {3491--3500},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/metelli18a/metelli18a.pdf},
  url       = {https://proceedings.mlr.press/v80/metelli18a.html},
  abstract  = {In many real-world problems, there is the possibility to configure, to a limited extent, some environmental parameters to improve the performance of a learning agent. In this paper, we propose a novel framework, Configurable Markov Decision Processes (Conf-MDPs), to model this new type of interaction with the environment. Furthermore, we provide a new learning algorithm, Safe Policy-Model Iteration (SPMI), to jointly and adaptively optimize the policy and the environment configuration. After having introduced our approach and derived some theoretical results, we present the experimental evaluation in two explicative problems to show the benefits of the environment configurability on the performance of the learned policy.}
}
Endnote
%0 Conference Paper
%T Configurable Markov Decision Processes
%A Alberto Maria Metelli
%A Mirco Mutti
%A Marcello Restelli
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-metelli18a
%I PMLR
%P 3491--3500
%U https://proceedings.mlr.press/v80/metelli18a.html
%V 80
%X In many real-world problems, there is the possibility to configure, to a limited extent, some environmental parameters to improve the performance of a learning agent. In this paper, we propose a novel framework, Configurable Markov Decision Processes (Conf-MDPs), to model this new type of interaction with the environment. Furthermore, we provide a new learning algorithm, Safe Policy-Model Iteration (SPMI), to jointly and adaptively optimize the policy and the environment configuration. After having introduced our approach and derived some theoretical results, we present the experimental evaluation in two explicative problems to show the benefits of the environment configurability on the performance of the learned policy.
APA
Metelli, A.M., Mutti, M. & Restelli, M. (2018). Configurable Markov Decision Processes. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:3491-3500. Available from https://proceedings.mlr.press/v80/metelli18a.html.