Delayed Reinforcement Learning by Imitation

Pierre Liotet, Davide Maran, Lorenzo Bisi, Marcello Restelli
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:13528-13556, 2022.

Abstract

When the agent’s observations or interactions are delayed, classic reinforcement learning tools usually fail. In this paper, we propose a simple, new, and efficient solution to this problem. We assume that an efficient policy for the undelayed environment is known or can easily be learnt, but that the task suffers from delays in practice, which we therefore want to take into account. We present a novel algorithm, Delayed Imitation with Dataset Aggregation (DIDA), which builds upon imitation learning methods to learn how to act in a delayed environment from undelayed demonstrations. We provide a theoretical analysis of the approach, which guides the practical design of DIDA. These results are also of general interest to the delayed reinforcement learning literature, as they provide bounds on the performance gap between delayed and undelayed tasks under smoothness conditions. We show empirically that DIDA achieves high performance with remarkable sample efficiency on a variety of tasks, including robotic locomotion, classic control, and trading.
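To make the high-level description above concrete, here is a minimal, self-contained sketch of the kind of DAgger-style loop the abstract alludes to: an undelayed expert labels the true current state, while the learner acts from an augmented state consisting of the delayed observation plus the actions taken since. Everything in this sketch is an illustrative assumption rather than the paper's implementation: the toy 1-D dynamics, the linear least-squares policy, the constant `DELAY`, and helper names such as `expert` and `augment` are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
DELAY = 3        # hypothetical observation delay (in steps)
HORIZON = 50
N_ITER = 5       # DAgger-style outer iterations
EPISODES = 20    # rollouts per iteration

def expert(s):
    """Undelayed expert: drives the 1-D state toward zero."""
    return -0.5 * s

def step(s, a):
    """Toy linear dynamics with small noise."""
    return s + a + 0.01 * rng.normal()

def augment(obs, actions):
    """Augmented state: delayed observation + last DELAY actions (zero-padded)."""
    last = actions[-DELAY:]
    pad = [0.0] * (DELAY - len(last)) + list(last)
    return np.array([obs] + pad)

theta = np.zeros(DELAY + 1)  # linear policy on the augmented state
X, Y = [], []                # aggregated dataset across iterations

for _ in range(N_ITER):
    for _ in range(EPISODES):
        states, actions = [rng.normal()], []
        for t in range(HORIZON):
            obs = states[max(t - DELAY, 0)]   # agent sees a stale state
            x = augment(obs, actions)
            a = float(x @ theta)              # learner acts in the delayed task
            X.append(x)
            Y.append(expert(states[t]))       # expert labels the TRUE current state
            actions.append(a)
            states.append(step(states[t], a))
    # refit the delayed policy on the aggregated dataset
    theta, *_ = np.linalg.lstsq(np.array(X), np.array(Y), rcond=None)

# quick check: the delayed learner should now drive the state toward zero
states, actions = [1.0], []
for t in range(HORIZON):
    a = float(augment(states[max(t - DELAY, 0)], actions) @ theta)
    actions.append(a)
    states.append(step(states[t], a))
print(f"final |state| after imitation: {abs(states[-1]):.3f}")
```

Because the dataset is aggregated across iterations and always relabeled by the expert, the learner trains on its own state distribution, which is the usual motivation for DAgger-style imitation over plain behavioral cloning.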

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-liotet22a,
  title     = {Delayed Reinforcement Learning by Imitation},
  author    = {Liotet, Pierre and Maran, Davide and Bisi, Lorenzo and Restelli, Marcello},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {13528--13556},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/liotet22a/liotet22a.pdf},
  url       = {https://proceedings.mlr.press/v162/liotet22a.html}
}
Endnote
%0 Conference Paper
%T Delayed Reinforcement Learning by Imitation
%A Pierre Liotet
%A Davide Maran
%A Lorenzo Bisi
%A Marcello Restelli
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-liotet22a
%I PMLR
%P 13528--13556
%U https://proceedings.mlr.press/v162/liotet22a.html
%V 162
APA
Liotet, P., Maran, D., Bisi, L. & Restelli, M. (2022). Delayed Reinforcement Learning by Imitation. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:13528-13556. Available from https://proceedings.mlr.press/v162/liotet22a.html.
