Hyperparameter Selection for Imitation Learning

Léonard Hussenot, Marcin Andrychowicz, Damien Vincent, Robert Dadashi, Anton Raichuk, Sabela Ramos, Nikola Momchev, Sertan Girgin, Raphael Marinier, Lukasz Stafiniak, Manu Orsini, Olivier Bachem, Matthieu Geist, Olivier Pietquin
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:4511-4522, 2021.

Abstract

We address the issue of tuning hyperparameters (HPs) for imitation learning algorithms in the context of continuous control, when the underlying reward function of the demonstrating expert cannot be observed at any time. The vast literature on imitation learning mostly considers this reward function to be available for HP selection, but this is not a realistic setting. Indeed, if this reward function were available, it could be used directly for policy training and imitation would not be necessary. To tackle this mostly ignored problem, we propose a number of possible proxies to the external reward. We evaluate them in an extensive empirical study (more than 10,000 agents across 9 environments) and make practical recommendations for selecting HPs. Our results show that while imitation learning algorithms are sensitive to HP choices, it is often possible to select good enough HPs through a proxy to the reward function.
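The selection recipe the abstract describes (scoring candidate HP configurations with a proxy to the unobservable reward and keeping the best-scoring one) can be illustrated with a minimal sketch. The proxy below (negative action error against held-out expert demonstrations), the toy training routine, and all names and values in it are illustrative assumptions, not the specific proxies or algorithms evaluated in the paper.

# Minimal sketch of proxy-based hyperparameter selection (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Held-out expert demonstrations: (state, action) pairs not used for training.
heldout_states = rng.normal(size=(256, 4))
heldout_actions = np.tanh(heldout_states @ rng.normal(size=(4, 2)))

def train_imitation_agent(learning_rate, hidden_size):
    """Hypothetical stand-in for an imitation learning run; returns a policy."""
    # In practice this would train e.g. a behavioral-cloning or adversarial IL agent.
    w = rng.normal(scale=learning_rate * hidden_size * 1e-2, size=(4, 2))
    return lambda states: np.tanh(states @ w)

def proxy_score(policy):
    """Proxy to the unobservable reward: negative MSE between the agent's
    actions and the expert's actions on held-out demonstration states."""
    pred = policy(heldout_states)
    return -float(np.mean((pred - heldout_actions) ** 2))

# Candidate hyperparameter configurations to compare.
candidates = [
    {"learning_rate": 1e-4, "hidden_size": 64},
    {"learning_rate": 3e-4, "hidden_size": 256},
    {"learning_rate": 1e-3, "hidden_size": 1024},
]

scores = [proxy_score(train_imitation_agent(**hp)) for hp in candidates]
best = candidates[int(np.argmax(scores))]
print("Selected hyperparameters (by proxy):", best)

The key point of the sketch is that no environment reward is ever queried: the ranking of candidates rests entirely on a quantity computable from the demonstrations themselves.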

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-hussenot21a,
  title     = {Hyperparameter Selection for Imitation Learning},
  author    = {Hussenot, L{\'e}onard and Andrychowicz, Marcin and Vincent, Damien and Dadashi, Robert and Raichuk, Anton and Ramos, Sabela and Momchev, Nikola and Girgin, Sertan and Marinier, Raphael and Stafiniak, Lukasz and Orsini, Manu and Bachem, Olivier and Geist, Matthieu and Pietquin, Olivier},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {4511--4522},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/hussenot21a/hussenot21a.pdf},
  url       = {https://proceedings.mlr.press/v139/hussenot21a.html},
  abstract  = {We address the issue of tuning hyperparameters (HPs) for imitation learning algorithms in the context of continuous-control, when the underlying reward function of the demonstrating expert cannot be observed at any time. The vast literature in imitation learning mostly considers this reward function to be available for HP selection, but this is not a realistic setting. Indeed, would this reward function be available, it could then directly be used for policy training and imitation would not be necessary. To tackle this mostly ignored problem, we propose a number of possible proxies to the external reward. We evaluate them in an extensive empirical study (more than 10’000 agents across 9 environments) and make practical recommendations for selecting HPs. Our results show that while imitation learning algorithms are sensitive to HP choices, it is often possible to select good enough HPs through a proxy to the reward function.}
}
Endnote
%0 Conference Paper
%T Hyperparameter Selection for Imitation Learning
%A Léonard Hussenot
%A Marcin Andrychowicz
%A Damien Vincent
%A Robert Dadashi
%A Anton Raichuk
%A Sabela Ramos
%A Nikola Momchev
%A Sertan Girgin
%A Raphael Marinier
%A Lukasz Stafiniak
%A Manu Orsini
%A Olivier Bachem
%A Matthieu Geist
%A Olivier Pietquin
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-hussenot21a
%I PMLR
%P 4511--4522
%U https://proceedings.mlr.press/v139/hussenot21a.html
%V 139
%X We address the issue of tuning hyperparameters (HPs) for imitation learning algorithms in the context of continuous-control, when the underlying reward function of the demonstrating expert cannot be observed at any time. The vast literature in imitation learning mostly considers this reward function to be available for HP selection, but this is not a realistic setting. Indeed, would this reward function be available, it could then directly be used for policy training and imitation would not be necessary. To tackle this mostly ignored problem, we propose a number of possible proxies to the external reward. We evaluate them in an extensive empirical study (more than 10’000 agents across 9 environments) and make practical recommendations for selecting HPs. Our results show that while imitation learning algorithms are sensitive to HP choices, it is often possible to select good enough HPs through a proxy to the reward function.
APA
Hussenot, L., Andrychowicz, M., Vincent, D., Dadashi, R., Raichuk, A., Ramos, S., Momchev, N., Girgin, S., Marinier, R., Stafiniak, L., Orsini, M., Bachem, O., Geist, M. & Pietquin, O. (2021). Hyperparameter Selection for Imitation Learning. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:4511-4522. Available from https://proceedings.mlr.press/v139/hussenot21a.html.
