Neural Distillation as a State Representation Bottleneck in Reinforcement Learning

Valentin Guillet, Dennis George Wilson, Emmanuel Rachelson
Proceedings of The 1st Conference on Lifelong Learning Agents, PMLR 199:798-818, 2022.

Abstract

Learning a good state representation is a critical skill when dealing with multiple tasks in Reinforcement Learning, as it allows for transfer and better generalization between tasks. However, defining what constitutes a useful representation is far from simple, and there is so far no standard method for finding such an encoding. In this paper, we argue that distillation, a process that imitates a set of given policies with a single neural network, can be used to learn a state representation with favorable characteristics. To this end, we define three criteria that measure desirable features of a state encoding: the ability to select important variables in the input space, the ability to efficiently separate states according to their corresponding optimal action, and the robustness of the state encoding on new tasks. We first evaluate these criteria and verify the contribution of distillation to state representation on a toy environment based on the standard inverted pendulum problem, before extending our analysis to more complex visual tasks from the Atari and Procgen benchmarks.
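
To make the distillation process described above concrete, the sketch below shows the generic policy-distillation idea: a single student network, whose shared encoder plays the role of the state-representation bottleneck, is trained to match the action distributions of pretrained teacher policies. This is an illustrative sketch only; the `Student` architecture, layer sizes, `distill_step` helper, and temperature-softened KL objective are assumptions for the example, not the authors' exact setup.

```python
# Generic policy-distillation sketch (illustrative; not the paper's implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Student(nn.Module):
    """Single student network; its shared encoder acts as the
    state-representation bottleneck across tasks."""
    def __init__(self, obs_dim: int, n_actions: int, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, latent_dim), nn.ReLU(),
        )
        self.head = nn.Linear(latent_dim, n_actions)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(obs))  # action logits

def distill_step(student, optimizer, obs, teacher_logits, temperature=1.0):
    # Match the student's action distribution to a teacher's via KL divergence
    # on temperature-softened probabilities (a standard distillation loss).
    log_p_student = F.log_softmax(student(obs) / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    loss = F.kl_div(log_p_student, p_teacher, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random stand-ins for states and recorded teacher logits:
student = Student(obs_dim=4, n_actions=2)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
obs = torch.randn(64, 4)             # batch of states visited by a teacher
teacher_logits = torch.randn(64, 2)  # that teacher's action logits on those states
distill_step(student, opt, obs, teacher_logits)
```

Cycling such steps over data from several teachers (one per task) forces the shared encoder to serve all policies at once, which is what makes it a candidate for the variable-selection, action-separation, and robustness criteria studied in the paper.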

Cite this Paper


BibTeX
@InProceedings{pmlr-v199-guillet22a,
  title     = {Neural Distillation as a State Representation Bottleneck in Reinforcement Learning},
  author    = {Guillet, Valentin and Wilson, Dennis George and Rachelson, Emmanuel},
  booktitle = {Proceedings of The 1st Conference on Lifelong Learning Agents},
  pages     = {798--818},
  year      = {2022},
  editor    = {Chandar, Sarath and Pascanu, Razvan and Precup, Doina},
  volume    = {199},
  series    = {Proceedings of Machine Learning Research},
  month     = {22--24 Aug},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v199/guillet22a/guillet22a.pdf},
  url       = {https://proceedings.mlr.press/v199/guillet22a.html},
  abstract  = {Learning a good state representation is a critical skill when dealing with multiple tasks in Reinforcement Learning, as it allows for transfer and better generalization between tasks. However, defining what constitutes a useful representation is far from simple, and there is so far no standard method for finding such an encoding. In this paper, we argue that distillation, a process that imitates a set of given policies with a single neural network, can be used to learn a state representation with favorable characteristics. To this end, we define three criteria that measure desirable features of a state encoding: the ability to select important variables in the input space, the ability to efficiently separate states according to their corresponding optimal action, and the robustness of the state encoding on new tasks. We first evaluate these criteria and verify the contribution of distillation to state representation on a toy environment based on the standard inverted pendulum problem, before extending our analysis to more complex visual tasks from the Atari and Procgen benchmarks.}
}
Endnote
%0 Conference Paper
%T Neural Distillation as a State Representation Bottleneck in Reinforcement Learning
%A Valentin Guillet
%A Dennis George Wilson
%A Emmanuel Rachelson
%B Proceedings of The 1st Conference on Lifelong Learning Agents
%C Proceedings of Machine Learning Research
%D 2022
%E Sarath Chandar
%E Razvan Pascanu
%E Doina Precup
%F pmlr-v199-guillet22a
%I PMLR
%P 798--818
%U https://proceedings.mlr.press/v199/guillet22a.html
%V 199
%X Learning a good state representation is a critical skill when dealing with multiple tasks in Reinforcement Learning, as it allows for transfer and better generalization between tasks. However, defining what constitutes a useful representation is far from simple, and there is so far no standard method for finding such an encoding. In this paper, we argue that distillation, a process that imitates a set of given policies with a single neural network, can be used to learn a state representation with favorable characteristics. To this end, we define three criteria that measure desirable features of a state encoding: the ability to select important variables in the input space, the ability to efficiently separate states according to their corresponding optimal action, and the robustness of the state encoding on new tasks. We first evaluate these criteria and verify the contribution of distillation to state representation on a toy environment based on the standard inverted pendulum problem, before extending our analysis to more complex visual tasks from the Atari and Procgen benchmarks.
APA
Guillet, V., Wilson, D.G. & Rachelson, E. (2022). Neural Distillation as a State Representation Bottleneck in Reinforcement Learning. Proceedings of The 1st Conference on Lifelong Learning Agents, in Proceedings of Machine Learning Research 199:798-818. Available from https://proceedings.mlr.press/v199/guillet22a.html.
