Zipfian Environments for Reinforcement Learning

Stephanie C.Y. Chan, Andrew Kyle Lampinen, Pierre Harvey Richemond, Felix Hill
Proceedings of The 1st Conference on Lifelong Learning Agents, PMLR 199:406-429, 2022.

Abstract

As humans and animals learn in the natural world, they encounter distributions of entities, situations and events that are far from uniform. Typically, a relatively small set of experiences are encountered frequently, while many important experiences occur only rarely. The highly-skewed, heavy-tailed nature of reality poses particular learning challenges that humans and animals have met by evolving specialised memory systems. By contrast, most popular RL environments and benchmarks involve approximately uniform variation of properties, objects, situations or tasks. How will RL algorithms perform in worlds (like ours) where the distribution of environment features is far less uniform? To explore this question, we develop three complementary RL environments where the agent’s experience varies according to a Zipfian (discrete power law) distribution. These environments will be made available as an open source library. On these benchmarks, we find that standard Deep RL architectures and algorithms acquire useful knowledge of common situations and tasks, but fail to adequately learn about rarer ones. To understand this failure better, we explore how different aspects of current approaches may be adjusted to help improve performance on rare events, and show that the RL objective function, the agent’s memory system and self-supervised learning objectives can all influence an agent’s ability to learn from uncommon experiences. Together, these results show that learning robustly from skewed experience is a critical challenge for applying Deep RL methods beyond simulations or laboratories, and our Zipfian environments provide a basis for measuring future progress towards this goal.
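
As a concrete illustration of the kind of sampling scheme the paper describes (a minimal sketch under assumed parameters, not the authors' released library), a Zipfian distribution over N ranked items assigns probability proportional to 1 / rank^exponent, so a handful of items dominate experience while the long tail appears only rarely. The helper name zipfian_probs and the exponent value below are hypothetical choices for illustration.

    import numpy as np

    def zipfian_probs(n_items: int, exponent: float = 1.0) -> np.ndarray:
        # Zipfian (discrete power law) distribution over ranks 1..n_items:
        # p(rank k) is proportional to 1 / k**exponent.
        ranks = np.arange(1, n_items + 1)
        weights = 1.0 / ranks ** exponent
        return weights / weights.sum()

    # Example: sample which object or task each episode features, so that a few
    # items are seen frequently while most are encountered only rarely.
    rng = np.random.default_rng(0)
    probs = zipfian_probs(n_items=20, exponent=2.0)
    episode_items = rng.choice(20, size=1000, p=probs)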

Cite this Paper


BibTeX
@InProceedings{pmlr-v199-chan22a,
  title     = {Zipfian Environments for Reinforcement Learning},
  author    = {Chan, Stephanie C.Y. and Lampinen, Andrew Kyle and Richemond, Pierre Harvey and Hill, Felix},
  booktitle = {Proceedings of The 1st Conference on Lifelong Learning Agents},
  pages     = {406--429},
  year      = {2022},
  editor    = {Chandar, Sarath and Pascanu, Razvan and Precup, Doina},
  volume    = {199},
  series    = {Proceedings of Machine Learning Research},
  month     = {22--24 Aug},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v199/chan22a/chan22a.pdf},
  url       = {https://proceedings.mlr.press/v199/chan22a.html},
  abstract  = {As humans and animals learn in the natural world, they encounter distributions of entities, situations and events that are far from uniform. Typically, a relatively small set of experiences are encountered frequently, while many important experiences occur only rarely. The highly-skewed, heavy-tailed nature of reality poses particular learning challenges that humans and animals have met by evolving specialised memory systems. By contrast, most popular RL environments and benchmarks involve approximately uniform variation of properties, objects, situations or tasks. How will RL algorithms perform in worlds (like ours) where the distribution of environment features is far less uniform? To explore this question, we develop three complementary RL environments where the agent’s experience varies according to a Zipfian (discrete power law) distribution. These environments will be made available as an open source library. On these benchmarks, we find that standard Deep RL architectures and algorithms acquire useful knowledge of common situations and tasks, but fail to adequately learn about rarer ones. To understand this failure better, we explore how different aspects of current approaches may be adjusted to help improve performance on rare events, and show that the RL objective function, the agent’s memory system and self-supervised learning objectives can all influence an agent’s ability to learn from uncommon experiences. Together, these results show that learning robustly from skewed experience is a critical challenge for applying Deep RL methods beyond simulations or laboratories, and our Zipfian environments provide a basis for measuring future progress towards this goal.}
}
Endnote
%0 Conference Paper
%T Zipfian Environments for Reinforcement Learning
%A Stephanie C.Y. Chan
%A Andrew Kyle Lampinen
%A Pierre Harvey Richemond
%A Felix Hill
%B Proceedings of The 1st Conference on Lifelong Learning Agents
%C Proceedings of Machine Learning Research
%D 2022
%E Sarath Chandar
%E Razvan Pascanu
%E Doina Precup
%F pmlr-v199-chan22a
%I PMLR
%P 406--429
%U https://proceedings.mlr.press/v199/chan22a.html
%V 199
%X As humans and animals learn in the natural world, they encounter distributions of entities, situations and events that are far from uniform. Typically, a relatively small set of experiences are encountered frequently, while many important experiences occur only rarely. The highly-skewed, heavy-tailed nature of reality poses particular learning challenges that humans and animals have met by evolving specialised memory systems. By contrast, most popular RL environments and benchmarks involve approximately uniform variation of properties, objects, situations or tasks. How will RL algorithms perform in worlds (like ours) where the distribution of environment features is far less uniform? To explore this question, we develop three complementary RL environments where the agent’s experience varies according to a Zipfian (discrete power law) distribution. These environments will be made available as an open source library. On these benchmarks, we find that standard Deep RL architectures and algorithms acquire useful knowledge of common situations and tasks, but fail to adequately learn about rarer ones. To understand this failure better, we explore how different aspects of current approaches may be adjusted to help improve performance on rare events, and show that the RL objective function, the agent’s memory system and self-supervised learning objectives can all influence an agent’s ability to learn from uncommon experiences. Together, these results show that learning robustly from skewed experience is a critical challenge for applying Deep RL methods beyond simulations or laboratories, and our Zipfian environments provide a basis for measuring future progress towards this goal.
APA
Chan, S.C., Lampinen, A.K., Richemond, P.H. & Hill, F. (2022). Zipfian Environments for Reinforcement Learning. Proceedings of The 1st Conference on Lifelong Learning Agents, in Proceedings of Machine Learning Research 199:406-429. Available from https://proceedings.mlr.press/v199/chan22a.html.
