The Dormant Neuron Phenomenon in Deep Reinforcement Learning

Ghada Sokar, Rishabh Agarwal, Pablo Samuel Castro, Utku Evci
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:32145-32168, 2023.

Abstract

In this work we identify the dormant neuron phenomenon in deep reinforcement learning, where an agent’s network suffers from an increasing number of inactive neurons, thereby affecting network expressivity. We demonstrate the presence of this phenomenon across a variety of algorithms and environments, and highlight its effect on learning. To address this issue, we propose a simple and effective method (ReDo) that Recycles Dormant neurons throughout training. Our experiments demonstrate that ReDo maintains the expressive power of networks by reducing the number of dormant neurons and results in improved performance.
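The abstract describes detecting inactive ("dormant") neurons and recycling them during training. As a minimal illustrative sketch (not the paper's exact implementation): one way to operationalize this is to score each neuron by its mean absolute activation normalized by the layer average, flag neurons whose score falls below a threshold `tau`, reinitialize their incoming weights, and zero their outgoing weights so the recycling step does not perturb the network's current outputs. The function names and the threshold value below are illustrative assumptions.

```python
import numpy as np

def dormant_mask(activations, tau=0.025):
    """Flag low-activity neurons.

    `activations` has shape (batch, num_neurons); a neuron is treated as
    dormant when its mean absolute activation, normalized by the layer's
    average activity, is at most `tau` (threshold chosen for illustration).
    """
    score = np.abs(activations).mean(axis=0)   # per-neuron activity
    score = score / (score.mean() + 1e-8)      # normalize by layer mean
    return score <= tau

def redo_recycle(w_in, w_out, activations, rng, tau=0.025):
    """Recycle dormant neurons (illustrative sketch, not the paper's code).

    Dormant units get fresh incoming weights; their outgoing weights are
    zeroed so recycling leaves the network's function unchanged at first.
    """
    mask = dormant_mask(activations, tau)
    w_in, w_out = w_in.copy(), w_out.copy()
    # Reinitialize incoming weights of dormant units (simple scaled-normal init).
    w_in[:, mask] = rng.normal(scale=1.0 / np.sqrt(w_in.shape[0]),
                               size=(w_in.shape[0], int(mask.sum())))
    # Zero outgoing weights of dormant units.
    w_out[mask, :] = 0.0
    return w_in, w_out, mask
```

In an agent's training loop, such a step would run periodically (e.g. every few thousand gradient updates) on a batch of recent activations, keeping the count of dormant neurons low throughout training.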

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-sokar23a,
  title     = {The Dormant Neuron Phenomenon in Deep Reinforcement Learning},
  author    = {Sokar, Ghada and Agarwal, Rishabh and Castro, Pablo Samuel and Evci, Utku},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {32145--32168},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/sokar23a/sokar23a.pdf},
  url       = {https://proceedings.mlr.press/v202/sokar23a.html},
  abstract  = {In this work we identify the dormant neuron phenomenon in deep reinforcement learning, where an agent’s network suffers from an increasing number of inactive neurons, thereby affecting network expressivity. We demonstrate the presence of this phenomenon across a variety of algorithms and environments, and highlight its effect on learning. To address this issue, we propose a simple and effective method (ReDo) that Recycles Dormant neurons throughout training. Our experiments demonstrate that ReDo maintains the expressive power of networks by reducing the number of dormant neurons and results in improved performance.}
}
Endnote
%0 Conference Paper
%T The Dormant Neuron Phenomenon in Deep Reinforcement Learning
%A Ghada Sokar
%A Rishabh Agarwal
%A Pablo Samuel Castro
%A Utku Evci
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-sokar23a
%I PMLR
%P 32145--32168
%U https://proceedings.mlr.press/v202/sokar23a.html
%V 202
%X In this work we identify the dormant neuron phenomenon in deep reinforcement learning, where an agent’s network suffers from an increasing number of inactive neurons, thereby affecting network expressivity. We demonstrate the presence of this phenomenon across a variety of algorithms and environments, and highlight its effect on learning. To address this issue, we propose a simple and effective method (ReDo) that Recycles Dormant neurons throughout training. Our experiments demonstrate that ReDo maintains the expressive power of networks by reducing the number of dormant neurons and results in improved performance.
APA
Sokar, G., Agarwal, R., Castro, P. S., & Evci, U. (2023). The Dormant Neuron Phenomenon in Deep Reinforcement Learning. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:32145-32168. Available from https://proceedings.mlr.press/v202/sokar23a.html.