Slow and Steady Wins the Race: Maintaining Plasticity with Hare and Tortoise Networks

Hojoon Lee, Hyeonseo Cho, Hyunseung Kim, Donghu Kim, Dugki Min, Jaegul Choo, Clare Lyle
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:26416-26438, 2024.

Abstract

This study investigates the loss of generalization ability in neural networks, revisiting the warm-starting experiments of Ash & Adams (2020). Our empirical analysis reveals that common methods designed to enhance plasticity by maintaining trainability provide limited benefits to generalization. While reinitializing the network can be effective, it also risks losing valuable prior knowledge. To address this, we introduce Hare & Tortoise, inspired by the brain’s complementary learning systems. Hare & Tortoise consists of two components: the Hare network, which rapidly adapts to new information, analogous to the hippocampus, and the Tortoise network, which gradually integrates knowledge, akin to the neocortex. By periodically reinitializing the Hare network to the Tortoise’s weights, our method preserves plasticity while retaining general knowledge. Hare & Tortoise effectively maintains the network’s ability to generalize, improving advanced reinforcement learning algorithms on the Atari-100k benchmark. The code is available at https://github.com/dojeon-ai/hare-tortoise.
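The abstract describes a simple two-timescale scheme: the Hare network learns by ordinary gradient descent, the Tortoise network slowly integrates the Hare’s knowledge, and the Hare is periodically reset to the Tortoise’s weights. Below is a minimal PyTorch-style sketch of that loop, assuming the Tortoise’s “gradual integration” is an exponential moving average (EMA) of the Hare’s parameters; the momentum tau, the reset interval reset_every, and all function and variable names are illustrative assumptions rather than the paper’s exact recipe (see the linked repository for the authors’ implementation).

import copy
from itertools import cycle

import torch

def train_hare_tortoise(hare, loader, loss_fn, steps,
                        lr=1e-3, tau=0.999, reset_every=10_000):
    # Sketch only: tau and reset_every are assumed values, not the
    # paper's exact hyperparameters.
    # The Tortoise starts as a copy of the Hare; it is never updated by
    # gradients, only by a slow exponential moving average of the Hare.
    tortoise = copy.deepcopy(hare)
    for p in tortoise.parameters():
        p.requires_grad_(False)

    opt = torch.optim.Adam(hare.parameters(), lr=lr)
    batches = cycle(loader)
    for step in range(1, steps + 1):
        x, y = next(batches)

        # Hare: fast adaptation to new information via gradient descent.
        opt.zero_grad()
        loss_fn(hare(x), y).backward()
        opt.step()

        # Tortoise: gradual knowledge integration via an EMA of the Hare.
        with torch.no_grad():
            for p_t, p_h in zip(tortoise.parameters(), hare.parameters()):
                p_t.mul_(tau).add_(p_h, alpha=1.0 - tau)

        # Periodic reset: reinitialize the Hare to the Tortoise's weights,
        # restoring plasticity while retaining accumulated knowledge.
        if step % reset_every == 0:
            hare.load_state_dict(tortoise.state_dict())

    return tortoise

Under this reading, the slowly moving Tortoise is the natural network to evaluate: it retains accumulated general knowledge, while the periodically reset Hare stays plastic enough to keep adapting to new data.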

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-lee24d,
  title     = {Slow and Steady Wins the Race: Maintaining Plasticity with Hare and Tortoise Networks},
  author    = {Lee, Hojoon and Cho, Hyeonseo and Kim, Hyunseung and Kim, Donghu and Min, Dugki and Choo, Jaegul and Lyle, Clare},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {26416--26438},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/lee24d/lee24d.pdf},
  url       = {https://proceedings.mlr.press/v235/lee24d.html}
}
Endnote
%0 Conference Paper
%T Slow and Steady Wins the Race: Maintaining Plasticity with Hare and Tortoise Networks
%A Hojoon Lee
%A Hyeonseo Cho
%A Hyunseung Kim
%A Donghu Kim
%A Dugki Min
%A Jaegul Choo
%A Clare Lyle
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-lee24d
%I PMLR
%P 26416--26438
%U https://proceedings.mlr.press/v235/lee24d.html
%V 235
APA
Lee, H., Cho, H., Kim, H., Kim, D., Min, D., Choo, J., & Lyle, C. (2024). Slow and Steady Wins the Race: Maintaining Plasticity with Hare and Tortoise Networks. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:26416-26438. Available from https://proceedings.mlr.press/v235/lee24d.html.
