Wide Neural Networks Forget Less Catastrophically

Seyed Iman Mirzadeh, Arslan Chaudhry, Dong Yin, Huiyi Hu, Razvan Pascanu, Dilan Gorur, Mehrdad Farajtabar
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:15699-15717, 2022.

Abstract

A primary focus of continual learning research is alleviating the "catastrophic forgetting" problem in neural networks by designing new algorithms that are more robust to distribution shifts. While recent progress in the continual learning literature is encouraging, our understanding of which properties of neural networks contribute to catastrophic forgetting is still limited. To address this, instead of focusing on continual learning algorithms, in this work we focus on the model itself and study the impact of the "width" of the neural network architecture on catastrophic forgetting, showing that width has a surprisingly significant effect on forgetting. To explain this effect, we study the learning dynamics of the network from various perspectives, such as gradient orthogonality, sparsity, and the lazy training regime. We provide potential explanations that are consistent with the empirical results across different architectures and continual learning benchmarks.
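For readers unfamiliar with how forgetting is quantified in this line of work, the sketch below illustrates the standard average-forgetting metric: for each previously seen task, forgetting is the drop from the best accuracy the model ever reached on that task to its accuracy after training on the final task. This is a minimal illustration, not code from the paper, and the narrow/wide accuracy numbers are purely hypothetical.

import numpy as np

def average_forgetting(acc_matrix: np.ndarray) -> float:
    """Average forgetting over previously seen tasks.

    acc_matrix[i, j] is the accuracy on task j measured right after
    training on task i (rows: training stages, columns: tasks).
    Forgetting of task j is the gap between the best accuracy it ever
    reached and its accuracy after the final task is learned.
    """
    num_tasks = acc_matrix.shape[0]
    final = acc_matrix[-1, :num_tasks - 1]              # accuracy on earlier tasks at the end
    best = acc_matrix[:-1, :num_tasks - 1].max(axis=0)  # best accuracy each earlier task achieved
    return float((best - final).mean())

# Hypothetical accuracy matrices for a narrow and a wide network on 3 tasks
# (zeros mark tasks not yet seen at that training stage).
narrow = np.array([[0.95, 0.00, 0.00],
                   [0.70, 0.93, 0.00],
                   [0.55, 0.75, 0.92]])
wide   = np.array([[0.96, 0.00, 0.00],
                   [0.90, 0.94, 0.00],
                   [0.87, 0.91, 0.93]])

print(average_forgetting(narrow))  # 0.29 -> more forgetting
print(average_forgetting(wide))    # 0.06 -> less forgetting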

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-mirzadeh22a,
  title     = {Wide Neural Networks Forget Less Catastrophically},
  author    = {Mirzadeh, Seyed Iman and Chaudhry, Arslan and Yin, Dong and Hu, Huiyi and Pascanu, Razvan and Gorur, Dilan and Farajtabar, Mehrdad},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {15699--15717},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/mirzadeh22a/mirzadeh22a.pdf},
  url       = {https://proceedings.mlr.press/v162/mirzadeh22a.html},
  abstract  = {A primary focus area in continual learning research is alleviating the "catastrophic forgetting" problem in neural networks by designing new algorithms that are more robust to the distribution shifts. While the recent progress in continual learning literature is encouraging, our understanding of what properties of neural networks contribute to catastrophic forgetting is still limited. To address this, instead of focusing on continual learning algorithms, in this work, we focus on the model itself and study the impact of "width" of the neural network architecture on catastrophic forgetting, and show that width has a surprisingly significant effect on forgetting. To explain this effect, we study the learning dynamics of the network from various perspectives such as gradient orthogonality, sparsity, and lazy training regime. We provide potential explanations that are consistent with the empirical results across different architectures and continual learning benchmarks.}
}
Endnote
%0 Conference Paper
%T Wide Neural Networks Forget Less Catastrophically
%A Seyed Iman Mirzadeh
%A Arslan Chaudhry
%A Dong Yin
%A Huiyi Hu
%A Razvan Pascanu
%A Dilan Gorur
%A Mehrdad Farajtabar
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-mirzadeh22a
%I PMLR
%P 15699--15717
%U https://proceedings.mlr.press/v162/mirzadeh22a.html
%V 162
%X A primary focus area in continual learning research is alleviating the "catastrophic forgetting" problem in neural networks by designing new algorithms that are more robust to the distribution shifts. While the recent progress in continual learning literature is encouraging, our understanding of what properties of neural networks contribute to catastrophic forgetting is still limited. To address this, instead of focusing on continual learning algorithms, in this work, we focus on the model itself and study the impact of "width" of the neural network architecture on catastrophic forgetting, and show that width has a surprisingly significant effect on forgetting. To explain this effect, we study the learning dynamics of the network from various perspectives such as gradient orthogonality, sparsity, and lazy training regime. We provide potential explanations that are consistent with the empirical results across different architectures and continual learning benchmarks.
APA
Mirzadeh, S.I., Chaudhry, A., Yin, D., Hu, H., Pascanu, R., Gorur, D. & Farajtabar, M. (2022). Wide Neural Networks Forget Less Catastrophically. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:15699-15717. Available from https://proceedings.mlr.press/v162/mirzadeh22a.html.