Can Neural Network Memorization Be Localized?

Pratyush Maini, Michael Curtis Mozer, Hanie Sedghi, Zachary Chase Lipton, J Zico Kolter, Chiyuan Zhang
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:23536-23557, 2023.

Abstract

Recent efforts at explaining the interplay of memorization and generalization in deep overparametrized networks have posited that neural networks memorize “hard” examples in the final few layers of the model. Memorization here refers to the ability to correctly predict atypical examples of the training set. In this work, we show that memorization is confined not to individual layers but to a small set of neurons spread across various layers of the model. First, via three experimental sources of converging evidence, we find that most layers are redundant for the memorization of examples, and that the layers which contribute to example memorization are, in general, not the final layers. The three sources are gradient accounting (measuring the contribution of memorized and clean examples to the gradient norms), layer rewinding (replacing specific weights of a converged model with those from earlier training checkpoints), and retraining (training the rewound layers on clean examples only). Second, we ask a more general question: can memorization be localized anywhere in a model? We discover that memorization is often confined to a small number of neurons or channels (around 5) of the model. Based on these insights, we propose a new form of dropout, example-tied dropout, that lets us direct the memorization of examples to an a priori determined set of neurons. By dropping out these neurons at test time, we reduce accuracy on the memorized examples from 100% to 3%, while also reducing the generalization gap.
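To make the layer-rewinding probe concrete, here is a minimal sketch in PyTorch of one way such an experiment could be run. The function names (rewind_layer, accuracy) and the checkpoint handling are illustrative assumptions, not the authors' released code; the paper's exact protocol may differ in detail.

import copy
import torch

def rewind_layer(model, layer_name, early_state):
    # Return a copy of `model` with one layer's parameters replaced by
    # their values from an earlier training checkpoint (`early_state`
    # is an earlier state_dict). All other layers keep converged weights.
    rewound = copy.deepcopy(model)
    state = rewound.state_dict()
    for key, value in early_state.items():
        if key.startswith(layer_name):
            state[key] = value
    rewound.load_state_dict(state)
    return rewound

@torch.no_grad()
def accuracy(model, loader, device="cpu"):
    # Plain top-1 accuracy over a data loader.
    model.eval()
    correct = total = 0
    for x, y in loader:
        pred = model(x.to(device)).argmax(dim=1)
        correct += (pred == y.to(device)).sum().item()
        total += y.numel()
    return correct / total

Rewinding each layer in turn and comparing the accuracy drop on memorized (e.g. noisy-label) versus clean training examples indicates which layers the memorized predictions actually depend on.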
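Likewise, a minimal sketch of the example-tied dropout idea, assuming a PyTorch model with a fully connected hidden layer: a shared block of “generalization” units is always active, while each training example additionally activates its own small, fixed set of “memorization” units, which are masked out at evaluation time. The class name, the contiguous generalization/memorization split, and the once-drawn per-example assignment are illustrative assumptions, not the authors' exact implementation.

import torch
import torch.nn as nn

class ExampleTiedDropout(nn.Module):
    def __init__(self, n_units, n_gen, n_mem_per_example, n_examples, seed=0):
        super().__init__()
        g = torch.Generator().manual_seed(seed)
        # Fixed, per-example assignment of memorization units, drawn once
        # up front so the mapping from example to units never changes.
        mem_ids = torch.stack([
            torch.randperm(n_units - n_gen, generator=g)[:n_mem_per_example]
            for _ in range(n_examples)
        ]) + n_gen
        self.register_buffer("mem_ids", mem_ids)
        self.n_gen = n_gen

    def forward(self, x, example_ids=None):
        # x: (batch, n_units); example_ids: (batch,) training-set indices.
        mask = torch.zeros_like(x)
        mask[:, :self.n_gen] = 1.0  # generalization units: always active
        if self.training and example_ids is not None:
            # Training: switch on each example's own memorization units.
            rows = torch.arange(x.size(0)).unsqueeze(1)
            mask[rows, self.mem_ids[example_ids]] = 1.0
        # Eval: memorization units stay masked, so predictions rely on
        # the shared generalization units alone.
        return x * mask

Because the memorization units are known a priori, dropping them at test time is just a matter of calling model.eval(), which is what drives the reported drop in memorized-example accuracy.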

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-maini23a,
  title     = {Can Neural Network Memorization Be Localized?},
  author    = {Maini, Pratyush and Mozer, Michael Curtis and Sedghi, Hanie and Lipton, Zachary Chase and Kolter, J Zico and Zhang, Chiyuan},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {23536--23557},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/maini23a/maini23a.pdf},
  url       = {https://proceedings.mlr.press/v202/maini23a.html}
}
Endnote
%0 Conference Paper
%T Can Neural Network Memorization Be Localized?
%A Pratyush Maini
%A Michael Curtis Mozer
%A Hanie Sedghi
%A Zachary Chase Lipton
%A J Zico Kolter
%A Chiyuan Zhang
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-maini23a
%I PMLR
%P 23536--23557
%U https://proceedings.mlr.press/v202/maini23a.html
%V 202
APA
Maini, P., Mozer, M.C., Sedghi, H., Lipton, Z.C., Kolter, J.Z. & Zhang, C. (2023). Can Neural Network Memorization Be Localized?. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:23536-23557. Available from https://proceedings.mlr.press/v202/maini23a.html.
