More Than a Toy: Random Matrix Models Predict How Real-World Neural Representations Generalize

Alexander Wei, Wei Hu, Jacob Steinhardt
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:23549-23588, 2022.

Abstract

Of theories for why large-scale machine learning models generalize despite being vastly overparameterized, which of their assumptions are needed to capture the qualitative phenomena of generalization in the real world? On one hand, we find that most theoretical analyses fall short of capturing these qualitative phenomena even for kernel regression, when applied to kernels derived from large-scale neural networks (e.g., ResNet-50) and real data (e.g., CIFAR-100). On the other hand, we find that the classical GCV estimator (Craven and Wahba, 1978) accurately predicts generalization risk even in such overparameterized settings. To bolster this empirical finding, we prove that the GCV estimator converges to the generalization risk whenever a local random matrix law holds. Finally, we apply this random matrix theory lens to explain why pretrained representations generalize better as well as what factors govern scaling laws for kernel regression. Our findings suggest that random matrix theory, rather than just being a toy model, may be central to understanding the properties of neural representations in practice.
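
For readers who want a concrete handle on the estimator mentioned above, here is a minimal sketch (not the authors' code) of the classical GCV estimate applied to kernel ridge regression. The function name, the use of NumPy, and the regularization convention S = K(K + n*lam*I)^{-1} are illustrative assumptions, not details taken from the paper.

import numpy as np

def gcv_risk_estimate(K, y, lam):
    # Generalized cross-validation (Craven and Wahba, 1978) for kernel ridge
    # regression. K is the (n, n) Gram matrix on the training inputs, y is the
    # (n,) vector of training targets, and lam is the ridge regularization strength.
    n = K.shape[0]
    # Smoother ("hat") matrix mapping targets to fitted values: S = K (K + n*lam*I)^{-1}.
    S = K @ np.linalg.solve(K + n * lam * np.eye(n), np.eye(n))
    residual = y - S @ y
    # GCV(lam) = (1/n) ||y - S y||^2 / ((1/n) tr(I - S))^2
    return (residual @ residual / n) / (np.trace(np.eye(n) - S) / n) ** 2

Sweeping lam over a grid and comparing this estimate against error on held-out data is a small-scale version of the kind of agreement the paper reports for kernels derived from large pretrained networks.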

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-wei22a,
  title     = {More Than a Toy: Random Matrix Models Predict How Real-World Neural Representations Generalize},
  author    = {Wei, Alexander and Hu, Wei and Steinhardt, Jacob},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {23549--23588},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/wei22a/wei22a.pdf},
  url       = {https://proceedings.mlr.press/v162/wei22a.html}
}
Endnote
%0 Conference Paper
%T More Than a Toy: Random Matrix Models Predict How Real-World Neural Representations Generalize
%A Alexander Wei
%A Wei Hu
%A Jacob Steinhardt
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-wei22a
%I PMLR
%P 23549--23588
%U https://proceedings.mlr.press/v162/wei22a.html
%V 162
APA
Wei, A., Hu, W. & Steinhardt, J. (2022). More Than a Toy: Random Matrix Models Predict How Real-World Neural Representations Generalize. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:23549-23588. Available from https://proceedings.mlr.press/v162/wei22a.html.
