Sparsity-aware generalization theory for deep neural networks

Ramchandran Muthukumar, Jeremias Sulam
Proceedings of Thirty Sixth Conference on Learning Theory, PMLR 195:5311-5342, 2023.

Abstract

Deep artificial neural networks achieve surprising generalization abilities that remain poorly understood. In this paper, we present a new approach to analyzing generalization for deep feed-forward ReLU networks that takes advantage of the degree of sparsity that is achieved in the hidden layer activations. By developing a framework that accounts for this reduced effective model size for each input sample, we are able to show fundamental trade-offs between sparsity and generalization. Importantly, our results make no strong assumptions about the degree of sparsity achieved by the model, and they improve over recent norm-based approaches. We illustrate our results numerically, demonstrating non-vacuous bounds when coupled with data-dependent priors even in over-parametrized settings.
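
The quantity the abstract builds on is the per-sample sparsity of the hidden-layer activations: for a given input, how many ReLU units output exactly zero. As a purely illustrative aid (not code from the paper), the following minimal NumPy sketch computes that fraction for each hidden layer of a hypothetical random fully-connected network standing in for a trained model; the layer sizes, random weights, and the function name hidden_sparsity are assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained model: a small fully-connected
# feed-forward ReLU network with random Gaussian weights.
layer_sizes = [50, 100, 100, 10]          # input, two hidden layers, output
weights = [rng.standard_normal((m, n)) / np.sqrt(n)
           for n, m in zip(layer_sizes[:-1], layer_sizes[1:])]

def hidden_sparsity(x, weights):
    """For one input x, return the fraction of ReLU units in each hidden
    layer that are inactive (output exactly zero)."""
    fractions = []
    h = x
    for W in weights[:-1]:                # all layers except the linear output layer
        h = np.maximum(W @ h, 0.0)        # ReLU activation
        fractions.append(float(np.mean(h == 0.0)))
    return fractions

x = rng.standard_normal(layer_sizes[0])
print(hidden_sparsity(x, weights))        # with random weights, roughly half the units are inactive
```

In the abstract's framing, these per-sample fractions determine a reduced effective model size for each input, which is what the resulting generalization bounds depend on rather than the full parameter count.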

Cite this Paper


BibTeX
@InProceedings{pmlr-v195-muthukumar23a,
  title     = {Sparsity-aware generalization theory for deep neural networks},
  author    = {Muthukumar, Ramchandran and Sulam, Jeremias},
  booktitle = {Proceedings of Thirty Sixth Conference on Learning Theory},
  pages     = {5311--5342},
  year      = {2023},
  editor    = {Neu, Gergely and Rosasco, Lorenzo},
  volume    = {195},
  series    = {Proceedings of Machine Learning Research},
  month     = {12--15 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v195/muthukumar23a/muthukumar23a.pdf},
  url       = {https://proceedings.mlr.press/v195/muthukumar23a.html},
  abstract  = {Deep artificial neural networks achieve surprising generalization abilities that remain poorly understood. In this paper, we present a new approach to analyzing generalization for deep feed-forward ReLU networks that takes advantage of the degree of sparsity that is achieved in the hidden layer activations. By developing a framework that accounts for this reduced effective model size for each input sample, we are able to show fundamental trade-offs between sparsity and generalization. Importantly, our results make no strong assumptions about the degree of sparsity achieved by the model, and they improve over recent norm-based approaches. We illustrate our results numerically, demonstrating non-vacuous bounds when coupled with data-dependent priors even in over-parametrized settings.}
}
Endnote
%0 Conference Paper
%T Sparsity-aware generalization theory for deep neural networks
%A Ramchandran Muthukumar
%A Jeremias Sulam
%B Proceedings of Thirty Sixth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2023
%E Gergely Neu
%E Lorenzo Rosasco
%F pmlr-v195-muthukumar23a
%I PMLR
%P 5311--5342
%U https://proceedings.mlr.press/v195/muthukumar23a.html
%V 195
%X Deep artificial neural networks achieve surprising generalization abilities that remain poorly understood. In this paper, we present a new approach to analyzing generalization for deep feed-forward ReLU networks that takes advantage of the degree of sparsity that is achieved in the hidden layer activations. By developing a framework that accounts for this reduced effective model size for each input sample, we are able to show fundamental trade-offs between sparsity and generalization. Importantly, our results make no strong assumptions about the degree of sparsity achieved by the model, and they improve over recent norm-based approaches. We illustrate our results numerically, demonstrating non-vacuous bounds when coupled with data-dependent priors even in over-parametrized settings.
APA
Muthukumar, R. & Sulam, J. (2023). Sparsity-aware generalization theory for deep neural networks. Proceedings of Thirty Sixth Conference on Learning Theory, in Proceedings of Machine Learning Research 195:5311-5342. Available from https://proceedings.mlr.press/v195/muthukumar23a.html.
