Non-Vacuous Generalization Bounds for Large Language Models

Sanae Lotfi, Marc Anton Finzi, Yilun Kuang, Tim G. J. Rudner, Micah Goldblum, Andrew Gordon Wilson
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:32801-32818, 2024.

Abstract

Modern language models can contain billions of parameters, raising the question of whether they can generalize beyond the training data or simply parrot their training corpora. We provide the first non-vacuous generalization bounds for pretrained large language models (LLMs), indicating that language models are capable of discovering regularities that generalize to unseen data. In particular, we derive a compression bound that is valid for the unbounded log-likelihood loss using prediction smoothing, and we extend the bound to handle subsampling, making bound computation 900 times faster on massive datasets. To achieve the extreme level of compression required for non-vacuous bounds, we devise SubLoRA, a simple low-dimensional nonlinear parameterization that leads to non-vacuous generalization bounds for very large models with up to 849 million parameters. Finally, we use our bounds to understand LLM generalization and find that larger models have better generalization bounds and are more compressible than smaller models.
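The mechanism that makes the bound valid for the unbounded log-likelihood loss is prediction smoothing: the model's next-token distribution is mixed with a uniform distribution over the vocabulary, which caps the per-token loss and yields the bounded range a compression bound requires. Below is a minimal PyTorch sketch of this idea; the function name, the mixing weight alpha, and the tensor shapes are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn.functional as F

    def smoothed_nll(logits, targets, alpha=0.1):
        """Per-token negative log-likelihood under a prediction-smoothed model.

        The model's next-token distribution is mixed with a uniform
        distribution over the vocabulary:
            p_smooth = (1 - alpha) * p_model + alpha / V.
        Each per-token loss then lies in (0, log(V / alpha)], a bounded
        interval, which is what allows a compression-style generalization
        bound to be applied to the log-likelihood. (alpha and the shapes
        here are assumptions for illustration.)
        """
        V = logits.size(-1)                    # vocabulary size
        p_model = F.softmax(logits, dim=-1)    # model's predictive distribution
        p_smooth = (1 - alpha) * p_model + alpha / V
        nll = -torch.log(p_smooth.gather(-1, targets.unsqueeze(-1)).squeeze(-1))
        return nll                             # bounded by log(V / alpha)

The trade-off is that a larger alpha tightens the worst-case loss range (and hence the bound's complexity term) while pulling the smoothed model away from the original predictions, so alpha is a quantity one would tune when evaluating the bound.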

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-lotfi24a,
  title     = {Non-Vacuous Generalization Bounds for Large Language Models},
  author    = {Lotfi, Sanae and Finzi, Marc Anton and Kuang, Yilun and Rudner, Tim G. J. and Goldblum, Micah and Wilson, Andrew Gordon},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {32801--32818},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/lotfi24a/lotfi24a.pdf},
  url       = {https://proceedings.mlr.press/v235/lotfi24a.html}
}
Endnote
%0 Conference Paper
%T Non-Vacuous Generalization Bounds for Large Language Models
%A Sanae Lotfi
%A Marc Anton Finzi
%A Yilun Kuang
%A Tim G. J. Rudner
%A Micah Goldblum
%A Andrew Gordon Wilson
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-lotfi24a
%I PMLR
%P 32801--32818
%U https://proceedings.mlr.press/v235/lotfi24a.html
%V 235
APA
Lotfi, S., Finzi, M.A., Kuang, Y., Rudner, T.G.J., Goldblum, M. & Wilson, A.G. (2024). Non-Vacuous Generalization Bounds for Large Language Models. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:32801-32818. Available from https://proceedings.mlr.press/v235/lotfi24a.html.
