Position: Enough of Scaling LLMs! Lets Focus on Downscaling

Yash Goel, Ayan Sengupta, Tanmoy Chakraborty
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:81375-81392, 2025.

Abstract

We challenge the dominant focus on neural scaling laws and advocate for a paradigm shift toward downscaling in the development of large language models (LLMs). While scaling laws have provided critical insights into performance improvements through increasing model and dataset size, we emphasize the significant limitations of this approach, particularly in terms of computational inefficiency, environmental impact, and deployment constraints. To address these challenges, we propose a holistic framework for downscaling LLMs that seeks to maintain performance while drastically reducing resource demands. This paper outlines practical strategies for transitioning away from traditional scaling paradigms, advocating for a more sustainable, efficient, and accessible approach to LLM development.

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-goel25c,
  title     = {Position: Enough of Scaling {LLM}s! {L}ets Focus on Downscaling},
  author    = {Goel, Yash and Sengupta, Ayan and Chakraborty, Tanmoy},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {81375--81392},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/goel25c/goel25c.pdf},
  url       = {https://proceedings.mlr.press/v267/goel25c.html},
  abstract  = {We challenge the dominant focus on neural scaling laws and advocate for a paradigm shift toward downscaling in the development of large language models (LLMs). While scaling laws have provided critical insights into performance improvements through increasing model and dataset size, we emphasize the significant limitations of this approach, particularly in terms of computational inefficiency, environmental impact, and deployment constraints. To address these challenges, we propose a holistic framework for downscaling LLMs that seeks to maintain performance while drastically reducing resource demands. This paper outlines practical strategies for transitioning away from traditional scaling paradigms, advocating for a more sustainable, efficient, and accessible approach to LLM development.}
}
Endnote
%0 Conference Paper
%T Position: Enough of Scaling LLMs! Lets Focus on Downscaling
%A Yash Goel
%A Ayan Sengupta
%A Tanmoy Chakraborty
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-goel25c
%I PMLR
%P 81375--81392
%U https://proceedings.mlr.press/v267/goel25c.html
%V 267
%X We challenge the dominant focus on neural scaling laws and advocate for a paradigm shift toward downscaling in the development of large language models (LLMs). While scaling laws have provided critical insights into performance improvements through increasing model and dataset size, we emphasize the significant limitations of this approach, particularly in terms of computational inefficiency, environmental impact, and deployment constraints. To address these challenges, we propose a holistic framework for downscaling LLMs that seeks to maintain performance while drastically reducing resource demands. This paper outlines practical strategies for transitioning away from traditional scaling paradigms, advocating for a more sustainable, efficient, and accessible approach to LLM development.
APA
Goel, Y., Sengupta, A., & Chakraborty, T. (2025). Position: Enough of Scaling LLMs! Lets Focus on Downscaling. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:81375-81392. Available from https://proceedings.mlr.press/v267/goel25c.html.