Regress, Don’t Guess: A Regression-like Loss on Number Tokens for Language Models

Jonas Zausinger, Lars Pennig, Anamarija Kozina, Sean Sdahl, Julian Sikora, Adrian Dendorfer, Timofey Kuznetsov, Mohamad Hagog, Nina Wiedemann, Kacper Chlodny, Vincent Limbach, Anna Ketteler, Thorben Prein, Vishwa Mohan Singh, Michael Danziger, Jannis Born
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:73995-74017, 2025.

Abstract

While language models have exceptional capabilities at text generation, they lack a natural inductive bias for emitting numbers and thus struggle in tasks involving quantitative reasoning, especially arithmetic. One fundamental limitation is the nature of the cross-entropy (CE) loss, which assumes a nominal scale and thus cannot convey proximity between generated number tokens. In response, we present a regression-like loss that operates purely at the token level. Our proposed Number Token Loss (NTL) comes in two flavors and minimizes either the $\mathcal{L}_p$ norm or the Wasserstein distance between the numerical values of the real and predicted number tokens. NTL can easily be added to any language model, extending the CE objective during training without runtime overhead. We evaluate the proposed scheme on various mathematical datasets and find that it consistently improves performance on math-related tasks. In a direct comparison on a regression task, we find that NTL can match the performance of a regression head, despite operating at the token level. Finally, we scale NTL up to 3B-parameter models and observe improved performance, demonstrating its potential for seamless integration into LLMs. We hope to inspire LLM developers to improve their pretraining objectives, and we distribute NTL as a minimalistic and lightweight PyPI package, ntloss: https://ibm.biz/ntl-pypi-repo. Development code for full paper reproduction is available separately.
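
To make the loss concrete, below is a minimal, self-contained PyTorch sketch of the $\mathcal{L}_p$ flavor of NTL for a vocabulary whose number tokens are the single digits 0-9. This is an illustration only and not the API of the ntloss package: the function name number_token_loss, the digit_token_ids argument, and the assumption of single-digit number tokenization are ours for the sake of the example.

```python
# Minimal, illustrative sketch of the L_p flavor of a Number Token Loss (NTL).
# This is NOT the API of the `ntloss` PyPI package; the function name,
# the `digit_token_ids` argument, and the single-digit tokenization are
# assumptions made purely for illustration.
import torch
import torch.nn.functional as F


def number_token_loss(logits, targets, digit_token_ids, p=1):
    """Penalize |E[digit value] - true digit value|^p at digit positions.

    logits:          (batch, seq, vocab) raw model outputs
    targets:         (batch, seq) ground-truth token ids
    digit_token_ids: length-10 tensor with the vocab ids of the digits 0..9
    """
    digit_values = torch.arange(10.0, device=logits.device)  # numeric value of each digit token

    # positions whose ground-truth token is a digit; other positions contribute nothing
    is_digit = torch.isin(targets, digit_token_ids)
    if not is_digit.any():
        return logits.new_zeros(())

    # model's distribution restricted (and renormalized) to the digit tokens
    digit_probs = F.softmax(logits[..., digit_token_ids], dim=-1)  # (batch, seq, 10)

    # expected numeric value under the model vs. the true numeric value
    expected = (digit_probs * digit_values).sum(dim=-1)            # (batch, seq)
    true_vals = (targets.unsqueeze(-1) == digit_token_ids).float() @ digit_values

    return (expected - true_vals).abs().pow(p)[is_digit].mean()
```

During training, such a term would simply be added to the standard cross-entropy loss with a weighting factor (e.g. loss = ce_loss + lam * number_token_loss(...)). The Wasserstein flavor instead compares cumulative distributions over the ordered digit tokens; either way the loss costs only a few tensor operations per step, consistent with the abstract's claim of no runtime overhead.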

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-zausinger25a,
  title     = {Regress, Don’t Guess: A Regression-like Loss on Number Tokens for Language Models},
  author    = {Zausinger, Jonas and Pennig, Lars and Kozina, Anamarija and Sdahl, Sean and Sikora, Julian and Dendorfer, Adrian and Kuznetsov, Timofey and Hagog, Mohamad and Wiedemann, Nina and Chlodny, Kacper and Limbach, Vincent and Ketteler, Anna and Prein, Thorben and Singh, Vishwa Mohan and Danziger, Michael and Born, Jannis},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {73995--74017},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/zausinger25a/zausinger25a.pdf},
  url       = {https://proceedings.mlr.press/v267/zausinger25a.html},
  abstract  = {While language models have exceptional capabilities at text generation, they lack a natural inductive bias for emitting numbers and thus struggle in tasks involving quantitative reasoning, especially arithmetic. One fundamental limitation is the nature of the cross-entropy (CE) loss, which assumes a nominal scale and thus cannot convey proximity between generated number tokens. In response, we here present a regression-like loss that operates purely on token level. Our proposed Number Token Loss (NTL) comes in two flavors and minimizes either the $\mathcal{L}_p$ norm or the Wasserstein distance between the numerical values of the real and predicted number tokens. NTL can easily be added to any language model and extend the CE objective during training without runtime overhead. We evaluate the proposed scheme on various mathematical datasets and find that it consistently improves performance in math-related tasks. In a direct comparison on a regression task, we find that NTL can match the performance of a regression head, despite operating on token level. Finally, we scale NTL up to 3B parameter models and observe improved performance, demonstrating its potential for seamless integration into LLMs. We hope to inspire LLM developers to improve their pretraining objectives and distribute NTL as a minimalistic and lightweight PyPI package ntloss: https://ibm.biz/ntl-pypi-repo. Development code for full paper reproduction is available separately.}
}
Endnote
%0 Conference Paper
%T Regress, Don’t Guess: A Regression-like Loss on Number Tokens for Language Models
%A Jonas Zausinger
%A Lars Pennig
%A Anamarija Kozina
%A Sean Sdahl
%A Julian Sikora
%A Adrian Dendorfer
%A Timofey Kuznetsov
%A Mohamad Hagog
%A Nina Wiedemann
%A Kacper Chlodny
%A Vincent Limbach
%A Anna Ketteler
%A Thorben Prein
%A Vishwa Mohan Singh
%A Michael Danziger
%A Jannis Born
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-zausinger25a
%I PMLR
%P 73995--74017
%U https://proceedings.mlr.press/v267/zausinger25a.html
%V 267
%X While language models have exceptional capabilities at text generation, they lack a natural inductive bias for emitting numbers and thus struggle in tasks involving quantitative reasoning, especially arithmetic. One fundamental limitation is the nature of the cross-entropy (CE) loss, which assumes a nominal scale and thus cannot convey proximity between generated number tokens. In response, we here present a regression-like loss that operates purely on token level. Our proposed Number Token Loss (NTL) comes in two flavors and minimizes either the $\mathcal{L}_p$ norm or the Wasserstein distance between the numerical values of the real and predicted number tokens. NTL can easily be added to any language model and extend the CE objective during training without runtime overhead. We evaluate the proposed scheme on various mathematical datasets and find that it consistently improves performance in math-related tasks. In a direct comparison on a regression task, we find that NTL can match the performance of a regression head, despite operating on token level. Finally, we scale NTL up to 3B parameter models and observe improved performance, demonstrating its potential for seamless integration into LLMs. We hope to inspire LLM developers to improve their pretraining objectives and distribute NTL as a minimalistic and lightweight PyPI package ntloss: https://ibm.biz/ntl-pypi-repo. Development code for full paper reproduction is available separately.
APA
Zausinger, J., Pennig, L., Kozina, A., Sdahl, S., Sikora, J., Dendorfer, A., Kuznetsov, T., Hagog, M., Wiedemann, N., Chlodny, K., Limbach, V., Ketteler, A., Prein, T., Singh, V.M., Danziger, M. & Born, J. (2025). Regress, Don’t Guess: A Regression-like Loss on Number Tokens for Language Models. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:73995-74017. Available from https://proceedings.mlr.press/v267/zausinger25a.html.
