Scaling Trends in Language Model Robustness

Nikolaus H. R. Howe, Ian R. Mckenzie, Oskar John Hollinsworth, Michał Zając, Tom Tseng, Aaron David Tucker, Pierre-Luc Bacon, Adam Gleave
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:24080-24138, 2025.

Abstract

Increasing model size has unlocked a dazzling array of capabilities in language models. At the same time, even frontier models remain vulnerable to jailbreaks and prompt injections, despite concerted efforts to make them robust. As both attackers and defenders gain access to more compute, and as models become larger, what will be the effect on robustness? We argue that to answer this question requires a scaling lens, which we adopt in an extensive study of language model robustness across several classification tasks, model families, and adversarial attacks. We find that in the absence of explicit safety training, larger models are not consistently more robust; however, scale improves sample efficiency in adversarial training, though it worsens compute efficiency. Further, we find that increasing attack compute smoothly improves attack success rate against both undefended and adversarially trained models. Finally, after exploring robustness transfer across attacks and threat models, we combine attack and defense scaling rates to study the offense-defense balance. We find that while attack scaling outpaces adversarial training across all models studied, larger adversarially trained models might give defense the advantage in the long run. These results underscore the utility of the scaling lens, and provide a paradigm for evaluating future attacks and defenses on frontier models. Code for this project is available at https://github.com/AlignmentResearch/scaling-llm-robustness-paper.
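
To make the "scaling lens" concrete, here is a minimal sketch of the kind of fit one could use to compare attack and defense scaling rates: a log-odds-versus-log-compute trend line over attack success rates. The numbers, the logistic-in-log-compute functional form, and the fit_scaling_slope helper are illustrative assumptions for exposition, not the paper's actual data or fitting procedure.

import numpy as np

def logit(p):
    """Log-odds transform; maps rates in (0, 1) to the real line."""
    p = np.asarray(p, dtype=float)
    return np.log(p / (1.0 - p))

def fit_scaling_slope(compute, success_rate):
    """Fit logit(attack success rate) as a linear function of log10(compute).

    Returns (slope, intercept); the slope is a crude "scaling rate",
    i.e. log-odds of attack success gained per decade of extra compute.
    """
    slope, intercept = np.polyfit(np.log10(compute), logit(success_rate), deg=1)
    return slope, intercept

# Illustrative numbers only (not from the paper): attack success rate (ASR)
# at several attack-compute budgets, against an undefended model and
# against an adversarially trained one.
attack_compute = np.array([1e15, 1e16, 1e17, 1e18])
asr_undefended = [0.10, 0.30, 0.60, 0.85]
asr_adv_trained = [0.02, 0.06, 0.15, 0.35]

slope_u, _ = fit_scaling_slope(attack_compute, asr_undefended)
slope_a, _ = fit_scaling_slope(attack_compute, asr_adv_trained)
print(f"undefended:            {slope_u:.2f} log-odds per decade of attack compute")
print(f"adversarially trained: {slope_a:.2f} log-odds per decade of attack compute")

Comparing slopes like these for attack compute and adversarial-training compute is one way to reason about the offense-defense balance the abstract describes.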

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-howe25a,
  title     = {Scaling Trends in Language Model Robustness},
  author    = {Howe, Nikolaus H. R. and Mckenzie, Ian R. and Hollinsworth, Oskar John and Zaj\k{a}c, Micha{\l} and Tseng, Tom and Tucker, Aaron David and Bacon, Pierre-Luc and Gleave, Adam},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {24080--24138},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/howe25a/howe25a.pdf},
  url       = {https://proceedings.mlr.press/v267/howe25a.html},
  abstract  = {Increasing model size has unlocked a dazzling array of capabilities in language models. At the same time, even frontier models remain vulnerable to jailbreaks and prompt injections, despite concerted efforts to make them robust. As both attackers and defenders gain access to more compute, and as models become larger, what will be the effect on robustness? We argue that to answer this question requires a scaling lens, which we adopt in an extensive study of language model robustness across several classification tasks, model families, and adversarial attacks. We find that in the absence of explicit safety training, larger models are not consistently more robust; however, scale improves sample efficiency in adversarial training, though it worsens compute efficiency. Further, we find that increasing attack compute smoothly improves attack success rate against both undefended and adversarially trained models. Finally, after exploring robustness transfer across attacks and threat models, we combine attack and defense scaling rates to study the offense-defense balance. We find that while attack scaling outpaces adversarial training across all models studied, larger adversarially trained models might give defense the advantage in the long run. These results underscore the utility of the scaling lens, and provide a paradigm for evaluating future attacks and defenses on frontier models. Code for this project is available at https://github.com/AlignmentResearch/scaling-llm-robustness-paper.}
}
Endnote
%0 Conference Paper
%T Scaling Trends in Language Model Robustness
%A Nikolaus H. R. Howe
%A Ian R. Mckenzie
%A Oskar John Hollinsworth
%A Michał Zając
%A Tom Tseng
%A Aaron David Tucker
%A Pierre-Luc Bacon
%A Adam Gleave
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-howe25a
%I PMLR
%P 24080--24138
%U https://proceedings.mlr.press/v267/howe25a.html
%V 267
%X Increasing model size has unlocked a dazzling array of capabilities in language models. At the same time, even frontier models remain vulnerable to jailbreaks and prompt injections, despite concerted efforts to make them robust. As both attackers and defenders gain access to more compute, and as models become larger, what will be the effect on robustness? We argue that to answer this question requires a scaling lens, which we adopt in an extensive study of language model robustness across several classification tasks, model families, and adversarial attacks. We find that in the absence of explicit safety training, larger models are not consistently more robust; however, scale improves sample efficiency in adversarial training, though it worsens compute efficiency. Further, we find that increasing attack compute smoothly improves attack success rate against both undefended and adversarially trained models. Finally, after exploring robustness transfer across attacks and threat models, we combine attack and defense scaling rates to study the offense-defense balance. We find that while attack scaling outpaces adversarial training across all models studied, larger adversarially trained models might give defense the advantage in the long run. These results underscore the utility of the scaling lens, and provide a paradigm for evaluating future attacks and defenses on frontier models. Code for this project is available at https://github.com/AlignmentResearch/scaling-llm-robustness-paper.
APA
Howe, N.H.R., Mckenzie, I.R., Hollinsworth, O.J., Zając, M., Tseng, T., Tucker, A.D., Bacon, P. & Gleave, A. (2025). Scaling Trends in Language Model Robustness. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:24080-24138. Available from https://proceedings.mlr.press/v267/howe25a.html.
