Scaling Laws for Pre-training Agents and World Models

Tim Pearce, Tabish Rashid, David Bignell, Raluca Georgescu, Sam Devlin, Katja Hofmann
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:48542-48562, 2025.

Abstract

The performance of embodied agents has been shown to improve by increasing model parameters, dataset size, and compute. This has been demonstrated in domains from robotics to video games, when generative learning objectives on offline datasets (pre-training) are used to model an agent’s behavior (imitation learning) or their environment (world modeling). This paper characterizes the role of scale in these tasks more precisely. Going beyond the simple intuition that ‘bigger is better’, we show that the same types of power laws found in language modeling also arise in world modeling and imitation learning (e.g. between loss and optimal model size). However, the coefficients of these laws are heavily influenced by the tokenizer, task & architecture – this has important implications on the optimal sizing of models and data.
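For orientation, the 'power laws' referenced above typically take the form used in language-modeling scaling studies (e.g. Hoffmann et al., 2022). The sketch below is a generic parameterization from that literature, not necessarily the exact form fitted in this paper, whose coefficients the authors show depend on tokenizer, task and architecture:

L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}, \qquad N_{\mathrm{opt}}(C) \propto C^{a}, \quad D_{\mathrm{opt}}(C) \propto C^{b}

Here N is the number of model parameters, D the dataset size (in tokens), C \approx 6ND the training compute, and E, A, B, \alpha, \beta (and the exponents a, b) are fitted constants; minimizing L under a fixed compute budget C yields the loss-versus-optimal-model-size relations of the kind the abstract describes.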

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-pearce25a,
  title     = {Scaling Laws for Pre-training Agents and World Models},
  author    = {Pearce, Tim and Rashid, Tabish and Bignell, David and Georgescu, Raluca and Devlin, Sam and Hofmann, Katja},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {48542--48562},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/pearce25a/pearce25a.pdf},
  url       = {https://proceedings.mlr.press/v267/pearce25a.html},
  abstract  = {The performance of embodied agents has been shown to improve by increasing model parameters, dataset size, and compute. This has been demonstrated in domains from robotics to video games, when generative learning objectives on offline datasets (pre-training) are used to model an agent’s behavior (imitation learning) or their environment (world modeling). This paper characterizes the role of scale in these tasks more precisely. Going beyond the simple intuition that ‘bigger is better’, we show that the same types of power laws found in language modeling also arise in world modeling and imitation learning (e.g. between loss and optimal model size). However, the coefficients of these laws are heavily influenced by the tokenizer, task & architecture – this has important implications on the optimal sizing of models and data.}
}
Endnote
%0 Conference Paper
%T Scaling Laws for Pre-training Agents and World Models
%A Tim Pearce
%A Tabish Rashid
%A David Bignell
%A Raluca Georgescu
%A Sam Devlin
%A Katja Hofmann
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-pearce25a
%I PMLR
%P 48542--48562
%U https://proceedings.mlr.press/v267/pearce25a.html
%V 267
%X The performance of embodied agents has been shown to improve by increasing model parameters, dataset size, and compute. This has been demonstrated in domains from robotics to video games, when generative learning objectives on offline datasets (pre-training) are used to model an agent’s behavior (imitation learning) or their environment (world modeling). This paper characterizes the role of scale in these tasks more precisely. Going beyond the simple intuition that ‘bigger is better’, we show that the same types of power laws found in language modeling also arise in world modeling and imitation learning (e.g. between loss and optimal model size). However, the coefficients of these laws are heavily influenced by the tokenizer, task & architecture – this has important implications on the optimal sizing of models and data.
APA
Pearce, T., Rashid, T., Bignell, D., Georgescu, R., Devlin, S. & Hofmann, K. (2025). Scaling Laws for Pre-training Agents and World Models. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:48542-48562. Available from https://proceedings.mlr.press/v267/pearce25a.html.
