SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software Engineering?

Samuel Miserendino, Michele Wang, Tejal Patwardhan, Johannes Heidecke
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:44412-44450, 2025.

Abstract

We introduce SWE-Lancer, a benchmark of over 1,400 freelance software engineering tasks from Upwork, valued at $1 million USD total in real-world payouts. SWE-Lancer encompasses both independent engineering tasks — ranging from $50 bug fixes to $32,000 feature implementations — and managerial tasks, where models choose between technical implementation proposals. Independent tasks are graded with end-to-end tests triple-verified by experienced software engineers, while managerial decisions are assessed against the choices of the original hired engineering managers. We evaluate model performance and find that frontier models are still unable to solve the majority of tasks. To facilitate future research, we open-source a unified Docker image and a public evaluation split. By mapping model performance to monetary value, we hope SWE-Lancer enables greater research into the economic impact of AI model development.

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-miserendino25a,
  title     = {{SWE}-Lancer: Can Frontier {LLM}s Earn \$1 Million from Real-World Freelance Software Engineering?},
  author    = {Miserendino, Samuel and Wang, Michele and Patwardhan, Tejal and Heidecke, Johannes},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {44412--44450},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/miserendino25a/miserendino25a.pdf},
  url       = {https://proceedings.mlr.press/v267/miserendino25a.html},
  abstract  = {We introduce SWE-Lancer, a benchmark of over 1,400 freelance software engineering tasks from Upwork, valued at \$1 million USD total in real-world payouts. SWE-Lancer encompasses both independent engineering tasks — ranging from \$50 bug fixes to \$32,000 feature implementations — and managerial tasks, where models choose between technical implementation proposals. Independent tasks are graded with end-to-end tests triple-verified by experienced software engineers, while managerial decisions are assessed against the choices of the original hired engineering managers. We evaluate model performance and find that frontier models are still unable to solve the majority of tasks. To facilitate future research, we open-source a unified Docker image and a public evaluation split. By mapping model performance to monetary value, we hope SWE-Lancer enables greater research into the economic impact of AI model development.}
}
Endnote
%0 Conference Paper
%T SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software Engineering?
%A Samuel Miserendino
%A Michele Wang
%A Tejal Patwardhan
%A Johannes Heidecke
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-miserendino25a
%I PMLR
%P 44412--44450
%U https://proceedings.mlr.press/v267/miserendino25a.html
%V 267
%X We introduce SWE-Lancer, a benchmark of over 1,400 freelance software engineering tasks from Upwork, valued at $1 million USD total in real-world payouts. SWE-Lancer encompasses both independent engineering tasks — ranging from $50 bug fixes to $32,000 feature implementations — and managerial tasks, where models choose between technical implementation proposals. Independent tasks are graded with end-to-end tests triple-verified by experienced software engineers, while managerial decisions are assessed against the choices of the original hired engineering managers. We evaluate model performance and find that frontier models are still unable to solve the majority of tasks. To facilitate future research, we open-source a unified Docker image and a public evaluation split. By mapping model performance to monetary value, we hope SWE-Lancer enables greater research into the economic impact of AI model development.
APA
Miserendino, S., Wang, M., Patwardhan, T. & Heidecke, J. (2025). SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software Engineering? Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:44412-44450. Available from https://proceedings.mlr.press/v267/miserendino25a.html.