Fleet of Agents: Coordinated Problem Solving with Large Language Models

Lars Henning Klein, Nearchos Potamitis, Roland Aydin, Robert West, Caglar Gulcehre, Akhil Arora
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:30986-31019, 2025.

Abstract

While numerous frameworks have been developed to enhance the reasoning abilities of large language models (LLMs), there is a scarcity of methods that effectively balance the trade-off between cost and quality. In this paper, we introduce Fleet of Agents (FoA), a novel and intuitive yet principled framework utilizing LLMs as agents to navigate through dynamic tree searches, employing a genetic-type particle filtering approach. FoA spawns a multitude of agents, each exploring the search space autonomously, followed by a selection phase where resampling based on a heuristic value function optimizes the balance between exploration and exploitation. This mechanism enables dynamic branching, adapting the exploration strategy based on discovered solutions. We conduct extensive experiments on four benchmark tasks, ‘Game of 24’, ‘Mini-Crosswords’, ‘WebShop’, and ‘SciBench’, utilizing four different LLMs, GPT-3.5, GPT-4, LLaMA3.2-11B, and LLaMA3.2-90B. On average across all tasks and LLMs, FoA obtains an absolute quality improvement of $\simeq 5\%$ while requiring only $\simeq 35\%$ of the cost of previous SOTA methods. Notably, our analyses reveal that (1) FoA achieves the best cost-quality trade-off among all benchmarked methods, and (2) FoA + LLaMA3.2-11B surpasses the LLaMA3.2-90B model. FoA is publicly available at https://github.com/au-clan/FoA.
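
To make the explore-then-resample mechanism described above concrete, the following is a minimal Python sketch of a genetic-type particle-filtering loop over agents. It is an illustration, not the authors' implementation (see the linked repository for that); propose_step, value, and is_solution are hypothetical placeholders standing in for the LLM-driven agent step, the heuristic value function, and the task's success check.

    # Hypothetical sketch of value-weighted resampling over a fleet of agents.
    # Not the FoA reference implementation; propose_step / value / is_solution
    # are placeholders for LLM calls and the task-specific success test.
    import random

    def fleet_search(initial_state, propose_step, value, is_solution,
                     n_agents=5, n_steps=10):
        states = [initial_state] * n_agents
        for _ in range(n_steps):
            # Exploration: each agent independently extends its own trajectory.
            states = [propose_step(s) for s in states]
            for s in states:
                if is_solution(s):
                    return s
            # Selection: resample agents in proportion to a heuristic value,
            # duplicating promising branches and dropping weak ones.
            weights = [value(s) for s in states]
            if sum(weights) > 0:
                states = random.choices(states, weights=weights, k=n_agents)
        # No solution found within the budget: return the highest-value state.
        return max(states, key=value)
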

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-klein25a,
  title = {Fleet of Agents: Coordinated Problem Solving with Large Language Models},
  author = {Klein, Lars Henning and Potamitis, Nearchos and Aydin, Roland and West, Robert and Gulcehre, Caglar and Arora, Akhil},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages = {30986--31019},
  year = {2025},
  editor = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume = {267},
  series = {Proceedings of Machine Learning Research},
  month = {13--19 Jul},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/klein25a/klein25a.pdf},
  url = {https://proceedings.mlr.press/v267/klein25a.html},
  abstract = {While numerous frameworks have been developed to enhance the reasoning abilities of large language models (LLMs), there is a scarcity of methods that effectively balance the trade-off between cost and quality. In this paper, we introduce Fleet of Agents (FoA), a novel and intuitive yet principled framework utilizing LLMs as agents to navigate through dynamic tree searches, employing a genetic-type particle filtering approach. FoA spawns a multitude of agents, each exploring the search space autonomously, followed by a selection phase where resampling based on a heuristic value function optimizes the balance between exploration and exploitation. This mechanism enables dynamic branching, adapting the exploration strategy based on discovered solutions. We conduct extensive experiments on four benchmark tasks, ‘Game of 24’, ‘Mini-Crosswords’, ‘WebShop’, and ‘SciBench’, utilizing four different LLMs, GPT-3.5, GPT-4, LLaMA3.2-11B, and LLaMA3.2-90B. On average across all tasks and LLMs, FoA obtains an absolute quality improvement of $\simeq 5\%$ while requiring only $\simeq 35\%$ of the cost of previous SOTA methods. Notably, our analyses reveal that (1) FoA achieves the best cost-quality trade-off among all benchmarked methods, and (2) FoA + LLaMA3.2-11B surpasses the LLaMA3.2-90B model. FoA is publicly available at https://github.com/au-clan/FoA.}
}
Endnote
%0 Conference Paper
%T Fleet of Agents: Coordinated Problem Solving with Large Language Models
%A Lars Henning Klein
%A Nearchos Potamitis
%A Roland Aydin
%A Robert West
%A Caglar Gulcehre
%A Akhil Arora
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-klein25a
%I PMLR
%P 30986--31019
%U https://proceedings.mlr.press/v267/klein25a.html
%V 267
%X While numerous frameworks have been developed to enhance the reasoning abilities of large language models (LLMs), there is a scarcity of methods that effectively balance the trade-off between cost and quality. In this paper, we introduce Fleet of Agents (FoA), a novel and intuitive yet principled framework utilizing LLMs as agents to navigate through dynamic tree searches, employing a genetic-type particle filtering approach. FoA spawns a multitude of agents, each exploring the search space autonomously, followed by a selection phase where resampling based on a heuristic value function optimizes the balance between exploration and exploitation. This mechanism enables dynamic branching, adapting the exploration strategy based on discovered solutions. We conduct extensive experiments on four benchmark tasks, ‘Game of 24’, ‘Mini-Crosswords’, ‘WebShop’, and ‘SciBench’, utilizing four different LLMs, GPT-3.5, GPT-4, LLaMA3.2-11B, and LLaMA3.2-90B. On average across all tasks and LLMs, FoA obtains an absolute quality improvement of $\simeq 5\%$ while requiring only $\simeq 35\%$ of the cost of previous SOTA methods. Notably, our analyses reveal that (1) FoA achieves the best cost-quality trade-off among all benchmarked methods, and (2) FoA + LLaMA3.2-11B surpasses the LLaMA3.2-90B model. FoA is publicly available at https://github.com/au-clan/FoA.
APA
Klein, L.H., Potamitis, N., Aydin, R., West, R., Gulcehre, C. & Arora, A. (2025). Fleet of Agents: Coordinated Problem Solving with Large Language Models. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:30986-31019. Available from https://proceedings.mlr.press/v267/klein25a.html.