Offline Training of Language Model Agents with Functions as Learnable Weights

Shaokun Zhang, Jieyu Zhang, Jiale Liu, Linxin Song, Chi Wang, Ranjay Krishna, Qingyun Wu
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:60315-60335, 2024.

Abstract

Researchers and practitioners have recently reframed powerful Large Language Models (LLMs) as agents, enabling them to automate complex tasks largely through the use of specialized functions. To facilitate the development of LLM agents, we present a novel paradigm for training LLM agents without modifying the LLM weights, which is particularly useful when the LLM weights are inaccessible or difficult to modify. Inspired by how humans continuously forge tools to adapt to real-world tasks, rather than changing our biological structure to fit a static set of tools, we propose to progressively forge an agent’s functions to better solve downstream tasks instead of modifying the LLM weights. By treating the functions as learnable ‘agent parameters’ and leveraging the fundamental idea of model training in artificial intelligence, we develop AgentOptimizer, which employs the LLM to update agents’ functions, and we devise an agent training algorithm with two strategies, roll-back and early-stop, to streamline the training process. With extensive experiments, we show that this agent training paradigm can significantly improve the performance of representative LLM agents on a variety of downstream tasks. We also study the behavior of agent training with respect to aspects such as the learning curve and domain transferability.
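The training loop the abstract describes can be summarized as follows: the agent's function set plays the role of model weights, an LLM-backed AgentOptimizer proposes updates to it, and roll-back and early-stop guard against degrading updates. Below is a minimal, self-contained Python sketch of that loop under those assumptions; every name here (Agent, AgentOptimizer, propose_update, evaluate) is a hypothetical stand-in inferred from the abstract, not the paper's actual interface.

# Illustrative sketch of the agent-training loop described in the abstract.
# All names and signatures are hypothetical stand-ins; see the paper's PDF
# for the actual AgentOptimizer design.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

@dataclass
class Agent:
    # The learnable "parameters": a registry of callable tools the LLM may invoke.
    functions: Dict[str, Callable] = field(default_factory=dict)

class AgentOptimizer:
    """Hypothetical optimizer: per the abstract, an LLM inspects the agent's
    behavior and proposes updated functions. Stubbed here as a no-op."""
    def propose_update(self, functions: Dict[str, Callable],
                       history: List[Tuple[str, str]]) -> Dict[str, Callable]:
        # A real implementation would prompt an LLM with the current function
        # set and execution traces, asking it to add/revise/remove functions.
        return dict(functions)

def evaluate(agent: Agent, tasks: List[str]) -> float:
    # Placeholder metric, e.g. fraction of training tasks solved.
    return 0.0

def train(agent: Agent, optimizer: AgentOptimizer, tasks: List[str],
          epochs: int = 10, patience: int = 3) -> Agent:
    best_fns = dict(agent.functions)
    best_score, stale = evaluate(agent, tasks), 0
    for _ in range(epochs):
        history = [(t, "execution trace for " + t) for t in tasks]  # stub
        agent.functions = optimizer.propose_update(agent.functions, history)
        score = evaluate(agent, tasks)
        if score > best_score:                 # keep the improved function set
            best_fns, best_score, stale = dict(agent.functions), score, 0
        else:                                  # roll-back: discard the update
            agent.functions, stale = best_fns, stale + 1
        if stale >= patience:                  # early-stop: no recent progress
            break
    agent.functions = best_fns
    return agent

One natural reading, reflected in the sketch, is that roll-back discards an update that lowered training performance, while early-stop halts training once several consecutive updates fail to improve it.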

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-zhang24cd,
  title     = {Offline Training of Language Model Agents with Functions as Learnable Weights},
  author    = {Zhang, Shaokun and Zhang, Jieyu and Liu, Jiale and Song, Linxin and Wang, Chi and Krishna, Ranjay and Wu, Qingyun},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {60315--60335},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhang24cd/zhang24cd.pdf},
  url       = {https://proceedings.mlr.press/v235/zhang24cd.html}
}
Endnote
%0 Conference Paper
%T Offline Training of Language Model Agents with Functions as Learnable Weights
%A Shaokun Zhang
%A Jieyu Zhang
%A Jiale Liu
%A Linxin Song
%A Chi Wang
%A Ranjay Krishna
%A Qingyun Wu
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-zhang24cd
%I PMLR
%P 60315--60335
%U https://proceedings.mlr.press/v235/zhang24cd.html
%V 235
APA
Zhang, S., Zhang, J., Liu, J., Song, L., Wang, C., Krishna, R. & Wu, Q. (2024). Offline Training of Language Model Agents with Functions as Learnable Weights. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:60315-60335. Available from https://proceedings.mlr.press/v235/zhang24cd.html.
