TroVE: Inducing Verifiable and Efficient Toolboxes for Solving Programmatic Tasks

Zhiruo Wang, Graham Neubig, Daniel Fried
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:51177-51191, 2024.

Abstract

Language models (LMs) can solve tasks such as answering questions about tables or images by writing programs. However, using primitive functions often leads to verbose and error-prone programs, and higher-level functions require expert design. To enable better solutions without human labor, we ask code LMs to curate reusable high-level functions, and use them to write solutions. We present TROVE, a training-free method of inducing a verifiable and efficient toolbox of functions, by generating via using, growing, and periodically trimming the toolbox. On 11 datasets from math, table question answering, and image reasoning tasks, TROVE consistently yields simpler solutions with higher accuracy than baselines using CodeLLaMa and previous methods using GPT, while using 79-98% smaller toolboxes. TROVE further enables 31% faster and 13% more accurate human verification than baselines. With the same pipeline, it creates diverse functions for varied tasks and datasets, providing insights into their individual characteristics.
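The grow-and-trim loop described in the abstract can be illustrated with a minimal, hypothetical sketch. This is not the authors' implementation: the `solve` callback stands in for the LM's generation step (which may create new helper functions or reuse existing ones), and the usage-count trim rule and all parameter names are assumptions for illustration only.

```python
# Hypothetical sketch of a grow-and-trim toolbox induction loop
# (illustrative only; not the TroVE authors' code).
from collections import Counter

def induce_toolbox(tasks, solve, trim_every=3, min_uses=2):
    """Maintain a toolbox of named helper functions across tasks.

    `solve(task, toolbox)` is assumed to return
    (answer, tools_used, new_tools), standing in for an LM that can
    either reuse existing tools or create new ones while solving.
    """
    toolbox = {}       # name -> function source
    usage = Counter()  # how often each tool appears in kept solutions
    answers = []
    for i, task in enumerate(tasks, 1):
        answer, used, created = solve(task, toolbox)
        toolbox.update(created)               # grow: keep new helpers
        usage.update(used + list(created))
        answers.append(answer)
        if i % trim_every == 0:               # periodically trim
            toolbox = {name: src for name, src in toolbox.items()
                       if usage[name] >= min_uses}
    return answers, toolbox
```

A toy run: a `solve` stub that creates an `add` helper on the first task and reuses it afterwards would leave `add` in the final toolbox, while a helper created once and never reused would be trimmed at the next trim step.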

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-wang24az,
  title     = {{T}ro{VE}: Inducing Verifiable and Efficient Toolboxes for Solving Programmatic Tasks},
  author    = {Wang, Zhiruo and Neubig, Graham and Fried, Daniel},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {51177--51191},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/wang24az/wang24az.pdf},
  url       = {https://proceedings.mlr.press/v235/wang24az.html},
  abstract  = {Language models (LMs) can solve tasks such as answering questions about tables or images by writing programs. However, using primitive functions often leads to verbose and error-prone programs, and higher-level functions require expert design. To enable better solutions without human labor, we ask code LMs to curate reusable high-level functions, and use them to write solutions. We present TROVE, a training-free method of inducing a verifiable and efficient toolbox of functions, by generating via using, growing, and periodically trimming the toolbox. On 11 datasets from math, table question answering, and image reasoning tasks, TROVE consistently yields simpler solutions with higher accuracy than baselines using CodeLLaMa and previous methods using GPT, while using 79-98% smaller toolboxes. TROVE further enables 31% faster and 13% more accurate human verification than baselines. With the same pipeline, it creates diverse functions for varied tasks and datasets, providing insights into their individual characteristics.}
}
Endnote
%0 Conference Paper
%T TroVE: Inducing Verifiable and Efficient Toolboxes for Solving Programmatic Tasks
%A Zhiruo Wang
%A Graham Neubig
%A Daniel Fried
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-wang24az
%I PMLR
%P 51177--51191
%U https://proceedings.mlr.press/v235/wang24az.html
%V 235
%X Language models (LMs) can solve tasks such as answering questions about tables or images by writing programs. However, using primitive functions often leads to verbose and error-prone programs, and higher-level functions require expert design. To enable better solutions without human labor, we ask code LMs to curate reusable high-level functions, and use them to write solutions. We present TROVE, a training-free method of inducing a verifiable and efficient toolbox of functions, by generating via using, growing, and periodically trimming the toolbox. On 11 datasets from math, table question answering, and image reasoning tasks, TROVE consistently yields simpler solutions with higher accuracy than baselines using CodeLLaMa and previous methods using GPT, while using 79-98% smaller toolboxes. TROVE further enables 31% faster and 13% more accurate human verification than baselines. With the same pipeline, it creates diverse functions for varied tasks and datasets, providing insights into their individual characteristics.
APA
Wang, Z., Neubig, G. & Fried, D. (2024). TroVE: Inducing Verifiable and Efficient Toolboxes for Solving Programmatic Tasks. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:51177-51191. Available from https://proceedings.mlr.press/v235/wang24az.html.