Speculate, then Collaborate: Fusing Knowledge of Language Models during Decoding

Ziyao Wang, Muneeza Azmat, Ang Li, Raya Horesh, Mikhail Yurochkin
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:62345-62356, 2025.

Abstract

Large Language Models (LLMs) often excel in specific domains but fall short in others due to the limitations of their training. Thus, enabling LLMs to solve problems collaboratively by integrating their complementary knowledge promises to improve their performance across domains. To realize this potential, we introduce a novel Collaborative Speculative Decoding (CoSD) algorithm that enables efficient LLM knowledge fusion at test time without requiring additional model training. CoSD employs a draft model to generate initial sequences and an easy-to-learn rule or decision tree to decide when to invoke an assistant model to improve these drafts. CoSD not only enhances knowledge fusion but also improves inference efficiency, is transferable across domains, and offers greater explainability. Experimental results demonstrate that CoSD improves accuracy by up to 10% across benchmarks compared to existing methods, providing a scalable and effective solution for LLM-based applications. Our code has been released at https://github.com/ATP-1010/CoSD.
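To make the collaboration scheme concrete, the sketch below illustrates the rule-based variant described in the abstract: a small draft model speculates a block of tokens, and a simple confidence rule decides, token by token, whether to keep the draft or defer to the assistant model and re-draft from the corrected prefix. The model names, the acceptance threshold, and all helper names are illustrative assumptions, not the released CoSD implementation; see the linked repository for the authors' actual code.

# Minimal sketch of rule-based collaborative speculative decoding.
# Assumes two Hugging Face causal LMs that share a tokenizer; model names,
# gamma, and the threshold are placeholders, not the paper's settings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

DRAFT_NAME = "Qwen/Qwen2.5-0.5B-Instruct"    # stand-in draft model (assumption)
ASSIST_NAME = "Qwen/Qwen2.5-1.5B-Instruct"   # stand-in assistant model (assumption)

tok = AutoTokenizer.from_pretrained(DRAFT_NAME)
draft = AutoModelForCausalLM.from_pretrained(DRAFT_NAME)
assistant = AutoModelForCausalLM.from_pretrained(ASSIST_NAME)

@torch.no_grad()
def cosd_generate(prompt, max_new_tokens=64, gamma=4, threshold=0.3):
    ids = tok(prompt, return_tensors="pt").input_ids
    for _ in range(max_new_tokens // gamma):
        # 1) Draft model speculates a block of gamma tokens greedily.
        draft_out = draft.generate(ids, max_new_tokens=gamma, do_sample=False)
        draft_tokens = draft_out[0, ids.shape[1]:]

        # 2) Score the drafted positions with both models in one pass each.
        #    Logits at position i-1 predict the token at position i.
        draft_probs = torch.softmax(
            draft(draft_out).logits[0, ids.shape[1] - 1:-1], dim=-1)
        assist_probs = torch.softmax(
            assistant(draft_out).logits[0, ids.shape[1] - 1:-1], dim=-1)

        # 3) Rule: keep a drafted token if the draft model is confident enough;
        #    otherwise substitute the assistant's token and stop the block so
        #    drafting restarts from the corrected prefix.
        accepted = []
        for t, d_p, a_p in zip(draft_tokens, draft_probs, assist_probs):
            if d_p[t] >= threshold:              # simple confidence rule (assumption)
                accepted.append(t.item())
            else:
                accepted.append(a_p.argmax().item())
                break
        ids = torch.cat([ids, torch.tensor([accepted])], dim=-1)
        if tok.eos_token_id in accepted:
            break
    return tok.decode(ids[0], skip_special_tokens=True)

print(cosd_generate("Question: What is 17 * 23? Answer:"))

The decision-tree variant mentioned in the abstract would replace the hard threshold in step 3 with a small learned classifier over features such as the two models' token probabilities; the hard threshold is used here only to keep the sketch self-contained.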

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-wang25e,
  title     = {Speculate, then Collaborate: Fusing Knowledge of Language Models during Decoding},
  author    = {Wang, Ziyao and Azmat, Muneeza and Li, Ang and Horesh, Raya and Yurochkin, Mikhail},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {62345--62356},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/wang25e/wang25e.pdf},
  url       = {https://proceedings.mlr.press/v267/wang25e.html},
  abstract  = {Large Language Models (LLMs) often excel in specific domains but fall short in others due to the limitations of their training. Thus, enabling LLMs to solve problems collaboratively by integrating their complementary knowledge promises to improve their performance across domains. To realize this potential, we introduce a novel Collaborative Speculative Decoding (CoSD) algorithm that enables efficient LLM knowledge fusion at test time without requiring additional model training. CoSD employs a draft model to generate initial sequences and an easy-to-learn rule or decision tree to decide when to invoke an assistant model to improve these drafts. CoSD not only enhances knowledge fusion but also improves inference efficiency, is transferable across domains, and offers greater explainability. Experimental results demonstrate that CoSD improves accuracy by up to 10% across benchmarks compared to existing methods, providing a scalable and effective solution for LLM-based applications. Our code has been released at https://github.com/ATP-1010/CoSD.}
}
Endnote
%0 Conference Paper
%T Speculate, then Collaborate: Fusing Knowledge of Language Models during Decoding
%A Ziyao Wang
%A Muneeza Azmat
%A Ang Li
%A Raya Horesh
%A Mikhail Yurochkin
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-wang25e
%I PMLR
%P 62345--62356
%U https://proceedings.mlr.press/v267/wang25e.html
%V 267
%X Large Language Models (LLMs) often excel in specific domains but fall short in others due to the limitations of their training. Thus, enabling LLMs to solve problems collaboratively by integrating their complementary knowledge promises to improve their performance across domains. To realize this potential, we introduce a novel Collaborative Speculative Decoding (CoSD) algorithm that enables efficient LLM knowledge fusion at test time without requiring additional model training. CoSD employs a draft model to generate initial sequences and an easy-to-learn rule or decision tree to decide when to invoke an assistant model to improve these drafts. CoSD not only enhances knowledge fusion but also improves inference efficiency, is transferable across domains, and offers greater explainability. Experimental results demonstrate that CoSD improves accuracy by up to 10% across benchmarks compared to existing methods, providing a scalable and effective solution for LLM-based applications. Our code has been released at https://github.com/ATP-1010/CoSD.
APA
Wang, Z., Azmat, M., Li, A., Horesh, R. & Yurochkin, M. (2025). Speculate, then Collaborate: Fusing Knowledge of Language Models during Decoding. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:62345-62356. Available from https://proceedings.mlr.press/v267/wang25e.html.
