Faster WIND: Accelerating Iterative Best-of-$N$ Distillation for LLM Alignment

Tong Yang, Jincheng Mei, Hanjun Dai, Zixin Wen, Shicong Cen, Dale Schuurmans, Yuejie Chi, Bo Dai
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:4537-4545, 2025.

Abstract

Recent advances in aligning large language models with human preferences have corroborated the growing importance of best-of-$N$ distillation (BOND). However, the iterative BOND algorithm is prohibitively expensive in practice due to its sample and computational inefficiency. This paper addresses the problem by revealing a unified game-theoretic connection between iterative BOND and self-play alignment, which unifies seemingly disparate algorithmic paradigms. Based on this connection, we establish a novel framework, \textbf{WIN} rate \textbf{D}ominance (WIND), with a series of efficient algorithms for regularized win rate dominance optimization that approximates iterative BOND in the parameter space. We provide a provable sample-efficiency guarantee for one of the WIND variants with the square loss objective. The experimental results confirm that our algorithm not only accelerates computation, but also achieves superior sample efficiency compared to existing methods.
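For readers unfamiliar with the baseline the paper distills, best-of-$N$ sampling draws $N$ candidate responses from a policy and keeps the one a reward model scores highest. The sketch below is purely illustrative (it is not the paper's WIND algorithm); `generate` and `reward` are hypothetical stand-ins for a language-model policy and a reward model.

```python
import random

def generate(prompt: str, seed: int) -> str:
    # Stand-in policy: produces a toy "response" per seed.
    rng = random.Random((prompt, seed).__hash__())
    return f"response-{rng.randint(0, 9)}"

def reward(prompt: str, response: str) -> float:
    # Stand-in reward model: scores a response; here, a toy numeric score.
    return float(response.rsplit("-", 1)[1])

def best_of_n(prompt: str, n: int) -> str:
    # Draw N candidates, return the highest-reward one.
    candidates = [generate(prompt, seed) for seed in range(n)]
    return max(candidates, key=lambda r: reward(prompt, r))

if __name__ == "__main__":
    print(best_of_n("hello", 8))
```

The per-query cost of this baseline grows linearly in $N$ at inference time, which is the inefficiency that BOND-style distillation (and, in turn, WIND's acceleration of it) aims to remove by pushing the best-of-$N$ behavior into the policy's parameters.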

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-yang25e,
  title     = {Faster WIND: Accelerating Iterative Best-of-$N$ Distillation for LLM Alignment},
  author    = {Yang, Tong and Mei, Jincheng and Dai, Hanjun and Wen, Zixin and Cen, Shicong and Schuurmans, Dale and Chi, Yuejie and Dai, Bo},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {4537--4545},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/yang25e/yang25e.pdf},
  url       = {https://proceedings.mlr.press/v258/yang25e.html},
  abstract  = {Recent advances in aligning large language models with human preferences have corroborated the growing importance of best-of-$N$ distillation (BOND). However, the iterative BOND algorithm is prohibitively expensive in practice due to its sample and computational inefficiency. This paper addresses the problem by revealing a unified game-theoretic connection between iterative BOND and self-play alignment, which unifies seemingly disparate algorithmic paradigms. Based on this connection, we establish a novel framework, \textbf{WIN} rate \textbf{D}ominance (WIND), with a series of efficient algorithms for regularized win rate dominance optimization that approximates iterative BOND in the parameter space. We provide a provable sample-efficiency guarantee for one of the WIND variants with the square loss objective. The experimental results confirm that our algorithm not only accelerates computation, but also achieves superior sample efficiency compared to existing methods.}
}
Endnote
%0 Conference Paper
%T Faster WIND: Accelerating Iterative Best-of-$N$ Distillation for LLM Alignment
%A Tong Yang
%A Jincheng Mei
%A Hanjun Dai
%A Zixin Wen
%A Shicong Cen
%A Dale Schuurmans
%A Yuejie Chi
%A Bo Dai
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-yang25e
%I PMLR
%P 4537--4545
%U https://proceedings.mlr.press/v258/yang25e.html
%V 258
%X Recent advances in aligning large language models with human preferences have corroborated the growing importance of best-of-$N$ distillation (BOND). However, the iterative BOND algorithm is prohibitively expensive in practice due to its sample and computational inefficiency. This paper addresses the problem by revealing a unified game-theoretic connection between iterative BOND and self-play alignment, which unifies seemingly disparate algorithmic paradigms. Based on this connection, we establish a novel framework, \textbf{WIN} rate \textbf{D}ominance (WIND), with a series of efficient algorithms for regularized win rate dominance optimization that approximates iterative BOND in the parameter space. We provide a provable sample-efficiency guarantee for one of the WIND variants with the square loss objective. The experimental results confirm that our algorithm not only accelerates computation, but also achieves superior sample efficiency compared to existing methods.
APA
Yang, T., Mei, J., Dai, H., Wen, Z., Cen, S., Schuurmans, D., Chi, Y. & Dai, B. (2025). Faster WIND: Accelerating Iterative Best-of-$N$ Distillation for LLM Alignment. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:4537-4545. Available from https://proceedings.mlr.press/v258/yang25e.html.