Improving Open-Ended Text Generation via Adaptive Decoding

Wenhong Zhu, Hongkun Hao, Zhiwei He, Yiming Ai, Rui Wang
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:62386-62404, 2024.

Abstract

Current language models decode text token by token according to a probability distribution, and determining appropriate candidates for the next token is crucial to ensure generation quality. This study introduces adaptive decoding, a mechanism that empowers language models to dynamically determine a sensible candidate set during generation. Specifically, we introduce an entropy-based metric called confidence and conceptualize determining the optimal candidate set as a confidence-increasing process. Whether it is reasonable to include a token in the candidate set is assessed by the increment in confidence that its inclusion brings. Experimental results reveal that our method balances diversity and coherence well. Human evaluation shows that our method can generate human-preferred text. Additionally, our method can potentially improve the reasoning ability of language models.
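
The abstract describes the mechanism only at a high level, so the following is a minimal sketch of the idea under stated assumptions: candidate tokens are considered in order of decreasing probability, and a token joins the candidate set only if it increases an entropy-based confidence score. The concrete score used here (probability mass covered by the candidates, penalized by the normalized entropy of their renormalized distribution) and the names adaptive_candidate_set and _confidence are illustrative assumptions, not the exact formulation from the paper.

    import numpy as np

    def _confidence(candidate_probs):
        """Illustrative confidence score: covered probability mass,
        penalized by the normalized entropy of the renormalized candidates."""
        mass = candidate_probs.sum()
        q = candidate_probs / mass                     # renormalize over the candidate set
        entropy = -(q * np.log(q + 1e-12)).sum()
        max_entropy = np.log(len(candidate_probs)) if len(candidate_probs) > 1 else 1.0
        return mass - entropy / max_entropy

    def adaptive_candidate_set(probs, max_candidates=100):
        """Grow the candidate set greedily while each added token increases confidence."""
        order = np.argsort(probs)[::-1]                # token ids, most probable first
        chosen = [order[0]]                            # always keep the most probable token
        conf = _confidence(probs[np.array(chosen)])
        for token_id in order[1:max_candidates]:
            trial = np.array(chosen + [token_id])
            new_conf = _confidence(probs[trial])
            if new_conf <= conf:                       # no confidence gain: stop growing
                break
            chosen, conf = chosen + [token_id], new_conf
        return np.array(chosen)

    # Usage: sample the next token from the adaptively chosen candidate set.
    rng = np.random.default_rng(0)
    logits = rng.normal(size=50)
    probs = np.exp(logits) / np.exp(logits).sum()
    candidates = adaptive_candidate_set(probs)
    renorm = probs[candidates] / probs[candidates].sum()
    next_token = rng.choice(candidates, p=renorm)

In contrast to fixed top-k or nucleus (top-p) truncation, the size of the candidate set in this sketch varies with the shape of the next-token distribution, which is the behavior the abstract attributes to adaptive decoding.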

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-zhu24d,
  title     = {Improving Open-Ended Text Generation via Adaptive Decoding},
  author    = {Zhu, Wenhong and Hao, Hongkun and He, Zhiwei and Ai, Yiming and Wang, Rui},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {62386--62404},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhu24d/zhu24d.pdf},
  url       = {https://proceedings.mlr.press/v235/zhu24d.html},
  abstract  = {Current language models decode text token by token according to probabilistic distribution, and determining the appropriate candidates for the next token is crucial to ensure generation quality. This study introduces adaptive decoding, a mechanism that dynamically empowers language models to ascertain a sensible candidate set during generation. Specifically, we introduce an entropy-based metric called confidence and conceptualize determining the optimal candidate set as a confidence-increasing process. The rationality of including a token in the candidate set is assessed by leveraging the increment of confidence. Experimental results reveal that our method balances diversity and coherence well. The human evaluation shows that our method can generate human-preferred text. Additionally, our method can potentially improve the reasoning ability of language models.}
}
Endnote
%0 Conference Paper
%T Improving Open-Ended Text Generation via Adaptive Decoding
%A Wenhong Zhu
%A Hongkun Hao
%A Zhiwei He
%A Yiming Ai
%A Rui Wang
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-zhu24d
%I PMLR
%P 62386--62404
%U https://proceedings.mlr.press/v235/zhu24d.html
%V 235
%X Current language models decode text token by token according to probabilistic distribution, and determining the appropriate candidates for the next token is crucial to ensure generation quality. This study introduces adaptive decoding, a mechanism that dynamically empowers language models to ascertain a sensible candidate set during generation. Specifically, we introduce an entropy-based metric called confidence and conceptualize determining the optimal candidate set as a confidence-increasing process. The rationality of including a token in the candidate set is assessed by leveraging the increment of confidence. Experimental results reveal that our method balances diversity and coherence well. The human evaluation shows that our method can generate human-preferred text. Additionally, our method can potentially improve the reasoning ability of language models.
APA
Zhu, W., Hao, H., He, Z., Ai, Y. & Wang, R. (2024). Improving Open-Ended Text Generation via Adaptive Decoding. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:62386-62404. Available from https://proceedings.mlr.press/v235/zhu24d.html.