Improving Maximum Likelihood Training for Text Generation with Density Ratio Estimation

Yuxuan Song, Ning Miao, Hao Zhou, Lantao Yu, Mingxuan Wang, Lei Li
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:122-132, 2020.

Abstract

Autoregressive neural sequence generative models trained by Maximum Likelihood Estimation suffer from the exposure bias problem in practical finite-sample scenarios. The crux is that the number of training samples for Maximum Likelihood Estimation is usually limited, and the input data distributions differ between the training and inference stages. Many methods have been proposed to solve this problem; however, they rely on sampling from the non-stationary model distribution and suffer from high variance or biased estimates. In this paper, we propose $\psi$-MLE, a new training scheme for autoregressive sequence generative models, which is effective and stable when operating in the large sample spaces encountered in text generation. We derive our algorithm from a new perspective of self-augmentation and introduce bias correction with density ratio estimation. Extensive experimental results on synthetic data and real-world text generation tasks demonstrate that our method stably outperforms Maximum Likelihood Estimation and other state-of-the-art sequence generative models in terms of both quality and diversity.
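The density ratio estimation the abstract refers to is commonly implemented with a probabilistic classifier: train a discriminator to tell samples from distribution $p$ apart from samples from $q$, then recover $r(x) = p(x)/q(x) \approx D(x)/(1 - D(x))$. The sketch below illustrates this general technique on toy 1-D Gaussians (it is not the paper's exact algorithm; all names here are illustrative):

```python
import numpy as np

# Classifier-based density ratio estimation (general technique, not the
# paper's exact method): fit logistic regression to distinguish samples
# from p (label 1) and q (label 0); then r(x) = p(x)/q(x) ~ D(x)/(1-D(x)).
rng = np.random.default_rng(0)
xp = rng.normal(0.0, 1.0, 2000)   # samples from p = N(0, 1)
xq = rng.normal(1.0, 1.0, 2000)   # samples from q = N(1, 1)

X = np.concatenate([xp, xq])
y = np.concatenate([np.ones_like(xp), np.zeros_like(xq)])

# Logistic regression D(x) = sigmoid(w*x + b), fit by gradient descent
# on the binary cross-entropy loss.
w, b = 0.0, 0.0
for _ in range(2000):
    d = 1.0 / (1.0 + np.exp(-(w * X + b)))
    w -= 0.1 * np.mean((d - y) * X)
    b -= 0.1 * np.mean(d - y)

def ratio(x):
    """Estimated density ratio p(x)/q(x) from the classifier output."""
    d = 1.0 / (1.0 + np.exp(-(w * x + b)))
    return d / (1.0 - d)
```

For these two Gaussians the true log-ratio is linear, $\log r(x) = 0.5 - x$, so a well-fit estimator should return values above 1 near $x = 0$ (where $p$ dominates) and below 1 near $x = 1$ (where $q$ dominates).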

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-song20a,
  title     = {Improving Maximum Likelihood Training for Text Generation with Density Ratio Estimation},
  author    = {Song, Yuxuan and Miao, Ning and Zhou, Hao and Yu, Lantao and Wang, Mingxuan and Li, Lei},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {122--132},
  year      = {2020},
  editor    = {Chiappa, Silvia and Calandra, Roberto},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/song20a/song20a.pdf},
  url       = {https://proceedings.mlr.press/v108/song20a.html},
  abstract  = {Autoregressive neural sequence generative models trained by Maximum Likelihood Estimation suffer the exposure bias problem in practical finite sample scenarios. The crux is that the number of training samples for Maximum Likelihood Estimation is usually limited and the input data distributions are different at training and inference stages. Many methods have been proposed to solve the above problem, which relies on sampling from the non-stationary model distribution and suffers from high variance or biased estimations. In this paper, we propose $\psi$-MLE, a new training scheme for autoregressive sequence generative models, which is effective and stable when operating at large sample space encountered in text generation. We derive our algorithm from a new perspective of self-augmentation and introduce bias correction with density ratio estimation. Extensive experimental results on synthetic data and real-world text generation tasks demonstrate that our method stably outperforms Maximum Likelihood Estimation and other state-of-the-art sequence generative models in terms of both quality and diversity.}
}
Endnote
%0 Conference Paper
%T Improving Maximum Likelihood Training for Text Generation with Density Ratio Estimation
%A Yuxuan Song
%A Ning Miao
%A Hao Zhou
%A Lantao Yu
%A Mingxuan Wang
%A Lei Li
%B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2020
%E Silvia Chiappa
%E Roberto Calandra
%F pmlr-v108-song20a
%I PMLR
%P 122--132
%U https://proceedings.mlr.press/v108/song20a.html
%V 108
%X Autoregressive neural sequence generative models trained by Maximum Likelihood Estimation suffer the exposure bias problem in practical finite sample scenarios. The crux is that the number of training samples for Maximum Likelihood Estimation is usually limited and the input data distributions are different at training and inference stages. Many methods have been proposed to solve the above problem, which relies on sampling from the non-stationary model distribution and suffers from high variance or biased estimations. In this paper, we propose $\psi$-MLE, a new training scheme for autoregressive sequence generative models, which is effective and stable when operating at large sample space encountered in text generation. We derive our algorithm from a new perspective of self-augmentation and introduce bias correction with density ratio estimation. Extensive experimental results on synthetic data and real-world text generation tasks demonstrate that our method stably outperforms Maximum Likelihood Estimation and other state-of-the-art sequence generative models in terms of both quality and diversity.
APA
Song, Y., Miao, N., Zhou, H., Yu, L., Wang, M. &amp; Li, L. (2020). Improving Maximum Likelihood Training for Text Generation with Density Ratio Estimation. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:122-132. Available from https://proceedings.mlr.press/v108/song20a.html.

Related Material