On the Relation between Quality-Diversity Evaluation and Distribution-Fitting Goal in Text Generation

Jianing Li, Yanyan Lan, Jiafeng Guo, Xueqi Cheng
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:5905-5915, 2020.

Abstract

The goal of text generation models is to fit the underlying real probability distribution of text. For performance evaluation, quality and diversity metrics are usually applied. However, it is still not clear to what extent the quality-diversity evaluation can reflect the distribution-fitting goal. In this paper, we try to reveal this relation through a theoretical approach. We prove that under certain conditions, a linear combination of quality and diversity constitutes a divergence metric between the generated distribution and the real distribution. We also show that the commonly used BLEU/Self-BLEU metric pair fails to match any divergence metric, and thus propose CR/NRR as a substitute for the quality/diversity metric pair.
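As an illustrative sketch of the kind of relation the paper formalizes (using assumed textbook definitions of quality and diversity, not the paper's specific CR/NRR metrics): if quality is the expected log-likelihood of generated samples under the real distribution P_r, and diversity is the entropy of the generated distribution P_G, then their sum equals a negative divergence between the two distributions.

% Illustrative sketch only: assumed definitions, not the metrics defined in the paper.
\begin{align*}
  \text{Quality}(P_G)   &:= \mathbb{E}_{x \sim P_G}\!\left[\log P_r(x)\right], \\
  \text{Diversity}(P_G) &:= H(P_G) = -\mathbb{E}_{x \sim P_G}\!\left[\log P_G(x)\right], \\
  \text{Quality}(P_G) + \text{Diversity}(P_G)
    &= \mathbb{E}_{x \sim P_G}\!\left[\log \frac{P_r(x)}{P_G(x)}\right]
     = -\,\mathrm{KL}\!\left(P_G \,\|\, P_r\right).
\end{align*}

Under these assumed definitions, the linear combination is maximized exactly when P_G = P_r, which is the sense in which a quality-diversity pair can stand in for a divergence metric.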

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-li20h,
  title     = {On the Relation between Quality-Diversity Evaluation and Distribution-Fitting Goal in Text Generation},
  author    = {Li, Jianing and Lan, Yanyan and Guo, Jiafeng and Cheng, Xueqi},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {5905--5915},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/li20h/li20h.pdf},
  url       = {https://proceedings.mlr.press/v119/li20h.html},
  abstract  = {The goal of text generation models is to fit the underlying real probability distribution of text. For performance evaluation, quality and diversity metrics are usually applied. However, it is still not clear to what extent the quality-diversity evaluation can reflect the distribution-fitting goal. In this paper, we try to reveal this relation through a theoretical approach. We prove that under certain conditions, a linear combination of quality and diversity constitutes a divergence metric between the generated distribution and the real distribution. We also show that the commonly used BLEU/Self-BLEU metric pair fails to match any divergence metric, and thus propose CR/NRR as a substitute for the quality/diversity metric pair.}
}
Endnote
%0 Conference Paper
%T On the Relation between Quality-Diversity Evaluation and Distribution-Fitting Goal in Text Generation
%A Jianing Li
%A Yanyan Lan
%A Jiafeng Guo
%A Xueqi Cheng
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-li20h
%I PMLR
%P 5905--5915
%U https://proceedings.mlr.press/v119/li20h.html
%V 119
%X The goal of text generation models is to fit the underlying real probability distribution of text. For performance evaluation, quality and diversity metrics are usually applied. However, it is still not clear to what extent the quality-diversity evaluation can reflect the distribution-fitting goal. In this paper, we try to reveal this relation through a theoretical approach. We prove that under certain conditions, a linear combination of quality and diversity constitutes a divergence metric between the generated distribution and the real distribution. We also show that the commonly used BLEU/Self-BLEU metric pair fails to match any divergence metric, and thus propose CR/NRR as a substitute for the quality/diversity metric pair.
APA
Li, J., Lan, Y., Guo, J. & Cheng, X. (2020). On the Relation between Quality-Diversity Evaluation and Distribution-Fitting Goal in Text Generation. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:5905-5915. Available from https://proceedings.mlr.press/v119/li20h.html.