Adversarial Mutual Information for Text Generation

Boyuan Pan, Yazheng Yang, Kaizhao Liang, Bhavya Kailkhura, Zhongming Jin, Xian-Sheng Hua, Deng Cai, Bo Li
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:7476-7486, 2020.

Abstract

Recent advances in maximizing mutual information (MI) between the source and target have demonstrated the effectiveness of MI maximization in text generation. However, prior work has paid little attention to modeling the backward network of MI (i.e., the dependency from the target to the source), which is crucial to the tightness of the variational information maximization lower bound. In this paper, we propose Adversarial Mutual Information (AMI): a text generation framework formulated as a novel saddle-point (min-max) optimization that aims to identify joint interactions between the source and target. Within this framework, the forward and backward networks iteratively promote or demote each other's generated instances by comparing the real and synthetic data distributions. We also develop a latent noise sampling strategy that leverages random variations in the high-level semantic space to enhance long-term dependency in the generation process. Extensive experiments on different text generation tasks demonstrate that the proposed AMI framework significantly outperforms several strong baselines, and we show that AMI has the potential to yield a tighter lower bound on maximum mutual information for the variational information maximization problem.
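The lower bound referenced here is the standard variational information maximization bound of Barber and Agakov (2003). The LaTeX sketch below is not reproduced from the paper itself; it only illustrates why the backward network q(x | y), which approximates the true target-to-source dependency p(x | y), controls the tightness of the bound:

    I(X;Y) = H(X) - H(X \mid Y)
           = H(X) + \mathbb{E}_{p(x,y)}\!\left[\log q(x \mid y)\right]
                  + \mathbb{E}_{p(y)}\!\left[\mathrm{KL}\!\big(p(x \mid y)\,\|\,q(x \mid y)\big)\right]
           \geq H(X) + \mathbb{E}_{p(x,y)}\!\left[\log q(x \mid y)\right]

The gap is exactly the expected KL divergence between the true posterior p(x | y) and the backward network q(x | y), so the bound is tight if and only if q matches p(x | y). This is why improving the backward network, as AMI does through its adversarial objective, can tighten the bound.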

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-pan20a,
  title     = {Adversarial Mutual Information for Text Generation},
  author    = {Pan, Boyuan and Yang, Yazheng and Liang, Kaizhao and Kailkhura, Bhavya and Jin, Zhongming and Hua, Xian-Sheng and Cai, Deng and Li, Bo},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {7476--7486},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/pan20a/pan20a.pdf},
  url       = {https://proceedings.mlr.press/v119/pan20a.html}
}
Endnote
%0 Conference Paper
%T Adversarial Mutual Information for Text Generation
%A Boyuan Pan
%A Yazheng Yang
%A Kaizhao Liang
%A Bhavya Kailkhura
%A Zhongming Jin
%A Xian-Sheng Hua
%A Deng Cai
%A Bo Li
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-pan20a
%I PMLR
%P 7476--7486
%U https://proceedings.mlr.press/v119/pan20a.html
%V 119
APA
Pan, B., Yang, Y., Liang, K., Kailkhura, B., Jin, Z., Hua, X., Cai, D. & Li, B. (2020). Adversarial Mutual Information for Text Generation. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:7476-7486. Available from https://proceedings.mlr.press/v119/pan20a.html.

Related Material

Download PDF: http://proceedings.mlr.press/v119/pan20a/pan20a.pdf