Cross-model Back-translated Distillation for Unsupervised Machine Translation

Xuan-Phi Nguyen, Shafiq Joty, Thanh-Tung Nguyen, Kui Wu, Ai Ti Aw
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:8073-8083, 2021.

Abstract

Recent unsupervised machine translation (UMT) systems usually employ three main principles: initialization, language modeling and iterative back-translation, though they may apply them differently. Crucially, iterative back-translation and denoising auto-encoding for language modeling provide data diversity to train the UMT systems. However, the gains from these diversification processes appear to have plateaued. We introduce a novel component into the standard UMT framework, called Cross-model Back-translated Distillation (CBD), which induces a level of data diversification that the existing principles lack. CBD is applicable to all previous UMT approaches. In our experiments, CBD achieves the state of the art in the WMT’14 English-French, WMT’16 English-German and English-Romanian bilingual unsupervised translation tasks, with 38.2, 30.1, and 36.3 BLEU respectively. It also yields 1.5–3.3 BLEU improvements on the IWSLT English-French and English-German tasks. Through extensive experimental analyses, we show that CBD is effective because it embraces data diversity while other similar variants do not.
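
The abstract names the mechanism without spelling out the procedure, but the cross-model back-translation loop it describes can be sketched in a few lines. Below is a minimal, hypothetical Python sketch: agent1_a2b and agent2_b2a stand in for two independently pretrained UMT agents, and the choice to pair the cross-model back-translated sentence with the intermediate translation is our illustrative reading of the abstract, not code from the paper.

# Hypothetical sketch of the cross-model back-translation loop described
# in the abstract. agent1_a2b / agent2_b2a are stand-ins for two
# independently pretrained UMT agents; names and the exact pseudo-pair
# construction are illustrative assumptions.

def cbd_pseudo_pairs(mono_sents_a, agent1_a2b, agent2_b2a):
    """Generate diversified pseudo-parallel pairs for language A -> B."""
    pairs = []
    for s in mono_sents_a:
        t = agent1_a2b(s)      # agent 1: forward-translate A -> B
        s_bt = agent2_b2a(t)   # agent 2: back-translate B -> A (cross-model)
        # s_bt is a paraphrase of s produced by a *different* model, so the
        # pair (s_bt, t) carries diversity that single-model iterative
        # back-translation would not generate on its own.
        pairs.append((s_bt, t))
    return pairs

Run symmetrically over language-B monolingual data with the agents' roles swapped, the same loop would yield B-to-A pairs; a final model trained on the pooled pseudo-parallel data is then the beneficiary of the extra diversification.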

Cite this Paper

BibTeX
@InProceedings{pmlr-v139-nguyen21c,
  title     = {Cross-model Back-translated Distillation for Unsupervised Machine Translation},
  author    = {Nguyen, Xuan-Phi and Joty, Shafiq and Nguyen, Thanh-Tung and Wu, Kui and Aw, Ai Ti},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {8073--8083},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/nguyen21c/nguyen21c.pdf},
  url       = {https://proceedings.mlr.press/v139/nguyen21c.html}
}
Endnote
%0 Conference Paper
%T Cross-model Back-translated Distillation for Unsupervised Machine Translation
%A Xuan-Phi Nguyen
%A Shafiq Joty
%A Thanh-Tung Nguyen
%A Kui Wu
%A Ai Ti Aw
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-nguyen21c
%I PMLR
%P 8073--8083
%U https://proceedings.mlr.press/v139/nguyen21c.html
%V 139
APA
Nguyen, X.-P., Joty, S., Nguyen, T.-T., Wu, K., & Aw, A. T. (2021). Cross-model Back-translated Distillation for Unsupervised Machine Translation. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:8073-8083. Available from https://proceedings.mlr.press/v139/nguyen21c.html.
