MASS: Masked Sequence to Sequence Pre-training for Language Generation

Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:5926-5936, 2019.

Abstract

Pre-training and fine-tuning, e.g., BERT \citep{devlin2018bert}, have achieved great success in language understanding by transferring knowledge from a rich-resource pre-training task to low/zero-resource downstream tasks. Inspired by the success of BERT, we propose MAsked Sequence to Sequence pre-training (MASS) for encoder-decoder based language generation tasks. MASS adopts the encoder-decoder framework to reconstruct a sentence fragment given the remaining part of the sentence: its encoder takes as input a sentence with a randomly masked fragment (several consecutive tokens), and its decoder predicts this masked fragment. In this way, MASS jointly trains the encoder and decoder to develop the capabilities of representation extraction and language modeling. By further fine-tuning on a variety of zero/low-resource language generation tasks, including neural machine translation, text summarization and conversational response generation (3 tasks and 8 datasets in total), MASS achieves significant improvements over baselines without pre-training or with other pre-training methods. In particular, we achieve state-of-the-art accuracy (a BLEU score of 30.02) on unsupervised English-French translation, even outperforming the early attention-based supervised model \citep{bahdanau2015neural}.
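
To make the pre-training objective concrete, the following is a minimal sketch (not the authors' code) of how a MASS-style training example could be constructed from a tokenized sentence: a consecutive fragment is replaced by mask tokens on the encoder side, and the decoder is fed the fragment shifted right and asked to reconstruct it. The mask symbol, whitespace tokenization, and fragment-length ratio are illustrative assumptions.

# Minimal sketch of a MASS-style training example (illustrative assumptions:
# "[MASK]" as the mask symbol, masking roughly half of the sentence).
import random

MASK = "[MASK]"

def make_mass_example(tokens, frag_ratio=0.5, seed=None):
    """Mask a consecutive fragment of `tokens` for the encoder and
    build the decoder input/target for predicting that fragment."""
    rng = random.Random(seed)
    frag_len = max(1, int(len(tokens) * frag_ratio))
    start = rng.randint(0, len(tokens) - frag_len)
    fragment = tokens[start:start + frag_len]

    # Encoder sees the sentence with the fragment replaced by mask tokens.
    encoder_input = tokens[:start] + [MASK] * frag_len + tokens[start + frag_len:]

    # Decoder reconstructs the fragment; its input is the fragment shifted
    # right (teacher forcing), so it conditions on previously predicted tokens.
    decoder_input = [MASK] + fragment[:-1]
    decoder_target = fragment
    return encoder_input, decoder_input, decoder_target

if __name__ == "__main__":
    sent = "the quick brown fox jumps over the lazy dog".split()
    enc_in, dec_in, dec_tgt = make_mass_example(sent, seed=0)
    print(enc_in)   # sentence with a masked consecutive fragment
    print(dec_in)   # fragment shifted right for teacher forcing
    print(dec_tgt)  # fragment the decoder must reconstruct

Each example thus pairs a partially masked source with its missing fragment, which is what lets the encoder and decoder be trained jointly before fine-tuning on downstream generation tasks.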

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-song19d,
  title     = {{MASS}: Masked Sequence to Sequence Pre-training for Language Generation},
  author    = {Song, Kaitao and Tan, Xu and Qin, Tao and Lu, Jianfeng and Liu, Tie-Yan},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {5926--5936},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/song19d/song19d.pdf},
  url       = {https://proceedings.mlr.press/v97/song19d.html}
}
Endnote
%0 Conference Paper
%T MASS: Masked Sequence to Sequence Pre-training for Language Generation
%A Kaitao Song
%A Xu Tan
%A Tao Qin
%A Jianfeng Lu
%A Tie-Yan Liu
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-song19d
%I PMLR
%P 5926--5936
%U https://proceedings.mlr.press/v97/song19d.html
%V 97
APA
Song, K., Tan, X., Qin, T., Lu, J., & Liu, T. (2019). MASS: Masked Sequence to Sequence Pre-training for Language Generation. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:5926-5936. Available from https://proceedings.mlr.press/v97/song19d.html.