A Joint Selective Mechanism for Abstractive Sentence Summarization

Junjie Fu, Gongshen Liu
Proceedings of The 10th Asian Conference on Machine Learning, PMLR 95:756-769, 2018.

Abstract

The sequence-to-sequence (Seq2Seq) learning framework has been widely used in many natural language processing (NLP) tasks, including abstractive summarization and machine translation (MT). However, abstractive summarization generates its output in a lossy manner, whereas MT is almost lossless. We model this difference by introducing a joint selective mechanism: (i) a selective gate is added after the encoding phase of the Seq2Seq framework, which learns to tailor the original input information and produces a selected input representation; (ii) a selection loss function, computed by looking at the input and the output jointly, is added to help the selective gate function well. Experimental results show that our proposed model outperforms most baseline models and is comparable to the state-of-the-art model in automatic evaluations.
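To make the gate concrete, the following is a minimal PyTorch sketch of a selective gate applied after encoding, assuming the gate is a sigmoid over a linear combination of each encoder hidden state and a whole-sentence representation (e.g., the final encoder state). All names and dimensions here are illustrative assumptions, not the authors' code, and the joint selection loss is omitted since the abstract does not specify its exact form.

```python
import torch
import torch.nn as nn


class SelectiveGate(nn.Module):
    """Hypothetical sketch of a selective gate over encoder states.

    For each encoder hidden state h_i and a sentence representation s,
    it computes g_i = sigmoid(W_h h_i + W_s s + b) and returns the
    selected (tailored) representation h'_i = g_i * h_i.
    """

    def __init__(self, hidden_size: int):
        super().__init__()
        self.w_h = nn.Linear(hidden_size, hidden_size, bias=False)
        self.w_s = nn.Linear(hidden_size, hidden_size, bias=True)

    def forward(self, enc_states: torch.Tensor, sent_repr: torch.Tensor) -> torch.Tensor:
        # enc_states: (batch, src_len, hidden); sent_repr: (batch, hidden)
        gate = torch.sigmoid(self.w_h(enc_states) + self.w_s(sent_repr).unsqueeze(1))
        return gate * enc_states  # selected input representation fed to the decoder
```

Under these assumptions, gate values near zero suppress source positions that should not survive the lossy summarization, and the selection loss described in the abstract would then supervise these gate values using the input and the output jointly.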

Cite this Paper

BibTeX
@InProceedings{pmlr-v95-fu18a,
  title = {A Joint Selective Mechanism for Abstractive Sentence Summarization},
  author = {Fu, Junjie and Liu, Gongshen},
  booktitle = {Proceedings of The 10th Asian Conference on Machine Learning},
  pages = {756--769},
  year = {2018},
  editor = {Zhu, Jun and Takeuchi, Ichiro},
  volume = {95},
  series = {Proceedings of Machine Learning Research},
  month = {14--16 Nov},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v95/fu18a/fu18a.pdf},
  url = {https://proceedings.mlr.press/v95/fu18a.html},
  abstract = {The sequence-to-sequence (Seq2Seq) learning framework has been widely used in many natural language processing (NLP) tasks, including abstractive summarization and machine translation (MT). However, abstractive summarization generates its output in a lossy manner, whereas MT is almost lossless. We model this difference by introducing a joint selective mechanism: (i) a selective gate is added after the encoding phase of the Seq2Seq framework, which learns to tailor the original input information and produces a selected input representation; (ii) a selection loss function, computed by looking at the input and the output jointly, is added to help the selective gate function well. Experimental results show that our proposed model outperforms most baseline models and is comparable to the state-of-the-art model in automatic evaluations.}
}
Endnote
%0 Conference Paper
%T A Joint Selective Mechanism for Abstractive Sentence Summarization
%A Junjie Fu
%A Gongshen Liu
%B Proceedings of The 10th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jun Zhu
%E Ichiro Takeuchi
%F pmlr-v95-fu18a
%I PMLR
%P 756--769
%U https://proceedings.mlr.press/v95/fu18a.html
%V 95
%X The sequence-to-sequence (Seq2Seq) learning framework has been widely used in many natural language processing (NLP) tasks, including abstractive summarization and machine translation (MT). However, abstractive summarization generates its output in a lossy manner, whereas MT is almost lossless. We model this difference by introducing a joint selective mechanism: (i) a selective gate is added after the encoding phase of the Seq2Seq framework, which learns to tailor the original input information and produces a selected input representation; (ii) a selection loss function, computed by looking at the input and the output jointly, is added to help the selective gate function well. Experimental results show that our proposed model outperforms most baseline models and is comparable to the state-of-the-art model in automatic evaluations.
APA
Fu, J. & Liu, G. (2018). A Joint Selective Mechanism for Abstractive Sentence Summarization. Proceedings of The 10th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 95:756-769. Available from https://proceedings.mlr.press/v95/fu18a.html.