Online and Linear-Time Attention by Enforcing Monotonic Alignments

Colin Raffel, Minh-Thang Luong, Peter J. Liu, Ron J. Weiss, Douglas Eck
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:2837-2846, 2017.

Abstract

Recurrent neural network models with an attention mechanism have proven to be extremely effective on a wide variety of sequence-to-sequence problems. However, the fact that soft attention mechanisms perform a pass over the entire input sequence when producing each element in the output sequence precludes their use in online settings and results in a quadratic time complexity. Based on the insight that the alignment between input and output sequence elements is monotonic in many problems of interest, we propose an end-to-end differentiable method for learning monotonic alignments which, at test time, enables computing attention online and in linear time. We validate our approach on sentence summarization, machine translation, and online speech recognition problems and achieve results competitive with existing sequence-to-sequence models.
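The test-time decoding procedure the abstract describes can be sketched in a few lines. This is a simplified, hypothetical illustration, not the paper's implementation: the energy values are hard-coded stand-ins for the learned energy function of decoder and encoder states, and a fixed 0.5 threshold stands in for the paper's sampling/thresholding of the selection probability. Each output step resumes scanning from the previously attended position, so the attention pointer only moves forward and total work is linear in the input length.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def monotonic_decode(energies):
    """Hard monotonic attention at test time (illustrative sketch).

    energies[i][j] is a hypothetical precomputed attention energy for
    output step i and memory position j; in the actual model these come
    from a learned function of the decoder state and encoder states.
    Each output step scans forward from the previously attended index
    and stops at the first j with sigmoid(energy) >= 0.5, so the
    resulting alignment is non-decreasing.
    """
    idx = 0          # pointer never moves backwards across output steps
    alignment = []
    for row in energies:
        j = idx
        # Advance until the stopping probability exceeds the threshold
        # or the end of the memory is reached.
        while j < len(row) - 1 and sigmoid(row[j]) < 0.5:
            j += 1
        idx = j
        alignment.append(idx)
    return alignment

# Toy energies: 3 output steps over a 5-element input memory.
energies = [
    [-2.0, 1.5, -1.0, 0.5, 2.0],
    [-3.0, -1.0, 2.0, 0.0, 1.0],
    [-1.0, -1.0, -2.0, -2.0, 3.0],
]
print(monotonic_decode(energies))  # [1, 2, 4]
```

Because each step only looks at positions at or after the previous attention index, the mechanism can run online as input arrives, which is what enables the streaming speech-recognition setting the paper evaluates.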

Cite this Paper


BibTeX
@InProceedings{pmlr-v70-raffel17a,
  title     = {Online and Linear-Time Attention by Enforcing Monotonic Alignments},
  author    = {Colin Raffel and Minh-Thang Luong and Peter J. Liu and Ron J. Weiss and Douglas Eck},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {2837--2846},
  year      = {2017},
  editor    = {Precup, Doina and Teh, Yee Whye},
  volume    = {70},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--11 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v70/raffel17a/raffel17a.pdf},
  url       = {http://proceedings.mlr.press/v70/raffel17a.html},
  abstract  = {Recurrent neural network models with an attention mechanism have proven to be extremely effective on a wide variety of sequence-to-sequence problems. However, the fact that soft attention mechanisms perform a pass over the entire input sequence when producing each element in the output sequence precludes their use in online settings and results in a quadratic time complexity. Based on the insight that the alignment between input and output sequence elements is monotonic in many problems of interest, we propose an end-to-end differentiable method for learning monotonic alignments which, at test time, enables computing attention online and in linear time. We validate our approach on sentence summarization, machine translation, and online speech recognition problems and achieve results competitive with existing sequence-to-sequence models.}
}
Endnote
%0 Conference Paper
%T Online and Linear-Time Attention by Enforcing Monotonic Alignments
%A Colin Raffel
%A Minh-Thang Luong
%A Peter J. Liu
%A Ron J. Weiss
%A Douglas Eck
%B Proceedings of the 34th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Doina Precup
%E Yee Whye Teh
%F pmlr-v70-raffel17a
%I PMLR
%P 2837--2846
%U http://proceedings.mlr.press/v70/raffel17a.html
%V 70
%X Recurrent neural network models with an attention mechanism have proven to be extremely effective on a wide variety of sequence-to-sequence problems. However, the fact that soft attention mechanisms perform a pass over the entire input sequence when producing each element in the output sequence precludes their use in online settings and results in a quadratic time complexity. Based on the insight that the alignment between input and output sequence elements is monotonic in many problems of interest, we propose an end-to-end differentiable method for learning monotonic alignments which, at test time, enables computing attention online and in linear time. We validate our approach on sentence summarization, machine translation, and online speech recognition problems and achieve results competitive with existing sequence-to-sequence models.
APA
Raffel, C., Luong, M., Liu, P. J., Weiss, R. J., & Eck, D. (2017). Online and Linear-Time Attention by Enforcing Monotonic Alignments. Proceedings of the 34th International Conference on Machine Learning, in Proceedings of Machine Learning Research 70:2837-2846. Available from http://proceedings.mlr.press/v70/raffel17a.html.