Sequence Tutor: Conservative Fine-Tuning of Sequence Generation Models with KL-control

Natasha Jaques, Shixiang Gu, Dzmitry Bahdanau, José Miguel Hernández-Lobato, Richard E. Turner, Douglas Eck
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:1645-1654, 2017.

Abstract

This paper proposes a general method for improving the structure and quality of sequences generated by a recurrent neural network (RNN), while maintaining information originally learned from data, as well as sample diversity. An RNN is first pre-trained on data using maximum likelihood estimation (MLE), and the probability distribution over the next token in the sequence learned by this model is treated as a prior policy. Another RNN is then trained using reinforcement learning (RL) to generate higher-quality outputs that account for domain-specific incentives while retaining proximity to the prior policy of the MLE RNN. To formalize this objective, we derive novel off-policy RL methods for RNNs from KL-control. The effectiveness of the approach is demonstrated on two applications: 1) generating novel musical melodies, and 2) computational molecular generation. For both problems, we show that the proposed method improves the desired properties and structure of the generated sequences, while maintaining information learned from data.
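The KL-control formulation described in the abstract can be sketched as follows (the notation here is illustrative and not taken verbatim from the paper): let $p(a_t \mid s_t)$ be the next-token distribution of the pre-trained MLE RNN, treated as the prior policy, and let $\pi_\theta$ be the RL-fine-tuned policy. The fine-tuned model is trained to maximize expected task reward while a KL penalty keeps it close to the prior; one common form of this objective, with a weight $c$ trading off reward against proximity to the prior, is:

$$
\max_{\pi_\theta} \; \mathbb{E}_{\pi_\theta}\!\left[\,\sum_{t} r(s_t, a_t)\,\right]
\;-\; c \,\mathbb{E}_{\pi_\theta}\!\left[\,\sum_{t} D_{\mathrm{KL}}\!\big[\pi_\theta(\cdot \mid s_t)\,\big\|\,p(\cdot \mid s_t)\big]\,\right]
$$

Equivalently, the KL term can be folded into a per-token shaped reward $r(s_t,a_t) + c\,\log p(a_t \mid s_t) - c\,\log \pi_\theta(a_t \mid s_t)$, which is the form amenable to the off-policy RL methods the paper derives; the paper's exact weighting and variants may differ from this sketch.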

Cite this Paper


BibTeX
@InProceedings{pmlr-v70-jaques17a,
  title     = {Sequence Tutor: Conservative Fine-Tuning of Sequence Generation Models with {KL}-control},
  author    = {Natasha Jaques and Shixiang Gu and Dzmitry Bahdanau and Jos{\'e} Miguel Hern{\'a}ndez-Lobato and Richard E. Turner and Douglas Eck},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {1645--1654},
  year      = {2017},
  editor    = {Precup, Doina and Teh, Yee Whye},
  volume    = {70},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--11 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v70/jaques17a/jaques17a.pdf},
  url       = {http://proceedings.mlr.press/v70/jaques17a.html},
  abstract  = {This paper proposes a general method for improving the structure and quality of sequences generated by a recurrent neural network (RNN), while maintaining information originally learned from data, as well as sample diversity. An RNN is first pre-trained on data using maximum likelihood estimation (MLE), and the probability distribution over the next token in the sequence learned by this model is treated as a prior policy. Another RNN is then trained using reinforcement learning (RL) to generate higher-quality outputs that account for domain-specific incentives while retaining proximity to the prior policy of the MLE RNN. To formalize this objective, we derive novel off-policy RL methods for RNNs from KL-control. The effectiveness of the approach is demonstrated on two applications: 1) generating novel musical melodies, and 2) computational molecular generation. For both problems, we show that the proposed method improves the desired properties and structure of the generated sequences, while maintaining information learned from data.}
}
Endnote
%0 Conference Paper
%T Sequence Tutor: Conservative Fine-Tuning of Sequence Generation Models with KL-control
%A Natasha Jaques
%A Shixiang Gu
%A Dzmitry Bahdanau
%A José Miguel Hernández-Lobato
%A Richard E. Turner
%A Douglas Eck
%B Proceedings of the 34th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Doina Precup
%E Yee Whye Teh
%F pmlr-v70-jaques17a
%I PMLR
%P 1645--1654
%U http://proceedings.mlr.press/v70/jaques17a.html
%V 70
%X This paper proposes a general method for improving the structure and quality of sequences generated by a recurrent neural network (RNN), while maintaining information originally learned from data, as well as sample diversity. An RNN is first pre-trained on data using maximum likelihood estimation (MLE), and the probability distribution over the next token in the sequence learned by this model is treated as a prior policy. Another RNN is then trained using reinforcement learning (RL) to generate higher-quality outputs that account for domain-specific incentives while retaining proximity to the prior policy of the MLE RNN. To formalize this objective, we derive novel off-policy RL methods for RNNs from KL-control. The effectiveness of the approach is demonstrated on two applications: 1) generating novel musical melodies, and 2) computational molecular generation. For both problems, we show that the proposed method improves the desired properties and structure of the generated sequences, while maintaining information learned from data.
APA
Jaques, N., Gu, S., Bahdanau, D., Hernández-Lobato, J. M., Turner, R. E., & Eck, D. (2017). Sequence Tutor: Conservative Fine-Tuning of Sequence Generation Models with KL-control. Proceedings of the 34th International Conference on Machine Learning, in Proceedings of Machine Learning Research, 70:1645-1654. Available from http://proceedings.mlr.press/v70/jaques17a.html.