Countering Language Drift with Seeded Iterated Learning

Yuchen Lu, Soumye Singhal, Florian Strub, Aaron Courville, Olivier Pietquin
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:6437-6447, 2020.

Abstract

Pretraining on a human corpus and then finetuning in a simulator has become a standard pipeline for training goal-oriented dialogue agents. Nevertheless, as soon as the agents are finetuned to maximize task completion, they suffer from the so-called language drift phenomenon: they slowly lose the syntactic and semantic properties of language as they focus solely on solving the task. In this paper, we propose a generic approach to counter language drift called Seeded Iterated Learning (SIL). We periodically refine a pretrained student agent by imitating data sampled from a newly generated teacher agent. At each iteration, the teacher is created by copying the student agent before being finetuned to maximize task completion. SIL requires neither external syntactic constraints nor semantic knowledge, making it a valuable task-agnostic finetuning protocol. We evaluate SIL in a toy Lewis Game setting and then scale it up to a translation game with natural language. In both settings, SIL counters language drift and improves task completion compared to baselines.
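The abstract describes an alternating teacher-student training loop. Below is a minimal, hypothetical Python sketch of that loop, not the authors' released code: `finetune_on_task`, `sample_data`, and `imitate` are placeholder names standing in for interactive finetuning (e.g. policy-gradient training on task reward), sampling messages from the teacher, and supervised imitation, respectively.

```python
import copy

# Hypothetical stand-ins for the three procedures the abstract mentions.
# Real implementations would do policy-gradient finetuning on task reward,
# sample (input, message) pairs from the agent, and run supervised
# imitation on the teacher-generated data, respectively.
def finetune_on_task(agent, env, steps):
    pass  # maximize task completion (e.g. REINFORCE on task reward)

def sample_data(agent, env):
    return []  # sample (input, message) pairs from the agent

def imitate(agent, dataset, steps):
    pass  # supervised learning on the teacher-generated dataset

def seeded_iterated_learning(student, env, generations=10,
                             interactive_steps=1000, imitation_steps=500):
    """One SIL run; `student` is assumed pretrained on human data (the seed)."""
    for _ in range(generations):
        # The teacher starts as a copy of the current student...
        teacher = copy.deepcopy(student)
        # ...and is finetuned interactively to maximize task completion.
        finetune_on_task(teacher, env, interactive_steps)
        # The student is then refined by imitating data sampled from the
        # teacher, which is what counters drift toward degenerate language.
        dataset = sample_data(teacher, env)
        imitate(student, dataset, imitation_steps)
    return student
```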

Cite this Paper

BibTeX
@InProceedings{pmlr-v119-lu20c,
  title     = {Countering Language Drift with Seeded Iterated Learning},
  author    = {Lu, Yuchen and Singhal, Soumye and Strub, Florian and Courville, Aaron and Pietquin, Olivier},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {6437--6447},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/lu20c/lu20c.pdf},
  url       = {https://proceedings.mlr.press/v119/lu20c.html}
}
EndNote
%0 Conference Paper
%T Countering Language Drift with Seeded Iterated Learning
%A Yuchen Lu
%A Soumye Singhal
%A Florian Strub
%A Aaron Courville
%A Olivier Pietquin
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-lu20c
%I PMLR
%P 6437--6447
%U https://proceedings.mlr.press/v119/lu20c.html
%V 119
APA
Lu, Y., Singhal, S., Strub, F., Courville, A. & Pietquin, O. (2020). Countering Language Drift with Seeded Iterated Learning. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:6437-6447. Available from https://proceedings.mlr.press/v119/lu20c.html.