Educating Text Autoencoders: Latent Representation Guidance via Denoising

Tianxiao Shen, Jonas Mueller, Regina Barzilay, Tommi Jaakkola
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:8719-8729, 2020.

Abstract

Generative autoencoders offer a promising approach for controllable text generation by leveraging their learned sentence representations. However, current models struggle to maintain coherent latent spaces required to perform meaningful text manipulations via latent vector operations. Specifically, we demonstrate by example that neural encoders do not necessarily map similar sentences to nearby latent vectors. A theoretical explanation for this phenomenon establishes that high-capacity autoencoders can learn an arbitrary mapping between sequences and associated latent representations. To remedy this issue, we augment adversarial autoencoders with a denoising objective where original sentences are reconstructed from perturbed versions (referred to as DAAE). We prove that this simple modification guides the latent space geometry of the resulting model by encouraging the encoder to map similar texts to similar latent representations. In empirical comparisons with various types of autoencoders, our model provides the best trade-off between generation quality and reconstruction capacity. Moreover, the improved geometry of the DAAE latent space enables \emph{zero-shot} text style transfer via simple latent vector arithmetic.
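The "latent vector arithmetic" mentioned above can be sketched numerically: compute a style offset as the difference between the mean latent codes of two styles, then shift a sentence's code along that offset before decoding. The code below is an illustrative sketch with synthetic vectors, not the paper's actual DAAE encoder or decoder.

```python
import numpy as np

# Hedged sketch of zero-shot style transfer via latent vector arithmetic.
# The latent codes here are synthetic stand-ins for what a trained DAAE
# encoder would produce; a real pipeline would encode/decode sentences.

rng = np.random.default_rng(0)

# Pretend latent codes (dim 8) for sentences of two styles,
# e.g. negative vs. positive sentiment.
negative_codes = rng.normal(0.0, 1.0, size=(100, 8))
# For illustration, the "positive" style differs by a fixed shift
# along the last latent dimension.
positive_codes = negative_codes + np.array([0.0] * 7 + [2.0])

# The style direction is the difference of the per-style means.
style_vector = positive_codes.mean(axis=0) - negative_codes.mean(axis=0)

# To transfer a new sentence's style, shift its latent code by the
# style vector (and, in the real model, decode the shifted code).
z = negative_codes[0]
z_transferred = z + style_vector
```

With a well-behaved latent geometry (the property the denoising objective is shown to encourage), this simple offset is enough to change style while preserving content.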

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-shen20c,
  title     = {Educating Text Autoencoders: Latent Representation Guidance via Denoising},
  author    = {Shen, Tianxiao and Mueller, Jonas and Barzilay, Regina and Jaakkola, Tommi},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {8719--8729},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/shen20c/shen20c.pdf},
  url       = {http://proceedings.mlr.press/v119/shen20c.html},
  abstract  = {Generative autoencoders offer a promising approach for controllable text generation by leveraging their learned sentence representations. However, current models struggle to maintain coherent latent spaces required to perform meaningful text manipulations via latent vector operations. Specifically, we demonstrate by example that neural encoders do not necessarily map similar sentences to nearby latent vectors. A theoretical explanation for this phenomenon establishes that high-capacity autoencoders can learn an arbitrary mapping between sequences and associated latent representations. To remedy this issue, we augment adversarial autoencoders with a denoising objective where original sentences are reconstructed from perturbed versions (referred to as DAAE). We prove that this simple modification guides the latent space geometry of the resulting model by encouraging the encoder to map similar texts to similar latent representations. In empirical comparisons with various types of autoencoders, our model provides the best trade-off between generation quality and reconstruction capacity. Moreover, the improved geometry of the DAAE latent space enables \emph{zero-shot} text style transfer via simple latent vector arithmetic.}
}
Endnote
%0 Conference Paper
%T Educating Text Autoencoders: Latent Representation Guidance via Denoising
%A Tianxiao Shen
%A Jonas Mueller
%A Regina Barzilay
%A Tommi Jaakkola
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-shen20c
%I PMLR
%P 8719--8729
%U http://proceedings.mlr.press/v119/shen20c.html
%V 119
%X Generative autoencoders offer a promising approach for controllable text generation by leveraging their learned sentence representations. However, current models struggle to maintain coherent latent spaces required to perform meaningful text manipulations via latent vector operations. Specifically, we demonstrate by example that neural encoders do not necessarily map similar sentences to nearby latent vectors. A theoretical explanation for this phenomenon establishes that high-capacity autoencoders can learn an arbitrary mapping between sequences and associated latent representations. To remedy this issue, we augment adversarial autoencoders with a denoising objective where original sentences are reconstructed from perturbed versions (referred to as DAAE). We prove that this simple modification guides the latent space geometry of the resulting model by encouraging the encoder to map similar texts to similar latent representations. In empirical comparisons with various types of autoencoders, our model provides the best trade-off between generation quality and reconstruction capacity. Moreover, the improved geometry of the DAAE latent space enables \emph{zero-shot} text style transfer via simple latent vector arithmetic.
APA
Shen, T., Mueller, J., Barzilay, R., & Jaakkola, T. (2020). Educating Text Autoencoders: Latent Representation Guidance via Denoising. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:8719-8729. Available from http://proceedings.mlr.press/v119/shen20c.html.