Disentangled Sequential Autoencoder

Li Yingzhen, Stephan Mandt
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:5670-5679, 2018.

Abstract

We present a VAE architecture for encoding and generating high dimensional sequential data, such as video or audio. Our deep generative model learns a latent representation of the data which is split into a static and dynamic part, allowing us to approximately disentangle latent time-dependent features (dynamics) from features which are preserved over time (content). This architecture gives us partial control over generating content and dynamics by conditioning on either one of these sets of features. In our experiments on artificially generated cartoon video clips and voice recordings, we show that we can convert the content of a given sequence into another one by such content swapping. For audio, this allows us to convert a male speaker into a female speaker and vice versa, while for video we can separately manipulate shapes and dynamics. Furthermore, we give empirical evidence for the hypothesis that stochastic RNNs as latent state models are more efficient at compressing and generating long sequences than deterministic ones, which may be relevant for applications in video compression.
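The static/dynamic latent split described in the abstract can be illustrated with a minimal generative sketch. This is not the authors' implementation: the variable names (`f` for the static content latent, `z` for the per-timestep dynamic latents) and the linear maps standing in for the stochastic RNN prior and the decoder are placeholders chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sequence(T=8, f_dim=4, z_dim=3, x_dim=5):
    """Sketch of the factorized generative process: one static latent f
    per sequence, plus a chain of dynamic latents z_1..z_T."""
    # Static ("content") latent: sampled once, shared across all timesteps.
    f = rng.standard_normal(f_dim)

    # Dynamic latents: a simple stochastic linear recurrence as a
    # stand-in for the stochastic RNN prior over z_{1:T}.
    A = 0.9 * np.eye(z_dim)
    z = np.zeros((T, z_dim))
    for t in range(1, T):
        z[t] = A @ z[t - 1] + 0.1 * rng.standard_normal(z_dim)

    # Decoder maps (f, z_t) -> x_t; a fixed linear map as placeholder.
    W_f = rng.standard_normal((x_dim, f_dim))
    W_z = rng.standard_normal((x_dim, z_dim))
    x = z @ W_z.T + f @ W_f.T  # f broadcasts over the time axis
    return f, z, x
```

Content swapping, as used in the paper's experiments, then amounts to decoding one sequence's dynamic latents `z` with another sequence's static latent `f`.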

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-yingzhen18a,
  title     = {Disentangled Sequential Autoencoder},
  author    = {Yingzhen, Li and Mandt, Stephan},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {5670--5679},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/yingzhen18a/yingzhen18a.pdf},
  url       = {https://proceedings.mlr.press/v80/yingzhen18a.html},
  abstract  = {We present a VAE architecture for encoding and generating high dimensional sequential data, such as video or audio. Our deep generative model learns a latent representation of the data which is split into a static and dynamic part, allowing us to approximately disentangle latent time-dependent features (dynamics) from features which are preserved over time (content). This architecture gives us partial control over generating content and dynamics by conditioning on either one of these sets of features. In our experiments on artificially generated cartoon video clips and voice recordings, we show that we can convert the content of a given sequence into another one by such content swapping. For audio, this allows us to convert a male speaker into a female speaker and vice versa, while for video we can separately manipulate shapes and dynamics. Furthermore, we give empirical evidence for the hypothesis that stochastic RNNs as latent state models are more efficient at compressing and generating long sequences than deterministic ones, which may be relevant for applications in video compression.}
}
Endnote
%0 Conference Paper
%T Disentangled Sequential Autoencoder
%A Li Yingzhen
%A Stephan Mandt
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-yingzhen18a
%I PMLR
%P 5670--5679
%U https://proceedings.mlr.press/v80/yingzhen18a.html
%V 80
%X We present a VAE architecture for encoding and generating high dimensional sequential data, such as video or audio. Our deep generative model learns a latent representation of the data which is split into a static and dynamic part, allowing us to approximately disentangle latent time-dependent features (dynamics) from features which are preserved over time (content). This architecture gives us partial control over generating content and dynamics by conditioning on either one of these sets of features. In our experiments on artificially generated cartoon video clips and voice recordings, we show that we can convert the content of a given sequence into another one by such content swapping. For audio, this allows us to convert a male speaker into a female speaker and vice versa, while for video we can separately manipulate shapes and dynamics. Furthermore, we give empirical evidence for the hypothesis that stochastic RNNs as latent state models are more efficient at compressing and generating long sequences than deterministic ones, which may be relevant for applications in video compression.
APA
Yingzhen, L. & Mandt, S. (2018). Disentangled Sequential Autoencoder. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:5670-5679. Available from https://proceedings.mlr.press/v80/yingzhen18a.html.

Related Material