Unsupervised Speech Decomposition via Triple Information Bottleneck

Kaizhi Qian, Yang Zhang, Shiyu Chang, Mark Hasegawa-Johnson, David Cox
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:7836-7846, 2020.

Abstract

Speech information can be roughly decomposed into four components: language content, timbre, pitch, and rhythm. Obtaining disentangled representations of these components is useful in many speech analysis and generation applications. Recently, state-of-the-art voice conversion systems have led to speech representations that can disentangle speaker-dependent and independent information. However, these systems can only disentangle timbre, while information about pitch, rhythm and content is still mixed together. Further disentangling the remaining speech components is an under-determined problem in the absence of explicit annotations for each component, which are difficult and expensive to obtain. In this paper, we propose SpeechSplit, which can blindly decompose speech into its four components by introducing three carefully designed information bottlenecks. SpeechSplit is among the first algorithms that can separately perform style transfer on timbre, pitch and rhythm without text labels. Our code is publicly available at https://github.com/auspicious3000/SpeechSplit.
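The "information bottleneck" idea in the abstract can be illustrated with a toy sketch: constraining the channel dimension limits how much information an encoder can pass through, and randomly resampling along time corrupts rhythm cues so that timing must be recovered from a different input stream. This is a minimal NumPy illustration of those two constraints only, not the paper's implementation; the names `dimension_bottleneck` and `random_resample` are invented for this sketch (the actual model uses learned encoders; see the linked repository).

```python
import numpy as np

rng = np.random.default_rng(0)

def dimension_bottleneck(features, keep_dims):
    """Physical bottleneck: restrict each frame to `keep_dims` channels
    so only limited information can pass through.
    (Toy stand-in: truncation instead of a learned projection.)"""
    return features[:, :keep_dims]

def random_resample(features, rng, max_seg=4):
    """Rhythm bottleneck sketch: chop the sequence into short segments
    and randomly stretch or compress each one, destroying reliable
    timing information in this stream."""
    out, t = [], 0
    while t < len(features):
        seg = features[t:t + rng.integers(1, max_seg + 1)]
        factor = rng.choice([0.5, 1.0, 1.5])
        n = max(1, int(round(len(seg) * factor)))
        idx = np.linspace(0, len(seg) - 1, n).round().astype(int)
        out.append(seg[idx])
        t += len(seg)
    return np.concatenate(out)

# 100 frames of synthetic 80-dim mel-like features.
mel = rng.standard_normal((100, 80))
content_code = dimension_bottleneck(random_resample(mel, rng), keep_dims=8)
print(content_code.shape)  # second dimension is 8; length varies with resampling
```

In the paper's framing, each encoder gets a differently shaped bottleneck, so each can only afford to transmit one component (content, rhythm, or pitch), while timbre is supplied by a speaker embedding.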

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-qian20a,
  title     = {Unsupervised Speech Decomposition via Triple Information Bottleneck},
  author    = {Qian, Kaizhi and Zhang, Yang and Chang, Shiyu and Hasegawa-Johnson, Mark and Cox, David},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {7836--7846},
  year      = {2020},
  editor    = {Daum{\'e} III, Hal and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/qian20a/qian20a.pdf},
  url       = {http://proceedings.mlr.press/v119/qian20a.html},
  abstract  = {Speech information can be roughly decomposed into four components: language content, timbre, pitch, and rhythm. Obtaining disentangled representations of these components is useful in many speech analysis and generation applications. Recently, state-of-the-art voice conversion systems have led to speech representations that can disentangle speaker-dependent and independent information. However, these systems can only disentangle timbre, while information about pitch, rhythm and content is still mixed together. Further disentangling the remaining speech components is an under-determined problem in the absence of explicit annotations for each component, which are difficult and expensive to obtain. In this paper, we propose SpeechSplit, which can blindly decompose speech into its four components by introducing three carefully designed information bottlenecks. SpeechSplit is among the first algorithms that can separately perform style transfer on timbre, pitch and rhythm without text labels. Our code is publicly available at https://github.com/auspicious3000/SpeechSplit.}
}
Endnote
%0 Conference Paper
%T Unsupervised Speech Decomposition via Triple Information Bottleneck
%A Kaizhi Qian
%A Yang Zhang
%A Shiyu Chang
%A Mark Hasegawa-Johnson
%A David Cox
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-qian20a
%I PMLR
%P 7836--7846
%U http://proceedings.mlr.press/v119/qian20a.html
%V 119
%X Speech information can be roughly decomposed into four components: language content, timbre, pitch, and rhythm. Obtaining disentangled representations of these components is useful in many speech analysis and generation applications. Recently, state-of-the-art voice conversion systems have led to speech representations that can disentangle speaker-dependent and independent information. However, these systems can only disentangle timbre, while information about pitch, rhythm and content is still mixed together. Further disentangling the remaining speech components is an under-determined problem in the absence of explicit annotations for each component, which are difficult and expensive to obtain. In this paper, we propose SpeechSplit, which can blindly decompose speech into its four components by introducing three carefully designed information bottlenecks. SpeechSplit is among the first algorithms that can separately perform style transfer on timbre, pitch and rhythm without text labels. Our code is publicly available at https://github.com/auspicious3000/SpeechSplit.
APA
Qian, K., Zhang, Y., Chang, S., Hasegawa-Johnson, M. & Cox, D. (2020). Unsupervised Speech Decomposition via Triple Information Bottleneck. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:7836-7846. Available from http://proceedings.mlr.press/v119/qian20a.html.