Long-Term TalkingFace Generation via Motion-Prior Conditional Diffusion Model

Fei Shen, Cong Wang, Junyao Gao, Qin Guo, Jisheng Dang, Jinhui Tang, Tat-Seng Chua
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:54499-54514, 2025.

Abstract

Recent advances in conditional diffusion models have shown promise for generating realistic TalkingFace videos, yet challenges persist in achieving consistent head movement, synchronized facial expressions, and accurate lip synchronization over extended generations. To address these, we introduce the Motion-priors Conditional Diffusion Model (MCDM), which utilizes both archived and current clip motion priors to enhance motion prediction and ensure temporal consistency. The model consists of three key elements: (1) an archived-clip motion-prior that incorporates historical frames and a reference frame to preserve identity and context; (2) a present-clip motion-prior diffusion model that captures multimodal causality for accurate predictions of head movements, lip sync, and expressions; and (3) a memory-efficient temporal attention mechanism that mitigates error accumulation by dynamically storing and updating motion features. We also introduce the TalkingFace-Wild dataset, a multilingual collection of over 200 hours of footage across 10 languages. Experimental results demonstrate the effectiveness of MCDM in maintaining identity and motion continuity for long-term TalkingFace generation.
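To make the third component more concrete, below is a minimal PyTorch sketch of a memory-bank-style temporal attention layer in the spirit of the "memory-efficient temporal attention mechanism that dynamically stores and updates motion features" described above. It is not the authors' implementation: the class name `MotionMemoryTemporalAttention`, the fixed-size bank, the EMA-style update rule, and all parameters (`bank_size`, `momentum`, feature dimensions) are assumptions for illustration only.

```python
# Hypothetical sketch: temporal attention over a fixed-size, running memory of
# motion features, so attention cost does not grow with video length.
import torch
import torch.nn as nn


class MotionMemoryTemporalAttention(nn.Module):
    """Cross-attends present-clip motion features to a compact memory bank
    of past motion features (illustrative, not the paper's architecture)."""

    def __init__(self, dim: int = 256, heads: int = 4,
                 bank_size: int = 32, momentum: float = 0.9):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.momentum = momentum
        # Fixed-size memory bank: (bank_size, dim), updated clip by clip.
        self.register_buffer("bank", torch.zeros(bank_size, dim))

    @torch.no_grad()
    def update_bank(self, motion_feats: torch.Tensor) -> None:
        # EMA-style blend of the current clip's summary into the memory
        # (a stand-in for "dynamically storing and updating motion features").
        clip_summary = motion_feats.mean(dim=(0, 1))           # (dim,)
        self.bank.mul_(self.momentum).add_((1 - self.momentum) * clip_summary)

    def forward(self, motion_feats: torch.Tensor) -> torch.Tensor:
        # motion_feats: (batch, frames, dim) features of the present clip.
        b = motion_feats.size(0)
        mem = self.bank.unsqueeze(0).expand(b, -1, -1)         # (b, bank_size, dim)
        out, _ = self.attn(query=motion_feats, key=mem, value=mem)
        self.update_bank(motion_feats)
        return motion_feats + out                               # residual connection


if __name__ == "__main__":
    layer = MotionMemoryTemporalAttention()
    clip = torch.randn(2, 16, 256)   # 2 videos, 16 frames, 256-dim motion features
    print(layer(clip).shape)         # torch.Size([2, 16, 256])
```

Because the bank has constant length, each clip attends to a bounded context regardless of how long the generated video becomes, which is one plausible way to keep memory flat while still propagating motion history.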

Cite this Paper
BibTeX
@InProceedings{pmlr-v267-shen25g,
  title     = {Long-Term {T}alking{F}ace Generation via Motion-Prior Conditional Diffusion Model},
  author    = {Shen, Fei and Wang, Cong and Gao, Junyao and Guo, Qin and Dang, Jisheng and Tang, Jinhui and Chua, Tat-Seng},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {54499--54514},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/shen25g/shen25g.pdf},
  url       = {https://proceedings.mlr.press/v267/shen25g.html},
  abstract  = {Recent advances in conditional diffusion models have shown promise for generating realistic TalkingFace videos, yet challenges persist in achieving consistent head movement, synchronized facial expressions, and accurate lip synchronization over extended generations. To address these, we introduce the Motion-priors Conditional Diffusion Model (MCDM), which utilizes both archived and current clip motion priors to enhance motion prediction and ensure temporal consistency. The model consists of three key elements: (1) an archived-clip motion-prior that incorporates historical frames and a reference frame to preserve identity and context; (2) a present-clip motion-prior diffusion model that captures multimodal causality for accurate predictions of head movements, lip sync, and expressions; and (3) a memory-efficient temporal attention mechanism that mitigates error accumulation by dynamically storing and updating motion features. We also introduce the TalkingFace-Wild dataset, a multilingual collection of over 200 hours of footage across 10 languages. Experimental results demonstrate the effectiveness of MCDM in maintaining identity and motion continuity for long-term TalkingFace generation.}
}
Endnote
%0 Conference Paper
%T Long-Term TalkingFace Generation via Motion-Prior Conditional Diffusion Model
%A Fei Shen
%A Cong Wang
%A Junyao Gao
%A Qin Guo
%A Jisheng Dang
%A Jinhui Tang
%A Tat-Seng Chua
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-shen25g
%I PMLR
%P 54499--54514
%U https://proceedings.mlr.press/v267/shen25g.html
%V 267
%X Recent advances in conditional diffusion models have shown promise for generating realistic TalkingFace videos, yet challenges persist in achieving consistent head movement, synchronized facial expressions, and accurate lip synchronization over extended generations. To address these, we introduce the Motion-priors Conditional Diffusion Model (MCDM), which utilizes both archived and current clip motion priors to enhance motion prediction and ensure temporal consistency. The model consists of three key elements: (1) an archived-clip motion-prior that incorporates historical frames and a reference frame to preserve identity and context; (2) a present-clip motion-prior diffusion model that captures multimodal causality for accurate predictions of head movements, lip sync, and expressions; and (3) a memory-efficient temporal attention mechanism that mitigates error accumulation by dynamically storing and updating motion features. We also introduce the TalkingFace-Wild dataset, a multilingual collection of over 200 hours of footage across 10 languages. Experimental results demonstrate the effectiveness of MCDM in maintaining identity and motion continuity for long-term TalkingFace generation.
APA
Shen, F., Wang, C., Gao, J., Guo, Q., Dang, J., Tang, J. & Chua, T.-S. (2025). Long-Term TalkingFace Generation via Motion-Prior Conditional Diffusion Model. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:54499-54514. Available from https://proceedings.mlr.press/v267/shen25g.html.