SinFusion: Training Diffusion Models on a Single Image or Video

Yaniv Nikankin, Niv Haim, Michal Irani
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:26199-26214, 2023.

Abstract

Diffusion models have exhibited tremendous progress in image and video generation, exceeding GANs in quality and diversity. However, they are usually trained on very large datasets and are not naturally suited to manipulating a given input image or video. In this paper we show how this can be resolved by training a diffusion model on a single input image or video. Our image/video-specific diffusion model (SinFusion) learns the appearance and dynamics of the single image or video, while utilizing the conditioning capabilities of diffusion models. It can solve a wide array of image/video-specific manipulation tasks. In particular, our model can learn, from only a few frames, the motion and dynamics of a single input video. It can then generate diverse new video samples of the same dynamic scene, extrapolate short videos into long ones (both forward and backward in time), and perform video upsampling. Most of these tasks are not realizable by current video-specific generation methods.
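
To make the core idea concrete: the only "dataset" is the single input itself, e.g. random crops drawn from one image. Below is a minimal, hypothetical PyTorch sketch of that idea, training a small convolutional denoiser with the standard DDPM noise-prediction loss on crops of a single image. The architecture, hyperparameters, and names (TinyDenoiser, random_crops) are illustrative assumptions, not the authors' implementation, which additionally handles video frames and conditioning.

# A minimal, hypothetical sketch: fit a denoising diffusion model to
# random crops of ONE image. NOT the authors' exact architecture or
# training recipe; all names and hyperparameters below are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 1000                                    # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)       # linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

class TinyDenoiser(nn.Module):
    """Small conv net that predicts the noise added to a crop."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1, ch, 3, padding=1), nn.SiLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.SiLU(),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, x, t):
        # Broadcast the normalized timestep as an extra input channel.
        t_map = (t.float() / T).view(-1, 1, 1, 1).expand(-1, 1, *x.shape[2:])
        return self.net(torch.cat([x, t_map], dim=1))

def random_crops(img, n, size=64):
    """Sample n random crops from a single image tensor (3, H, W)."""
    _, H, W = img.shape
    ys = torch.randint(0, H - size + 1, (n,)).tolist()
    xs = torch.randint(0, W - size + 1, (n,)).tolist()
    return torch.stack([img[:, y:y + size, x:x + size] for y, x in zip(ys, xs)])

img = torch.rand(3, 256, 256) * 2 - 1       # stand-in for the single training image, in [-1, 1]
model = TinyDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=2e-4)

for step in range(1000):                    # illustrative step count
    x0 = random_crops(img, n=16)            # crops of the one image act as the "dataset"
    t = torch.randint(0, T, (x0.shape[0],))
    noise = torch.randn_like(x0)
    a = alphas_bar[t].view(-1, 1, 1, 1)
    xt = a.sqrt() * x0 + (1 - a).sqrt() * noise   # forward diffusion q(x_t | x_0)
    loss = F.mse_loss(model(xt, t), noise)        # standard epsilon-prediction loss
    opt.zero_grad(); loss.backward(); opt.step()

Because all training crops come from one image, the model overfits (by design) to that image's patch statistics, which is what lets sampling produce diverse variations of the same scene rather than arbitrary images.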

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-nikankin23a,
  title     = {{S}in{F}usion: Training Diffusion Models on a Single Image or Video},
  author    = {Nikankin, Yaniv and Haim, Niv and Irani, Michal},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {26199--26214},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/nikankin23a/nikankin23a.pdf},
  url       = {https://proceedings.mlr.press/v202/nikankin23a.html}
}
Endnote
%0 Conference Paper
%T SinFusion: Training Diffusion Models on a Single Image or Video
%A Yaniv Nikankin
%A Niv Haim
%A Michal Irani
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-nikankin23a
%I PMLR
%P 26199--26214
%U https://proceedings.mlr.press/v202/nikankin23a.html
%V 202
APA
Nikankin, Y., Haim, N. & Irani, M. (2023). SinFusion: Training Diffusion Models on a Single Image or Video. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:26199-26214. Available from https://proceedings.mlr.press/v202/nikankin23a.html.
