Edit-A-Video: Single Video Editing with Object-Aware Consistency

Chaehun Shin, Heeseung Kim, Che Hyun Lee, Sang-gil Lee, Sungroh Yoon
Proceedings of the 15th Asian Conference on Machine Learning, PMLR 222:1215-1230, 2024.

Abstract

With advancements in text-to-image (TTI) models, text-to-video (TTV) models have recently been introduced. Motivated by approaches that adapt diffusion-based TTI models into TTV models, we propose a text-guided video editing framework that requires only a pretrained TTI model and a single text-video pair, which we term Edit-A-Video. The framework consists of two stages: (1) inflating the 2D model into a 3D model by appending temporal modules and tuning it on the source video, and (2) inverting the source video into noise and editing it with the target text through attention map injection. These stages provide temporal modeling and preserve the semantic attributes of the source video. One of the key challenges in video editing is the background inconsistency problem, where regions unrelated to the edit suffer undesirable and temporally inconsistent alterations. To mitigate this issue, we also introduce a novel mask blending method, termed temporal-consistent blending (TC Blending). We improve upon previous mask blending methods to reflect temporal consistency, ensuring that the edited area exhibits smooth transitions while the unedited regions retain spatio-temporal consistency. We present extensive experimental results over various types of text and videos, and demonstrate that the proposed method outperforms baselines in terms of background consistency, text alignment, and video editing quality. Our samples are available at https://editavideo.github.io.
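
As a rough illustration of stage (1), the sketch below shows one common way such a 2D-to-3D inflation can be implemented in PyTorch: each pretrained 2D block is applied per frame, and a newly appended temporal attention module mixes information across frames. The module names, shapes, residual design, and the choice to freeze all spatial weights are our own illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch (assumptions, not the paper's exact design): a frozen 2D block
# processes frames independently, and an appended temporal attention module
# attends along the frame axis at every spatial location.
import torch
import torch.nn as nn


class TemporalAttention(nn.Module):
    """Self-attention over the frame axis for each spatial location."""

    def __init__(self, channels: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor, num_frames: int) -> torch.Tensor:
        # x: (batch * frames, channels, height, width), as produced by a 2D block
        bf, c, h, w = x.shape
        b = bf // num_frames
        # Rearrange so that attention runs over the frame dimension.
        tokens = x.view(b, num_frames, c, h * w).permute(0, 3, 1, 2)  # (b, hw, f, c)
        tokens = tokens.reshape(b * h * w, num_frames, c)
        q = self.norm(tokens)
        out, _ = self.attn(q, q, q)
        out = tokens + out  # residual, so the inflated model starts close to the 2D one
        out = out.reshape(b, h * w, num_frames, c).permute(0, 2, 3, 1)
        return out.reshape(bf, c, h, w)


class InflatedBlock(nn.Module):
    """Wraps a pretrained 2D block with a trainable temporal module."""

    def __init__(self, spatial_block: nn.Module, channels: int):
        super().__init__()
        self.spatial = spatial_block                 # pretrained TTI weights
        for p in self.spatial.parameters():
            p.requires_grad_(False)                  # frozen in this sketch
        self.temporal = TemporalAttention(channels)  # tuned on the source video

    def forward(self, x: torch.Tensor, num_frames: int) -> torch.Tensor:
        x = self.spatial(x)                          # per-frame 2D processing
        return self.temporal(x, num_frames)          # cross-frame mixing
```

In this reading, only the appended temporal parameters (and, possibly, a small subset of the spatial ones) would be tuned on the single source video.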
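
The TC Blending idea can likewise be pictured as mask-guided latent compositing with a temporal smoothing step. The sketch below is only an assumption-laden illustration rather than the paper's exact procedure: it smooths per-frame editing masks over neighbouring frames (here with a simple temporal max-pool) and then keeps the source latents outside the smoothed mask.

```python
# Minimal sketch (an assumption, not the paper's exact rule): per-frame editing
# masks, e.g. derived from cross-attention maps, are spread over temporal
# neighbours before compositing edited and source latents, so unedited regions
# stay pinned to the source video across frames.
import torch
import torch.nn.functional as F


def temporally_smooth_masks(masks: torch.Tensor, window: int = 3) -> torch.Tensor:
    """Spread each frame's editing mask over its temporal neighbours.

    masks: (frames, 1, height, width), float values in [0, 1].
    """
    f = masks.shape[0]
    # Reshape to (1, 1, frames, h, w) so we can max-pool along the frame axis only.
    m = masks.permute(1, 0, 2, 3).unsqueeze(0)
    m = F.max_pool3d(m, kernel_size=(window, 1, 1), stride=1,
                     padding=(window // 2, 0, 0))
    return m.squeeze(0).permute(1, 0, 2, 3)[:f]


def blend_latents(edited: torch.Tensor, source: torch.Tensor,
                  masks: torch.Tensor) -> torch.Tensor:
    """Keep edited content inside the (smoothed) mask, source content outside."""
    m = temporally_smooth_masks(masks)   # (frames, 1, h, w)
    return m * edited + (1.0 - m) * source
```

Blending of this kind would typically be applied to the intermediate latents at each denoising step, so that regions outside the edit remain anchored to the inverted source trajectory.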

Cite this Paper


BibTeX
@InProceedings{pmlr-v222-shin24a,
  title     = {{Edit-A-Video}: {S}ingle Video Editing with Object-Aware Consistency},
  author    = {Shin, Chaehun and Kim, Heeseung and Lee, Che Hyun and Lee, Sang-gil and Yoon, Sungroh},
  booktitle = {Proceedings of the 15th Asian Conference on Machine Learning},
  pages     = {1215--1230},
  year      = {2024},
  editor    = {Yanıkoğlu, Berrin and Buntine, Wray},
  volume    = {222},
  series    = {Proceedings of Machine Learning Research},
  month     = {11--14 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v222/shin24a/shin24a.pdf},
  url       = {https://proceedings.mlr.press/v222/shin24a.html}
}
Endnote
%0 Conference Paper
%T Edit-A-Video: Single Video Editing with Object-Aware Consistency
%A Chaehun Shin
%A Heeseung Kim
%A Che Hyun Lee
%A Sang-gil Lee
%A Sungroh Yoon
%B Proceedings of the 15th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Berrin Yanıkoğlu
%E Wray Buntine
%F pmlr-v222-shin24a
%I PMLR
%P 1215--1230
%U https://proceedings.mlr.press/v222/shin24a.html
%V 222
APA
Shin, C., Kim, H., Lee, C. H., Lee, S., & Yoon, S. (2024). Edit-A-Video: Single Video Editing with Object-Aware Consistency. Proceedings of the 15th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 222:1215-1230. Available from https://proceedings.mlr.press/v222/shin24a.html.