EvTexture: Event-driven Texture Enhancement for Video Super-Resolution

Dachun Kai, Jiayao Lu, Yueyi Zhang, Xiaoyan Sun
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:22817-22839, 2024.

Abstract

Event-based vision has drawn increasing attention due to its unique characteristics, such as high temporal resolution and high dynamic range. It has recently been used in video super-resolution (VSR) to enhance flow estimation and temporal alignment. Rather than using events for motion learning, we propose the first VSR method that utilizes event signals for texture enhancement. Our method, called EvTexture, leverages the high-frequency details of events to better recover texture regions in VSR. In EvTexture, a new texture enhancement branch is presented. We further introduce an iterative texture enhancement module to progressively explore the high-temporal-resolution event information for texture restoration. This allows for gradual refinement of texture regions across multiple iterations, leading to more accurate and richer high-resolution details. Experimental results show that EvTexture achieves state-of-the-art performance on four datasets. On the texture-rich Vid4 dataset, our method achieves up to a 4.67 dB gain over recent event-based methods. Code: https://github.com/DachunKai/EvTexture.
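The iterative idea in the abstract (repeatedly folding high-frequency event cues into a texture estimate) can be caricatured in a few lines. This is a minimal toy sketch only: the function name, the voxel-grid representation, the residual blending rule, and the `step` parameter are all hypothetical illustrations, not the authors' learned architecture.

```python
import numpy as np

def iterative_texture_enhancement(lr_frame, event_voxels, num_iters=3, step=0.5):
    """Toy sketch of iterative texture refinement (illustrative only).

    lr_frame:     (H, W) low-resolution luminance frame.
    event_voxels: (B, H, W) event voxel grid with B temporal bins; events
                  fire at brightness changes, so they carry high-frequency
                  texture cues at high temporal resolution.
    """
    texture = np.zeros_like(lr_frame)
    for i in range(num_iters):
        # Use one temporal bin per iteration as the high-frequency cue.
        cue = event_voxels[i % event_voxels.shape[0]]
        # Residual update: blend the event cue into the running texture estimate.
        texture = texture + step * (cue - texture)
    # Add the progressively refined texture back onto the frame.
    return lr_frame + texture
```

With zero events the output equals the input frame; each extra iteration moves the texture estimate closer to the event cues, mimicking the "gradual refinement across multiple iterations" described above (the real module operates in a learned feature space, not pixel space).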

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-kai24a,
  title     = {{E}v{T}exture: Event-driven Texture Enhancement for Video Super-Resolution},
  author    = {Kai, Dachun and Lu, Jiayao and Zhang, Yueyi and Sun, Xiaoyan},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {22817--22839},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/kai24a/kai24a.pdf},
  url       = {https://proceedings.mlr.press/v235/kai24a.html},
  abstract  = {Event-based vision has drawn increasing attention due to its unique characteristics, such as high temporal resolution and high dynamic range. It has been used in video super-resolution (VSR) recently to enhance the flow estimation and temporal alignment. Rather than for motion learning, we propose in this paper the first VSR method that utilizes event signals for texture enhancement. Our method, called EvTexture, leverages high-frequency details of events to better recover texture regions in VSR. In our EvTexture, a new texture enhancement branch is presented. We further introduce an iterative texture enhancement module to progressively explore the high-temporal-resolution event information for texture restoration. This allows for gradual refinement of texture regions across multiple iterations, leading to more accurate and rich high-resolution details. Experimental results show that our EvTexture achieves state-of-the-art performance on four datasets. For the Vid4 dataset with rich textures, our method can get up to 4.67dB gain compared with recent event-based methods. Code: https://github.com/DachunKai/EvTexture.}
}
Endnote
%0 Conference Paper
%T EvTexture: Event-driven Texture Enhancement for Video Super-Resolution
%A Dachun Kai
%A Jiayao Lu
%A Yueyi Zhang
%A Xiaoyan Sun
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-kai24a
%I PMLR
%P 22817--22839
%U https://proceedings.mlr.press/v235/kai24a.html
%V 235
%X Event-based vision has drawn increasing attention due to its unique characteristics, such as high temporal resolution and high dynamic range. It has been used in video super-resolution (VSR) recently to enhance the flow estimation and temporal alignment. Rather than for motion learning, we propose in this paper the first VSR method that utilizes event signals for texture enhancement. Our method, called EvTexture, leverages high-frequency details of events to better recover texture regions in VSR. In our EvTexture, a new texture enhancement branch is presented. We further introduce an iterative texture enhancement module to progressively explore the high-temporal-resolution event information for texture restoration. This allows for gradual refinement of texture regions across multiple iterations, leading to more accurate and rich high-resolution details. Experimental results show that our EvTexture achieves state-of-the-art performance on four datasets. For the Vid4 dataset with rich textures, our method can get up to 4.67dB gain compared with recent event-based methods. Code: https://github.com/DachunKai/EvTexture.
APA
Kai, D., Lu, J., Zhang, Y. & Sun, X. (2024). EvTexture: Event-driven Texture Enhancement for Video Super-Resolution. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:22817-22839. Available from https://proceedings.mlr.press/v235/kai24a.html.

Related Material