ColorMamba: Towards High-quality NIR-to-RGB Spectral Translation with Mamba

Huiyu Zhai, Guang Jin, Xingxing Yang, Guosheng Kang
Proceedings of the 16th Asian Conference on Machine Learning, PMLR 260:765-780, 2025.

Abstract

Translating NIR images to the visible spectrum is challenging due to cross-domain complexities. Current models struggle to balance a broad receptive field with computational efficiency, limiting their practical use. Although the Selective Structured State Space Model, especially its improved version Mamba, excels in generative tasks by capturing long-range dependencies with linear complexity, its default approach of converting 2D images into 1D sequences neglects local context. In this work, we propose a simple but effective backbone, dubbed ColorMamba, which is the first to introduce Mamba into spectral translation tasks. To exploit both global long-range dependencies and local context for efficient spectral translation, we introduce learnable padding tokens that mark image boundaries and prevent potential confusion within the sequence model. Furthermore, local convolutional enhancement and agent attention are designed to improve the vanilla Mamba. Moreover, we exploit the HSV color space to provide multi-scale guidance during reconstruction for more accurate spectral translation. Extensive experiments show that our ColorMamba achieves a 1.02 dB improvement in PSNR over the state-of-the-art method. Our code is available at https://github.com/AlexYangxx/ColorMamba/.
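The boundary-token idea mentioned in the abstract can be illustrated with a minimal sketch (not the authors' implementation, which is in the linked repository): when a 2D image is flattened row-major into a 1D sequence for a state-space model, a padding token is appended after each row so the sequence model can tell where one row ends and the next begins. Here a plain sentinel string stands in for what would be a learned embedding vector.

```python
def flatten_with_boundary_tokens(image, pad_token="<PAD>"):
    """Row-major flatten of a 2D grid, appending a boundary token after
    each row so a 1D sequence model can distinguish row boundaries."""
    seq = []
    for row in image:
        seq.extend(row)          # pixels of this row, left to right
        seq.append(pad_token)    # marks the image boundary in the sequence
    return seq

img = [[1, 2], [3, 4]]
print(flatten_with_boundary_tokens(img))
# -> [1, 2, '<PAD>', 3, 4, '<PAD>']
```

Without such tokens, the last pixel of one row and the first pixel of the next appear adjacent in the sequence even though they are far apart in the image, which is the boundary confusion the paper's learnable tokens aim to prevent.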

Cite this Paper


BibTeX
@InProceedings{pmlr-v260-zhai25a,
  title     = {{ColorMamba}: {T}owards High-quality NIR-to-RGB Spectral Translation with Mamba},
  author    = {Zhai, Huiyu and Jin, Guang and Yang, Xingxing and Kang, Guosheng},
  booktitle = {Proceedings of the 16th Asian Conference on Machine Learning},
  pages     = {765--780},
  year      = {2025},
  editor    = {Nguyen, Vu and Lin, Hsuan-Tien},
  volume    = {260},
  series    = {Proceedings of Machine Learning Research},
  month     = {05--08 Dec},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v260/main/assets/zhai25a/zhai25a.pdf},
  url       = {https://proceedings.mlr.press/v260/zhai25a.html},
  abstract  = {Translating NIR to the visible spectrum is challenging due to cross-domain complexities. Current models struggle to balance a broad receptive field with computational efficiency, limiting practical use. Although the Selective Structured State Space Model, especially the improved version, Mamba, excels in generative tasks by capturing long-range dependencies with linear complexity, its default approach of converting 2D images into 1D sequences neglects local context. In this work, we propose a simple but effective backbone, dubbed ColorMamba, which first introduces Mamba into spectral translation tasks. To explore global long-range dependencies and local context for efficient spectral translation, we introduce learnable padding tokens to enhance the distinction of image boundaries and prevent potential confusion within the sequence model. Furthermore, local convolutional enhancement and agent attention are designed to improve the vanilla Mamba. Moreover, we exploit the HSV color to provide multi-scale guidance in the reconstruction process for more accurate spectral translation. Extensive experiments show that our ColorMamba achieves a 1.02 improvement in terms of PSNR compared with the state-of-the-art method. Our code is available at https://github.com/AlexYangxx/ColorMamba/.}
}
Endnote
%0 Conference Paper
%T ColorMamba: Towards High-quality NIR-to-RGB Spectral Translation with Mamba
%A Huiyu Zhai
%A Guang Jin
%A Xingxing Yang
%A Guosheng Kang
%B Proceedings of the 16th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Vu Nguyen
%E Hsuan-Tien Lin
%F pmlr-v260-zhai25a
%I PMLR
%P 765--780
%U https://proceedings.mlr.press/v260/zhai25a.html
%V 260
%X Translating NIR to the visible spectrum is challenging due to cross-domain complexities. Current models struggle to balance a broad receptive field with computational efficiency, limiting practical use. Although the Selective Structured State Space Model, especially the improved version, Mamba, excels in generative tasks by capturing long-range dependencies with linear complexity, its default approach of converting 2D images into 1D sequences neglects local context. In this work, we propose a simple but effective backbone, dubbed ColorMamba, which first introduces Mamba into spectral translation tasks. To explore global long-range dependencies and local context for efficient spectral translation, we introduce learnable padding tokens to enhance the distinction of image boundaries and prevent potential confusion within the sequence model. Furthermore, local convolutional enhancement and agent attention are designed to improve the vanilla Mamba. Moreover, we exploit the HSV color to provide multi-scale guidance in the reconstruction process for more accurate spectral translation. Extensive experiments show that our ColorMamba achieves a 1.02 improvement in terms of PSNR compared with the state-of-the-art method. Our code is available at https://github.com/AlexYangxx/ColorMamba/.
APA
Zhai, H., Jin, G., Yang, X., & Kang, G. (2025). ColorMamba: Towards High-quality NIR-to-RGB Spectral Translation with Mamba. Proceedings of the 16th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 260:765-780. Available from https://proceedings.mlr.press/v260/zhai25a.html.