Visible-Infrared Person Re-Identification via Feature Fusion and Deep Mutual Learning

Ziyang Lin, Banghai Wang
Proceedings of the 16th Asian Conference on Machine Learning, PMLR 260:79-94, 2025.

Abstract

Visible-Infrared Person Re-Identification (VI-ReID) aims to retrieve person images captured from both visible and infrared camera views. To address the modality difference between visible and infrared images, we propose a VI-ReID network based on Feature Fusion and Deep Mutual Learning (DML). To enhance the model’s robustness to color, we introduce a novel data augmentation method called Random Combination of Channels (RCC), which generates new images by randomly combining the R, G, and B channels of visible images. Furthermore, to capture more informative person features, we fuse the features from the middle layer of the network. To reduce the model’s dependence on global features, we employ the fusion branch as an auxiliary branch, training the global and fusion branches synchronously through Deep Mutual Learning. Extensive experiments on the SYSU-MM01 and RegDB datasets validate the superiority of our method, which performs strongly against other state-of-the-art approaches.
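The abstract’s two main ingredients can be sketched in code. First, a minimal reading of the RCC augmentation, assuming that “randomly combining the R, G, and B channels” means sampling each output channel (with replacement) from the input’s three channels; the function name and tensor layout below are illustrative, not taken from the paper:

    import random
    import torch

    def random_channel_combination(img: torch.Tensor) -> torch.Tensor:
        # img: (3, H, W) visible image in RGB order.
        # Build a new 3-channel image whose channels are drawn, with
        # replacement, from the original R, G, B channels, e.g. (B, R, R).
        idx = [random.randrange(3) for _ in range(3)]
        return img[idx, :, :]

Second, the mutual-learning term of Deep Mutual Learning (Zhang et al., 2018), in which each branch’s class posterior mimics the other’s via KL divergence. How the paper weights, schedules, or detaches these terms is not stated in the abstract, so this is only the standard symmetric formulation applied to the global and fusion branches:

    import torch.nn.functional as F

    def dml_loss(logits_global: torch.Tensor, logits_fusion: torch.Tensor) -> torch.Tensor:
        # Symmetric KL between the two branches' softmax outputs:
        # KL(p_fusion || p_global) pulls the global branch toward the
        # fusion branch, and vice versa.
        log_pg = F.log_softmax(logits_global, dim=1)
        log_pf = F.log_softmax(logits_fusion, dim=1)
        return (F.kl_div(log_pg, log_pf.exp(), reduction="batchmean")
                + F.kl_div(log_pf, log_pg.exp(), reduction="batchmean"))

In practice this mutual term would be added to each branch’s own supervised (identity/triplet) loss, so the auxiliary fusion branch regularizes the global branch rather than replacing it.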

Cite this Paper


BibTeX
@InProceedings{pmlr-v260-lin25a,
  title     = {Visible-Infrared Person Re-Identification via Feature Fusion and Deep Mutual Learning},
  author    = {Lin, Ziyang and Wang, Banghai},
  booktitle = {Proceedings of the 16th Asian Conference on Machine Learning},
  pages     = {79--94},
  year      = {2025},
  editor    = {Nguyen, Vu and Lin, Hsuan-Tien},
  volume    = {260},
  series    = {Proceedings of Machine Learning Research},
  month     = {05--08 Dec},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v260/main/assets/lin25a/lin25a.pdf},
  url       = {https://proceedings.mlr.press/v260/lin25a.html},
  abstract  = {Visible-Infrared Person Re-Identification (VI-ReID) aims to retrieve person images captured from both visible and infrared camera views. To address the modality difference between visible and infrared images, we propose a VI-ReID network based on Feature Fusion and Deep Mutual Learning (DML). To enhance the model’s robustness to color, we introduce a novel data augmentation method called Random Combination of Channels (RCC), which generates new images by randomly combining the R, G, and B channels of visible images. Furthermore, to capture more informative person features, we fuse the features from the middle layer of the network. To reduce the model’s dependence on global features, we employ the fusion branch as an auxiliary branch, training the global and fusion branches synchronously through Deep Mutual Learning. Extensive experiments on the SYSU-MM01 and RegDB datasets validate the superiority of our method, which performs strongly against other state-of-the-art approaches.}
}
Endnote
%0 Conference Paper
%T Visible-Infrared Person Re-Identification via Feature Fusion and Deep Mutual Learning
%A Ziyang Lin
%A Banghai Wang
%B Proceedings of the 16th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Vu Nguyen
%E Hsuan-Tien Lin
%F pmlr-v260-lin25a
%I PMLR
%P 79--94
%U https://proceedings.mlr.press/v260/lin25a.html
%V 260
%X Visible-Infrared Person Re-Identification (VI-ReID) aims to retrieve person images captured from both visible and infrared camera views. To address the modality difference between visible and infrared images, we propose a VI-ReID network based on Feature Fusion and Deep Mutual Learning (DML). To enhance the model’s robustness to color, we introduce a novel data augmentation method called Random Combination of Channels (RCC), which generates new images by randomly combining the R, G, and B channels of visible images. Furthermore, to capture more informative person features, we fuse the features from the middle layer of the network. To reduce the model’s dependence on global features, we employ the fusion branch as an auxiliary branch, training the global and fusion branches synchronously through Deep Mutual Learning. Extensive experiments on the SYSU-MM01 and RegDB datasets validate the superiority of our method, which performs strongly against other state-of-the-art approaches.
APA
Lin, Z. & Wang, B. (2025). Visible-Infrared Person Re-Identification via Feature Fusion and Deep Mutual Learning. Proceedings of the 16th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 260:79-94. Available from https://proceedings.mlr.press/v260/lin25a.html.