Enhanced Label Propagation through Affinity Matrix Fusion for Source-Free Domain Adaptation

Li Guo, Yuxuan Xia, Shengjie Wang
Proceedings of The 3rd Conference on Lifelong Learning Agents, PMLR 274:732-749, 2025.

Abstract

Source-free domain adaptation (SFDA) has gained significant attention as a method for transferring knowledge from a model pre-trained on source domains to target domains without accessing the source data. Recent research in SFDA has predominantly adopted a self-training paradigm, focusing on local consistency constraints to refine pseudo-labels during self-training. These methods encourage similar predictions among samples residing in the same local neighborhood. However, despite their effectiveness, such approaches often overlook the significance of global consistency. Moreover, such self-training-based adaptation processes suffer from “confirmation bias”: models use self-generated, sub-optimal pseudo-labels to guide their subsequent training, resulting in a loop of self-reinforcing errors. In this study, we address these limitations through two key contributions. First, we introduce a label propagation method that seamlessly enforces both local and global consistency, leading to more coherent label predictions within the target domain. Second, to mitigate the “confirmation bias”, we aggregate the affinity matrices derived from the current and historical models during the label propagation process. This approach takes advantage of different snapshots of the model to obtain a more accurate representation of the underlying graph structure, significantly enhancing the efficacy of label propagation and yielding more refined pseudo-labels. Extensive experimental evaluations demonstrate that our approach outperforms existing methods by a large margin. Our findings not only highlight the significance of incorporating global consistency within the SFDA framework but also offer a novel approach to mitigating the confirmation bias that arises from the use of noisy pseudo-labels in the self-training paradigm.
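The two ideas in the abstract — label propagation that enforces local and global consistency, and fusing affinity matrices from current and historical model snapshots — can be sketched as follows. This is a minimal illustration in the spirit of classic graph-based label propagation (Zhou et al.'s closed-form solution), not the paper's exact formulation: the function names, the k-NN cosine affinity, and the simple averaging used for fusion are all assumptions made for the sketch.

```python
import numpy as np

def fused_label_propagation(feats_current, feats_snapshot, probs, alpha=0.99, k=5):
    """Illustrative sketch: propagate soft pseudo-labels over a graph whose
    affinity matrix fuses two model snapshots. Names/parameters are
    hypothetical, not taken from the paper."""
    def affinity(feats):
        # Cosine-similarity affinity, keeping only each sample's k strongest neighbors.
        f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
        A = f @ f.T
        np.fill_diagonal(A, 0.0)
        # Zero out everything except the top-k entries per row, then symmetrize.
        drop = np.argsort(A, axis=1)[:, :-k]
        np.put_along_axis(A, drop, 0.0, axis=1)
        return np.maximum(A, A.T)

    # Fuse affinities from the current model and a historical snapshot
    # (plain averaging here, as a stand-in for the paper's aggregation).
    W = 0.5 * (affinity(feats_current) + affinity(feats_snapshot))

    # Symmetrically normalize the graph: S = D^{-1/2} W D^{-1/2}.
    d = W.sum(axis=1)
    d[d == 0] = 1.0
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ W @ D_inv_sqrt

    # Closed-form propagation F = (1 - alpha)(I - alpha * S)^{-1} Y: the inverse
    # sums walks of all lengths, so labels diffuse globally, not just to
    # immediate neighbors (local + global consistency).
    F = np.linalg.solve(np.eye(len(probs)) - alpha * S, (1 - alpha) * probs)
    return F / F.sum(axis=1, keepdims=True)  # refined pseudo-label distribution
```

Because the two snapshots make different errors, edges they disagree on are down-weighted in the fused graph, which is the intuition behind using historical models to counter confirmation bias.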

Cite this Paper


BibTeX
@InProceedings{pmlr-v274-guo25b,
  title     = {Enhanced Label Propagation through Affinity Matrix Fusion for Source-Free Domain Adaptation},
  author    = {Guo, Li and Xia, Yuxuan and Wang, Shengjie},
  booktitle = {Proceedings of The 3rd Conference on Lifelong Learning Agents},
  pages     = {732--749},
  year      = {2025},
  editor    = {Lomonaco, Vincenzo and Melacci, Stefano and Tuytelaars, Tinne and Chandar, Sarath and Pascanu, Razvan},
  volume    = {274},
  series    = {Proceedings of Machine Learning Research},
  month     = {29 Jul--01 Aug},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v274/main/assets/guo25b/guo25b.pdf},
  url       = {https://proceedings.mlr.press/v274/guo25b.html},
  abstract  = {Source-free domain adaptation (SFDA) has gained significant attention as a method for transferring knowledge from a model pre-trained on source domains to target domains without accessing the source data. Recent research in SFDA has predominantly adopted a self-training paradigm, focusing on local consistency constraints to refine pseudo-labels during self-training. These methods encourage similar predictions among samples residing in the same local neighborhood. However, despite their effectiveness, such approaches often overlook the significance of global consistency. Moreover, such self-training-based adaptation processes suffer from “confirmation bias”: models use self-generated, sub-optimal pseudo-labels to guide their subsequent training, resulting in a loop of self-reinforcing errors. In this study, we address these limitations through two key contributions. First, we introduce a label propagation method that seamlessly enforces both local and global consistency, leading to more coherent label predictions within the target domain. Second, to mitigate the “confirmation bias”, we aggregate the affinity matrices derived from the current and historical models during the label propagation process. This approach takes advantage of different snapshots of the model to obtain a more accurate representation of the underlying graph structure, significantly enhancing the efficacy of label propagation and yielding more refined pseudo-labels. Extensive experimental evaluations demonstrate that our approach outperforms existing methods by a large margin. Our findings not only highlight the significance of incorporating global consistency within the SFDA framework but also offer a novel approach to mitigating the confirmation bias that arises from the use of noisy pseudo-labels in the self-training paradigm.}
}
Endnote
%0 Conference Paper
%T Enhanced Label Propagation through Affinity Matrix Fusion for Source-Free Domain Adaptation
%A Li Guo
%A Yuxuan Xia
%A Shengjie Wang
%B Proceedings of The 3rd Conference on Lifelong Learning Agents
%C Proceedings of Machine Learning Research
%D 2025
%E Vincenzo Lomonaco
%E Stefano Melacci
%E Tinne Tuytelaars
%E Sarath Chandar
%E Razvan Pascanu
%F pmlr-v274-guo25b
%I PMLR
%P 732--749
%U https://proceedings.mlr.press/v274/guo25b.html
%V 274
%X Source-free domain adaptation (SFDA) has gained significant attention as a method for transferring knowledge from a model pre-trained on source domains to target domains without accessing the source data. Recent research in SFDA has predominantly adopted a self-training paradigm, focusing on local consistency constraints to refine pseudo-labels during self-training. These methods encourage similar predictions among samples residing in the same local neighborhood. However, despite their effectiveness, such approaches often overlook the significance of global consistency. Moreover, such self-training-based adaptation processes suffer from “confirmation bias”: models use self-generated, sub-optimal pseudo-labels to guide their subsequent training, resulting in a loop of self-reinforcing errors. In this study, we address these limitations through two key contributions. First, we introduce a label propagation method that seamlessly enforces both local and global consistency, leading to more coherent label predictions within the target domain. Second, to mitigate the “confirmation bias”, we aggregate the affinity matrices derived from the current and historical models during the label propagation process. This approach takes advantage of different snapshots of the model to obtain a more accurate representation of the underlying graph structure, significantly enhancing the efficacy of label propagation and yielding more refined pseudo-labels. Extensive experimental evaluations demonstrate that our approach outperforms existing methods by a large margin. Our findings not only highlight the significance of incorporating global consistency within the SFDA framework but also offer a novel approach to mitigating the confirmation bias that arises from the use of noisy pseudo-labels in the self-training paradigm.
APA
Guo, L., Xia, Y. & Wang, S. (2025). Enhanced Label Propagation through Affinity Matrix Fusion for Source-Free Domain Adaptation. Proceedings of The 3rd Conference on Lifelong Learning Agents, in Proceedings of Machine Learning Research 274:732-749. Available from https://proceedings.mlr.press/v274/guo25b.html.