FedLF: Adaptive Logit Adjustment and Feature Optimization in Federated Long-Tailed Learning

Xiuhua Lu, Peng Li, Xuefeng Jiang
Proceedings of the 16th Asian Conference on Machine Learning, PMLR 260:303-318, 2025.

Abstract

Federated learning offers a paradigm for preserving privacy in distributed machine learning. However, real-world datasets distributed across clients are inevitably heterogeneous, and when aggregated globally they collectively exhibit a long-tailed distribution, which severely degrades model performance. Traditional federated learning approaches primarily address data heterogeneity among clients but fail to address the class bias present in globally long-tailed data. As a result, the trained model focuses on the head classes while neglecting the equally important tail classes, so a methodology that considers all classes holistically is essential. To address these problems, we propose a new method, FedLF, which introduces three modifications in the local training phase: adaptive logit adjustment, continuous class-centred optimization, and feature decorrelation. We compare seven different methods under varying degrees of data heterogeneity and long-tailed distribution. Extensive experiments on the benchmark datasets CIFAR-10-LT and CIFAR-100-LT demonstrate that our approach effectively mitigates the performance degradation caused by data heterogeneity and long-tailed distributions. Our code is available at https://github.com/18sym/FedLF.
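For readers skimming this page, a minimal sketch of the first of the three components may be helpful. The snippet below implements classical logit adjustment (adding log class priors to the logits before cross-entropy) in a client's local training step; FedLF's adaptive variant derives its per-class offsets differently, so the function name logit_adjusted_loss, the tau temperature, and the use of raw local class counts are illustrative assumptions rather than the paper's exact rule. See the linked repository for the official implementation.

    # Illustrative sketch only: classical logit adjustment, not FedLF's
    # adaptive rule. Assumes per-class sample counts are available locally.
    import torch
    import torch.nn.functional as F

    def logit_adjusted_loss(logits, targets, class_counts, tau=1.0):
        """Cross-entropy on logits shifted by scaled log class priors.

        logits:       (batch, num_classes) raw model outputs
        targets:      (batch,) integer class labels
        class_counts: (num_classes,) per-class sample counts
        tau:          temperature controlling the adjustment strength
        """
        priors = class_counts.float() / class_counts.sum()
        # Head classes receive a larger additive offset, so the model must
        # produce relatively stronger raw logits to predict them, which
        # implicitly boosts the tail classes at training time.
        adjusted = logits + tau * torch.log(priors + 1e-12)
        return F.cross_entropy(adjusted, targets)

    # Example usage with a made-up long-tailed count vector:
    logits = torch.randn(8, 10)
    targets = torch.randint(0, 10, (8,))
    counts = torch.tensor([5000, 3000, 1500, 800, 400, 200, 100, 50, 25, 10])
    loss = logit_adjusted_loss(logits, targets, counts)

The class-centred optimization and feature decorrelation terms described in the abstract are additional regularizers on the feature extractor and are not sketched here.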

Cite this Paper


BibTeX
@InProceedings{pmlr-v260-lu25a,
  title     = {{FedLF}: {A}daptive Logit Adjustment and Feature Optimization in Federated Long-Tailed Learning},
  author    = {Lu, Xiuhua and Li, Peng and Jiang, Xuefeng},
  booktitle = {Proceedings of the 16th Asian Conference on Machine Learning},
  pages     = {303--318},
  year      = {2025},
  editor    = {Nguyen, Vu and Lin, Hsuan-Tien},
  volume    = {260},
  series    = {Proceedings of Machine Learning Research},
  month     = {05--08 Dec},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v260/main/assets/lu25a/lu25a.pdf},
  url       = {https://proceedings.mlr.press/v260/lu25a.html}
}
Endnote
%0 Conference Paper
%T FedLF: Adaptive Logit Adjustment and Feature Optimization in Federated Long-Tailed Learning
%A Xiuhua Lu
%A Peng Li
%A Xuefeng Jiang
%B Proceedings of the 16th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Vu Nguyen
%E Hsuan-Tien Lin
%F pmlr-v260-lu25a
%I PMLR
%P 303--318
%U https://proceedings.mlr.press/v260/lu25a.html
%V 260
APA
Lu, X., Li, P., & Jiang, X. (2025). FedLF: Adaptive Logit Adjustment and Feature Optimization in Federated Long-Tailed Learning. Proceedings of the 16th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 260:303-318. Available from https://proceedings.mlr.press/v260/lu25a.html.
