Federated Learning for Feature Generalization with Convex Constraints

Dongwon Kim, Donghee Kim, Sung Kuk Shyn, Kwangsu Kim
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:30248-30266, 2025.

Abstract

Federated learning (FL) often struggles with generalization due to heterogeneous client data. Local models are prone to overfitting their local data distributions, and even transferable features can be distorted during aggregation. To address these challenges, we propose FedCONST, an approach that adaptively modulates update magnitudes based on the global model’s parameter strength. This prevents over-emphasizing well-learned parameters while reinforcing underdeveloped ones. Specifically, FedCONST employs linear convex constraints to ensure training stability and preserve locally learned generalization capabilities during aggregation. A Gradient Signal-to-Noise Ratio (GSNR) analysis further validates FedCONST’s effectiveness in enhancing feature transferability and robustness. As a result, FedCONST effectively aligns local and global objectives, mitigating overfitting and promoting stronger generalization across diverse FL environments, achieving state-of-the-art performance.
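To make the mechanism in the abstract concrete, the following is a minimal illustrative sketch, not the paper's algorithm: it damps per-parameter updates where the global model is already strong, reinforces weak ones, and keeps the aggregate a convex combination of the global weights and the plain FedAvg average. The function names (fedconst_step, gsnr) and the strength proxy are assumptions introduced here for illustration; GSNR follows its standard definition as the squared mean of a gradient component divided by its variance.

import numpy as np

def gsnr(grads):
    """Gradient Signal-to-Noise Ratio per parameter.

    Standard definition: squared mean of each gradient component across
    clients divided by its variance. Higher GSNR is commonly read as a
    more transferable, less noisy feature direction.
    grads: array of shape (num_clients, num_params).
    """
    mean = grads.mean(axis=0)
    var = grads.var(axis=0) + 1e-12  # guard against division by zero
    return mean ** 2 / var

def fedconst_step(global_w, client_ws, strength_fn=np.abs):
    """Hypothetical aggregation sketch (not the authors' implementation).

    Per parameter, the mixing weight alpha lies in [0, 1], so the result
    is a convex combination of the current global weights and the FedAvg
    average: one simple way to realize a linear convex constraint.
    Strong (large-magnitude) global parameters get small alpha and move
    little; weak parameters move further toward the client average.
    """
    avg_w = np.mean(client_ws, axis=0)          # plain FedAvg average
    strength = strength_fn(global_w)            # per-parameter strength proxy
    norm = strength / (strength.max() + 1e-12)  # normalize to [0, 1]
    alpha = 1.0 - norm                          # strong params -> damped update
    return (1.0 - alpha) * global_w + alpha * avg_w

Under this sketch, the largest-magnitude parameter is frozen (alpha = 0) while the smallest-magnitude parameters move fully to the client average; the paper derives its constraint differently, and this block only illustrates the convex-combination idea.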

Cite this Paper

BibTeX
@InProceedings{pmlr-v267-kim25o,
  title     = {Federated Learning for Feature Generalization with Convex Constraints},
  author    = {Kim, Dongwon and Kim, Donghee and Shyn, Sung Kuk and Kim, Kwangsu},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {30248--30266},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/kim25o/kim25o.pdf},
  url       = {https://proceedings.mlr.press/v267/kim25o.html},
  abstract  = {Federated learning (FL) often struggles with generalization due to heterogeneous client data. Local models are prone to overfitting their local data distributions, and even transferable features can be distorted during aggregation. To address these challenges, we propose FedCONST, an approach that adaptively modulates update magnitudes based on the global model’s parameter strength. This prevents over-emphasizing well-learned parameters while reinforcing underdeveloped ones. Specifically, FedCONST employs linear convex constraints to ensure training stability and preserve locally learned generalization capabilities during aggregation. A Gradient Signal-to-Noise Ratio (GSNR) analysis further validates FedCONST’s effectiveness in enhancing feature transferability and robustness. As a result, FedCONST effectively aligns local and global objectives, mitigating overfitting and promoting stronger generalization across diverse FL environments, achieving state-of-the-art performance.}
}
Endnote
%0 Conference Paper
%T Federated Learning for Feature Generalization with Convex Constraints
%A Dongwon Kim
%A Donghee Kim
%A Sung Kuk Shyn
%A Kwangsu Kim
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-kim25o
%I PMLR
%P 30248--30266
%U https://proceedings.mlr.press/v267/kim25o.html
%V 267
%X Federated learning (FL) often struggles with generalization due to heterogeneous client data. Local models are prone to overfitting their local data distributions, and even transferable features can be distorted during aggregation. To address these challenges, we propose FedCONST, an approach that adaptively modulates update magnitudes based on the global model’s parameter strength. This prevents over-emphasizing well-learned parameters while reinforcing underdeveloped ones. Specifically, FedCONST employs linear convex constraints to ensure training stability and preserve locally learned generalization capabilities during aggregation. A Gradient Signal-to-Noise Ratio (GSNR) analysis further validates FedCONST’s effectiveness in enhancing feature transferability and robustness. As a result, FedCONST effectively aligns local and global objectives, mitigating overfitting and promoting stronger generalization across diverse FL environments, achieving state-of-the-art performance.
APA
Kim, D., Kim, D., Shyn, S.K. & Kim, K. (2025). Federated Learning for Feature Generalization with Convex Constraints. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:30248-30266. Available from https://proceedings.mlr.press/v267/kim25o.html.