The Panaceas for Improving Low-Rank Decomposition in Communication-Efficient Federated Learning

Shiwei Li, Xiandi Luo, Haozhao Wang, Xing Tang, Shijie Xu, Weihong Luo, Yuhua Li, Xiuqiang He, Ruixuan Li
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:35536-35561, 2025.

Abstract

To improve the training efficiency of federated learning (FL), previous research has employed low-rank decomposition techniques to reduce communication overhead. In this paper, we seek to enhance the performance of these low-rank decomposition methods. Specifically, we focus on three key issues related to decomposition in FL: what to decompose, how to decompose, and how to aggregate. Subsequently, we introduce three novel techniques: Model Update Decomposition (MUD), Block-wise Kronecker Decomposition (BKD), and Aggregation-Aware Decomposition (AAD), each targeting a specific issue. These techniques are complementary and can be applied simultaneously to achieve optimal performance. Additionally, we provide a rigorous theoretical analysis to ensure the convergence of the proposed MUD. Extensive experimental results show that our approach achieves faster convergence and superior accuracy compared to relevant baseline methods. The code is available at https://github.com/Leopold1423/fedmud-icml25.
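The abstract refers to reducing communication by sending low-rank factors instead of full weight updates. As a rough, illustrative sketch only (it does not reproduce the paper's MUD, BKD, or AAD constructions, and the SVD-based factorization and shapes here are assumptions), the following Python snippet shows the generic idea: a client factorizes its dense update into rank-r factors, and the server reconstructs an approximation, cutting the transmitted parameter count from m*n to r*(m+n).

# Minimal sketch, assuming a generic SVD-based rank-r compression of one
# update matrix; this is not the paper's exact algorithm.
import numpy as np

def low_rank_factors(delta_w: np.ndarray, rank: int):
    """Approximate an m x n model update as U @ V with U (m x r) and V (r x n)."""
    u, s, vt = np.linalg.svd(delta_w, full_matrices=False)
    U = u[:, :rank] * s[:rank]   # absorb singular values into the left factor
    V = vt[:rank, :]
    return U, V

def reconstruct(U: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Server-side reconstruction of the approximate model update."""
    return U @ V

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    delta_w = rng.standard_normal((512, 256))   # a client's dense update (illustrative size)
    U, V = low_rank_factors(delta_w, rank=8)
    sent = U.size + V.size                      # parameters actually communicated
    print(f"dense: {delta_w.size} params, low-rank: {sent} params "
          f"({sent / delta_w.size:.1%} of original)")
    err = np.linalg.norm(delta_w - reconstruct(U, V)) / np.linalg.norm(delta_w)
    print(f"relative reconstruction error: {err:.3f}")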

Cite this Paper

BibTeX
@InProceedings{pmlr-v267-li25bn,
  title     = {The Panaceas for Improving Low-Rank Decomposition in Communication-Efficient Federated Learning},
  author    = {Li, Shiwei and Luo, Xiandi and Wang, Haozhao and Tang, Xing and Xu, Shijie and Luo, Weihong and Li, Yuhua and He, Xiuqiang and Li, Ruixuan},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {35536--35561},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/li25bn/li25bn.pdf},
  url       = {https://proceedings.mlr.press/v267/li25bn.html},
  abstract  = {To improve the training efficiency of federated learning (FL), previous research has employed low-rank decomposition techniques to reduce communication overhead. In this paper, we seek to enhance the performance of these low-rank decomposition methods. Specifically, we focus on three key issues related to decomposition in FL: what to decompose, how to decompose, and how to aggregate. Subsequently, we introduce three novel techniques: Model Update Decomposition (MUD), Block-wise Kronecker Decomposition (BKD), and Aggregation-Aware Decomposition (AAD), each targeting a specific issue. These techniques are complementary and can be applied simultaneously to achieve optimal performance. Additionally, we provide a rigorous theoretical analysis to ensure the convergence of the proposed MUD. Extensive experimental results show that our approach achieves faster convergence and superior accuracy compared to relevant baseline methods. The code is available at https://github.com/Leopold1423/fedmud-icml25.}
}
Endnote
%0 Conference Paper
%T The Panaceas for Improving Low-Rank Decomposition in Communication-Efficient Federated Learning
%A Shiwei Li
%A Xiandi Luo
%A Haozhao Wang
%A Xing Tang
%A Shijie Xu
%A Weihong Luo
%A Yuhua Li
%A Xiuqiang He
%A Ruixuan Li
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-li25bn
%I PMLR
%P 35536--35561
%U https://proceedings.mlr.press/v267/li25bn.html
%V 267
%X To improve the training efficiency of federated learning (FL), previous research has employed low-rank decomposition techniques to reduce communication overhead. In this paper, we seek to enhance the performance of these low-rank decomposition methods. Specifically, we focus on three key issues related to decomposition in FL: what to decompose, how to decompose, and how to aggregate. Subsequently, we introduce three novel techniques: Model Update Decomposition (MUD), Block-wise Kronecker Decomposition (BKD), and Aggregation-Aware Decomposition (AAD), each targeting a specific issue. These techniques are complementary and can be applied simultaneously to achieve optimal performance. Additionally, we provide a rigorous theoretical analysis to ensure the convergence of the proposed MUD. Extensive experimental results show that our approach achieves faster convergence and superior accuracy compared to relevant baseline methods. The code is available at https://github.com/Leopold1423/fedmud-icml25.
APA
Li, S., Luo, X., Wang, H., Tang, X., Xu, S., Luo, W., Li, Y., He, X. & Li, R. (2025). The Panaceas for Improving Low-Rank Decomposition in Communication-Efficient Federated Learning. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:35536-35561. Available from https://proceedings.mlr.press/v267/li25bn.html.