COALA: A Practical and Vision-Centric Federated Learning Platform

Weiming Zhuang, Jian Xu, Chen Chen, Jingtao Li, Lingjuan Lyu
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:62723-62742, 2024.

Abstract

We present COALA, a vision-centric Federated Learning (FL) platform, and a suite of benchmarks for practical FL scenarios, which we categorize as task, data, and model levels. At the task level, COALA extends support from simple classification to 15 computer vision tasks, including object detection, segmentation, pose estimation, and more. It also facilitates federated multiple-task learning, allowing clients to train on multiple tasks simultaneously. At the data level, COALA goes beyond supervised FL to benchmark both semi-supervised FL and unsupervised FL. It also benchmarks feature distribution shifts in addition to the commonly considered label distribution shifts. Beyond static data, it supports federated continual learning for continuously changing data in real-world scenarios. At the model level, COALA benchmarks FL with split models and with different models on different clients. The COALA platform offers three degrees of customization for these practical FL scenarios: configuration customization, component customization, and workflow customization. We conduct systematic benchmarking experiments for the practical FL scenarios and highlight potential opportunities for further advancements in FL.
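The three customization levels described above can be illustrated with a minimal, self-contained sketch. This is a hypothetical toy design, not COALA's actual API: `FLConfig`, `Client`, and `Server` are illustrative names, and the scalar "model" stands in for real parameters. Configuration customization is a plain config object, component customization is swapping the client class, and workflow customization is overriding the server's round loop.

```python
# Hypothetical sketch (NOT COALA's real API) of configuration,
# component, and workflow customization in a federated setup.
from dataclasses import dataclass
from typing import List


# 1. Configuration customization: scenario settings live in one config.
@dataclass
class FLConfig:
    task: str = "classification"   # could be "detection", "segmentation", ...
    num_clients: int = 4
    rounds: int = 2


# 2. Component customization: the client trainer is a swappable class.
class Client:
    def __init__(self, cid: int):
        self.cid = cid

    def train(self, global_model: float) -> float:
        # Toy "training": nudge the scalar model toward this client's id,
        # mimicking heterogeneous local updates.
        return global_model + 0.1 * (self.cid - global_model)


# 3. Workflow customization: subclass Server and override run() or
#    aggregate() to change the training workflow.
class Server:
    def __init__(self, cfg: FLConfig, client_cls=Client):
        self.cfg = cfg
        self.clients = [client_cls(i) for i in range(cfg.num_clients)]
        self.model = 0.0

    def aggregate(self, updates: List[float]) -> float:
        # FedAvg-style uniform averaging of client updates.
        return sum(updates) / len(updates)

    def run(self) -> float:
        for _ in range(self.cfg.rounds):
            updates = [c.train(self.model) for c in self.clients]
            self.model = self.aggregate(updates)
        return self.model
```

Passing a custom client class (e.g. `Server(FLConfig(), client_cls=MyTrainer)`) or subclassing `Server` changes one level without touching the others, which is the separation the platform's customization tiers aim for.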

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-zhuang24c,
  title     = {{COALA}: A Practical and Vision-Centric Federated Learning Platform},
  author    = {Zhuang, Weiming and Xu, Jian and Chen, Chen and Li, Jingtao and Lyu, Lingjuan},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {62723--62742},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhuang24c/zhuang24c.pdf},
  url       = {https://proceedings.mlr.press/v235/zhuang24c.html},
  abstract  = {We present COALA, a vision-centric Federated Learning (FL) platform, and a suite of benchmarks for practical FL scenarios, which we categorize as task, data, and model levels. At the task level, COALA extends support from simple classification to 15 computer vision tasks, including object detection, segmentation, pose estimation, and more. It also facilitates federated multiple-task learning, allowing clients to train on multiple tasks simultaneously. At the data level, COALA goes beyond supervised FL to benchmark both semi-supervised FL and unsupervised FL. It also benchmarks feature distribution shifts other than commonly considered label distribution shifts. In addition to dealing with static data, it supports federated continual learning for continuously changing data in real-world scenarios. At the model level, COALA benchmarks FL with split models and different models in different clients. COALA platform offers three degrees of customization for these practical FL scenarios, including configuration customization, components customization, and workflow customization. We conduct systematic benchmarking experiments for the practical FL scenarios and highlight potential opportunities for further advancements in FL.}
}
Endnote
%0 Conference Paper
%T COALA: A Practical and Vision-Centric Federated Learning Platform
%A Weiming Zhuang
%A Jian Xu
%A Chen Chen
%A Jingtao Li
%A Lingjuan Lyu
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-zhuang24c
%I PMLR
%P 62723--62742
%U https://proceedings.mlr.press/v235/zhuang24c.html
%V 235
%X We present COALA, a vision-centric Federated Learning (FL) platform, and a suite of benchmarks for practical FL scenarios, which we categorize as task, data, and model levels. At the task level, COALA extends support from simple classification to 15 computer vision tasks, including object detection, segmentation, pose estimation, and more. It also facilitates federated multiple-task learning, allowing clients to train on multiple tasks simultaneously. At the data level, COALA goes beyond supervised FL to benchmark both semi-supervised FL and unsupervised FL. It also benchmarks feature distribution shifts other than commonly considered label distribution shifts. In addition to dealing with static data, it supports federated continual learning for continuously changing data in real-world scenarios. At the model level, COALA benchmarks FL with split models and different models in different clients. COALA platform offers three degrees of customization for these practical FL scenarios, including configuration customization, components customization, and workflow customization. We conduct systematic benchmarking experiments for the practical FL scenarios and highlight potential opportunities for further advancements in FL.
APA
Zhuang, W., Xu, J., Chen, C., Li, J. & Lyu, L. (2024). COALA: A Practical and Vision-Centric Federated Learning Platform. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:62723-62742. Available from https://proceedings.mlr.press/v235/zhuang24c.html.