Compressed-VFL: Communication-Efficient Learning with Vertically Partitioned Data

Timothy J Castiglia, Anirban Das, Shiqiang Wang, Stacy Patterson
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:2738-2766, 2022.

Abstract

We propose Compressed Vertical Federated Learning (C-VFL) for communication-efficient training on vertically partitioned data. In C-VFL, a server and multiple parties collaboratively train a model on their respective features, using several local iterations and periodically sharing compressed intermediate results. Our work provides the first theoretical analysis of the effect of message compression on distributed training over vertically partitioned data. We prove convergence of non-convex objectives at a rate of $O(\frac{1}{\sqrt{T}})$ when the compression error is bounded over the course of training. We provide specific requirements for convergence with common compression techniques, such as quantization and top-$k$ sparsification. Finally, we experimentally show that compression can reduce communication by over $90\%$ without a significant decrease in accuracy compared to VFL without compression.
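The sketch below (not taken from the paper) illustrates the two compressor families named in the abstract, quantization and top-$k$ sparsification, as they might be applied to a party's intermediate embedding before it is sent. Function names, the embedding size, and the parameter choices are illustrative assumptions; the paper's exact compressors and message protocol are defined in the full text.

import numpy as np

def top_k_sparsify(x: np.ndarray, k: int) -> np.ndarray:
    """Keep only the k largest-magnitude entries of x; zero out the rest."""
    out = np.zeros_like(x)
    if k <= 0:
        return out
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out

def uniform_quantize(x: np.ndarray, num_bits: int = 8) -> np.ndarray:
    """Uniformly quantize x to 2**num_bits levels over its value range."""
    lo, hi = x.min(), x.max()
    if hi == lo:
        return x.copy()
    levels = 2 ** num_bits - 1
    scaled = np.round((x - lo) / (hi - lo) * levels)
    return lo + scaled / levels * (hi - lo)

# Example: a party compresses its intermediate embedding before sharing it.
embedding = np.random.randn(128).astype(np.float32)   # hypothetical embedding
msg_sparse = top_k_sparsify(embedding, k=13)          # roughly 90% of entries zeroed
msg_quant = uniform_quantize(embedding, num_bits=4)   # 4-bit uniform quantization

Either compressor bounds the per-message error in a way that can be controlled over training, which is the condition the convergence analysis requires.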

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-castiglia22a,
  title = {Compressed-{VFL}: Communication-Efficient Learning with Vertically Partitioned Data},
  author = {Castiglia, Timothy J and Das, Anirban and Wang, Shiqiang and Patterson, Stacy},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages = {2738--2766},
  year = {2022},
  editor = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume = {162},
  series = {Proceedings of Machine Learning Research},
  month = {17--23 Jul},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v162/castiglia22a/castiglia22a.pdf},
  url = {https://proceedings.mlr.press/v162/castiglia22a.html}
}
Endnote
%0 Conference Paper
%T Compressed-VFL: Communication-Efficient Learning with Vertically Partitioned Data
%A Timothy J Castiglia
%A Anirban Das
%A Shiqiang Wang
%A Stacy Patterson
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-castiglia22a
%I PMLR
%P 2738--2766
%U https://proceedings.mlr.press/v162/castiglia22a.html
%V 162
APA
Castiglia, T. J., Das, A., Wang, S. & Patterson, S. (2022). Compressed-VFL: Communication-Efficient Learning with Vertically Partitioned Data. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:2738-2766. Available from https://proceedings.mlr.press/v162/castiglia22a.html.