Communication-efficient Distributed Sparse Linear Discriminant Analysis

Lu Tian, Quanquan Gu
Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR 54:1178-1187, 2017.

Abstract

We propose a communication-efficient distributed estimation method for sparse linear discriminant analysis (LDA) in the high dimensional regime. Our method distributes the data of size N into m machines, and estimates a local sparse LDA estimator on each machine using the data subset of size N/m. After the distributed estimation, our method aggregates the debiased local estimators from m machines, and sparsifies the aggregated estimator. We show that the aggregated estimator attains the same statistical rate as the centralized estimation method, as long as the number of machines m is chosen appropriately. Moreover, we prove that our method can attain the model selection consistency under a milder condition than the centralized method. Experiments on both synthetic and real datasets corroborate our theory.
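The abstract describes a divide-and-conquer pipeline: each machine fits a local sparse LDA estimator, debiases it, and the aggregated (averaged) debiased estimator is then sparsified on the master. Below is a minimal NumPy sketch of that debias-average-threshold pattern. The local ridge-plus-soft-thresholding estimator, the ridge-inverse debiasing matrix, and the hard-thresholding rule are illustrative assumptions for the sketch, not the paper's exact constructions.

# Minimal sketch of the "debias, average, then sparsify" pattern for sparse LDA.
# NOTE: the local estimator, debiasing matrix, and thresholds below are
# illustrative stand-ins, not the exact constructions from the paper.
import numpy as np

def local_sparse_lda(X1, X2, lam):
    """Sparse local estimate of beta = Sigma^{-1}(mu1 - mu2) on one machine.
    Illustrative choice: ridge-regularized solve followed by soft-thresholding."""
    n1, n2 = len(X1), len(X2)
    mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
    delta = mu1 - mu2
    # Pooled sample covariance of the two classes.
    S = ((X1 - mu1).T @ (X1 - mu1) + (X2 - mu2).T @ (X2 - mu2)) / (n1 + n2 - 2)
    p = S.shape[0]
    beta = np.linalg.solve(S + lam * np.eye(p), delta)          # ridge solve (stand-in)
    beta = np.sign(beta) * np.maximum(np.abs(beta) - lam, 0.0)  # soft-threshold
    return beta, S, delta

def debias(beta, S, delta, lam):
    """One-step debiasing: beta + M (delta - S beta), where M approximates
    the inverse of S (here a ridge inverse, as a stand-in)."""
    p = S.shape[0]
    M = np.linalg.inv(S + lam * np.eye(p))
    return beta + M @ (delta - S @ beta)

def distributed_sparse_lda(splits, lam=0.1, threshold=0.05):
    """splits: list of (X1_k, X2_k) data subsets, one pair per machine."""
    debiased = []
    for X1_k, X2_k in splits:                       # runs locally on each machine
        beta_k, S_k, delta_k = local_sparse_lda(X1_k, X2_k, lam)
        debiased.append(debias(beta_k, S_k, delta_k, lam))
    beta_bar = np.mean(debiased, axis=0)            # aggregate on the master
    # Sparsify the aggregated estimator by hard-thresholding.
    return beta_bar * (np.abs(beta_bar) > threshold)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    p, m, n_per = 50, 4, 200
    beta_true = np.zeros(p); beta_true[:5] = 1.0    # sparse discriminant direction
    splits = [(rng.normal(size=(n_per, p)) + beta_true,   # class 1: shifted mean
               rng.normal(size=(n_per, p)))               # class 2: zero mean
              for _ in range(m)]
    beta_hat = distributed_sparse_lda(splits)
    print("estimated support:", np.nonzero(beta_hat)[0])

Averaging the debiased (rather than raw regularized) local estimators is the key step in such schemes: debiasing removes the first-order regularization bias, which would otherwise not average out, and this is what allows the aggregated estimator to match the centralized statistical rate when the number of machines m is chosen appropriately.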

Cite this Paper


BibTeX
@InProceedings{pmlr-v54-tian17a,
  title     = {{Communication-efficient Distributed Sparse Linear Discriminant Analysis}},
  author    = {Tian, Lu and Gu, Quanquan},
  booktitle = {Proceedings of the 20th International Conference on Artificial Intelligence and Statistics},
  pages     = {1178--1187},
  year      = {2017},
  editor    = {Singh, Aarti and Zhu, Jerry},
  volume    = {54},
  series    = {Proceedings of Machine Learning Research},
  month     = {20--22 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v54/tian17a/tian17a.pdf},
  url       = {https://proceedings.mlr.press/v54/tian17a.html},
  abstract  = {We propose a communication-efficient distributed estimation method for sparse linear discriminant analysis (LDA) in the high dimensional regime. Our method distributes the data of size N into m machines, and estimates a local sparse LDA estimator on each machine using the data subset of size N/m. After the distributed estimation, our method aggregates the debiased local estimators from m machines, and sparsifies the aggregated estimator. We show that the aggregated estimator attains the same statistical rate as the centralized estimation method, as long as the number of machines m is chosen appropriately. Moreover, we prove that our method can attain the model selection consistency under a milder condition than the centralized method. Experiments on both synthetic and real datasets corroborate our theory.}
}
Endnote
%0 Conference Paper
%T Communication-efficient Distributed Sparse Linear Discriminant Analysis
%A Lu Tian
%A Quanquan Gu
%B Proceedings of the 20th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2017
%E Aarti Singh
%E Jerry Zhu
%F pmlr-v54-tian17a
%I PMLR
%P 1178--1187
%U https://proceedings.mlr.press/v54/tian17a.html
%V 54
%X We propose a communication-efficient distributed estimation method for sparse linear discriminant analysis (LDA) in the high dimensional regime. Our method distributes the data of size N into m machines, and estimates a local sparse LDA estimator on each machine using the data subset of size N/m. After the distributed estimation, our method aggregates the debiased local estimators from m machines, and sparsifies the aggregated estimator. We show that the aggregated estimator attains the same statistical rate as the centralized estimation method, as long as the number of machines m is chosen appropriately. Moreover, we prove that our method can attain the model selection consistency under a milder condition than the centralized method. Experiments on both synthetic and real datasets corroborate our theory.
APA
Tian, L. & Gu, Q. (2017). Communication-efficient Distributed Sparse Linear Discriminant Analysis. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 54:1178-1187. Available from https://proceedings.mlr.press/v54/tian17a.html.