Communication-efficient Distributed Sparse Linear Discriminant Analysis

Lu Tian, Quanquan Gu
Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR 54:1178-1187, 2017.

Abstract

We propose a communication-efficient distributed estimation method for sparse linear discriminant analysis (LDA) in the high dimensional regime. Our method distributes the data of size N into m machines, and estimates a local sparse LDA estimator on each machine using the data subset of size N/m. After the distributed estimation, our method aggregates the debiased local estimators from m machines, and sparsifies the aggregated estimator. We show that the aggregated estimator attains the same statistical rate as the centralized estimation method, as long as the number of machines m is chosen appropriately. Moreover, we prove that our method can attain the model selection consistency under a milder condition than the centralized method. Experiments on both synthetic and real datasets corroborate our theory.
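The pipeline described above — split the data across m machines, fit a local sparse estimator on each, debias it, average the debiased estimators, then sparsify the average — can be sketched in a toy setting. The sketch below assumes identity covariance and uses soft-thresholding of the mean difference as a stand-in for the paper's l1-regularized local estimator; `local_sparse_lda` and `debias` are illustrative helpers, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data with a sparse discriminant direction (toy setup).
d, N, m = 50, 4000, 8              # dimension, total samples, number of machines
beta_true = np.zeros(d)
beta_true[:5] = 1.0                # true direction is 5-sparse
X0 = rng.normal(size=(N // 2, d))                 # class 0: mean 0
X1 = rng.normal(size=(N // 2, d)) + beta_true     # class 1: shifted mean

def local_sparse_lda(X0_k, X1_k, lam):
    """Toy sparse LDA on one shard: soft-threshold the mean-difference
    direction (a stand-in for an l1-regularized local estimator)."""
    delta = X1_k.mean(axis=0) - X0_k.mean(axis=0)
    return np.sign(delta) * np.maximum(np.abs(delta) - lam, 0.0)

def debias(beta_k, X0_k, X1_k):
    """Illustrative one-step bias correction: add back the residual of the
    estimating equation delta - beta = 0 (identity covariance assumed).
    Removing the shrinkage bias lets averaging reduce variance."""
    delta = X1_k.mean(axis=0) - X0_k.mean(axis=0)
    return beta_k + (delta - beta_k)

# Split each class across m machines, estimate locally, debias, average.
shards0 = np.array_split(X0, m)
shards1 = np.array_split(X1, m)
debiased = [debias(local_sparse_lda(a, b, lam=0.3), a, b)
            for a, b in zip(shards0, shards1)]
beta_avg = np.mean(debiased, axis=0)

# Final sparsification of the aggregated estimator (hard threshold).
tau = 0.3
beta_hat = np.where(np.abs(beta_avg) > tau, beta_avg, 0.0)
```

In this toy case the debiased local estimates are unbiased for the true direction, so averaging over m machines shrinks their variance, and the final hard threshold restores exact sparsity — the same intuition that underlies the aggregated estimator's statistical rate.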

Cite this Paper


BibTeX
@InProceedings{pmlr-v54-tian17a,
  title = {{Communication-efficient Distributed Sparse Linear Discriminant Analysis}},
  author = {Lu Tian and Quanquan Gu},
  booktitle = {Proceedings of the 20th International Conference on Artificial Intelligence and Statistics},
  pages = {1178--1187},
  year = {2017},
  editor = {Aarti Singh and Jerry Zhu},
  volume = {54},
  series = {Proceedings of Machine Learning Research},
  address = {Fort Lauderdale, FL, USA},
  month = {20--22 Apr},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v54/tian17a/tian17a.pdf},
  url = {http://proceedings.mlr.press/v54/tian17a.html},
  abstract = {We propose a communication-efficient distributed estimation method for sparse linear discriminant analysis (LDA) in the high dimensional regime. Our method distributes the data of size N into m machines, and estimates a local sparse LDA estimator on each machine using the data subset of size N/m. After the distributed estimation, our method aggregates the debiased local estimators from m machines, and sparsifies the aggregated estimator. We show that the aggregated estimator attains the same statistical rate as the centralized estimation method, as long as the number of machines m is chosen appropriately. Moreover, we prove that our method can attain the model selection consistency under a milder condition than the centralized method. Experiments on both synthetic and real datasets corroborate our theory.}
}
Endnote
%0 Conference Paper
%T Communication-efficient Distributed Sparse Linear Discriminant Analysis
%A Lu Tian
%A Quanquan Gu
%B Proceedings of the 20th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2017
%E Aarti Singh
%E Jerry Zhu
%F pmlr-v54-tian17a
%I PMLR
%J Proceedings of Machine Learning Research
%P 1178--1187
%U http://proceedings.mlr.press
%V 54
%W PMLR
%X We propose a communication-efficient distributed estimation method for sparse linear discriminant analysis (LDA) in the high dimensional regime. Our method distributes the data of size N into m machines, and estimates a local sparse LDA estimator on each machine using the data subset of size N/m. After the distributed estimation, our method aggregates the debiased local estimators from m machines, and sparsifies the aggregated estimator. We show that the aggregated estimator attains the same statistical rate as the centralized estimation method, as long as the number of machines m is chosen appropriately. Moreover, we prove that our method can attain the model selection consistency under a milder condition than the centralized method. Experiments on both synthetic and real datasets corroborate our theory.
APA
Tian, L. & Gu, Q. (2017). Communication-efficient Distributed Sparse Linear Discriminant Analysis. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, in PMLR 54:1178-1187.