Protocols for Learning Classifiers on Distributed Data

Hal Daumé III, Jeff Phillips, Avishek Saha, Suresh Venkatasubramanian
Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, PMLR 22:282-290, 2012.

Abstract

We consider the problem of learning classifiers for labeled data that has been distributed across several nodes. Our goal is to find a single classifier, with small approximation error, across all datasets while minimizing the communication between nodes. This setting models real-world communication bottlenecks in the processing of massive distributed datasets. We present several very general sampling-based solutions as well as some two-way protocols which have a provable exponential speed-up over any one-way protocol. We focus on core problems for noiseless data distributed across two or more nodes. The techniques we introduce are reminiscent of active learning, but rather than actively probing labels, nodes actively communicate with each other, each node simultaneously learning the important data from another node.
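As a rough illustration of the sampling-based family, the following Python sketch shows the simplest one-way protocol between two nodes: node B sends a single random sample of its labeled points to node A, which trains a classifier on the union and broadcasts it. The function names, the eps-net-style sample-size heuristic, the perceptron learner, and the toy data are all assumptions chosen for this sketch, not the paper's construction.

import numpy as np

rng = np.random.default_rng(0)

def train_perceptron(X, y, epochs=200):
    # Plain perceptron with a bias term. The paper assumes noiseless
    # (separable) data, so a consistent linear classifier exists and the
    # perceptron drives the training error to zero on such inputs.
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append bias coordinate
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        mistakes = 0
        for xi, yi in zip(Xb, y):
            if yi * (xi @ w) <= 0:
                w += yi * xi
                mistakes += 1
        if mistakes == 0:
            break
    return w

def one_way_protocol(XA, yA, XB, yB, eps):
    # Node B ships one random sample to node A (a single one-way message);
    # A trains on the union. The sample size follows the usual
    # O((d/eps) * log(1/eps)) eps-net heuristic; the constant is
    # illustrative, not the paper's bound.
    d = XB.shape[1]
    m = min(len(XB), int(np.ceil((d / eps) * np.log(1.0 / eps))))
    idx = rng.choice(len(XB), size=m, replace=False)
    X = np.vstack([XA, XB[idx]])
    y = np.concatenate([yA, yB[idx]])
    return train_perceptron(X, y), m            # classifier + points sent

# Toy usage: node A holds only negatives, node B only positives, so
# neither node can learn the separator alone and communication is forced.
XA = rng.uniform([-3.0, -3.0], [-0.2, 3.0], size=(500, 2))
XB = rng.uniform([0.2, -3.0], [3.0, 3.0], size=(500, 2))
yA, yB = -np.ones(500), np.ones(500)
w, sent = one_way_protocol(XA, yA, XB, yB, eps=0.05)
X_all = np.vstack([XA, XB])
y_all = np.concatenate([yA, yB])
preds = np.sign(np.hstack([X_all, np.ones((1000, 1))]) @ w)
print(f"sent {sent} of {len(XB)} points, joint error = {np.mean(preds != y_all):.3f}")

The two-way protocols in the paper replace this bulk sample with an interactive exchange; as the abstract notes, the interaction buys a provable exponential reduction in communication over any one-way protocol, with each node steering the other toward the few points it actually needs to see.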

Cite this Paper


BibTeX
@InProceedings{pmlr-v22-daume12,
  title     = {Protocols for Learning Classifiers on Distributed Data},
  author    = {Hal Daumé III and Jeff Phillips and Avishek Saha and Suresh Venkatasubramanian},
  booktitle = {Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics},
  pages     = {282--290},
  year      = {2012},
  editor    = {Neil D. Lawrence and Mark Girolami},
  volume    = {22},
  series    = {Proceedings of Machine Learning Research},
  address   = {La Palma, Canary Islands},
  month     = {21--23 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v22/daume12/daume12.pdf},
  url       = {http://proceedings.mlr.press/v22/daume12.html},
  abstract  = {We consider the problem of learning classifiers for labeled data that has been distributed across several nodes. Our goal is to find a single classifier, with small approximation error, across all datasets while minimizing the communication between nodes. This setting models real-world communication bottlenecks in the processing of massive distributed datasets. We present several very general sampling-based solutions as well as some two-way protocols which have a provable exponential speed-up over any one-way protocol. We focus on core problems for noiseless data distributed across two or more nodes. The techniques we introduce are reminiscent of active learning, but rather than actively probing labels, nodes actively communicate with each other, each node simultaneously learning the important data from another node.}
}
Endnote
%0 Conference Paper
%T Protocols for Learning Classifiers on Distributed Data
%A Hal Daumé III
%A Jeff Phillips
%A Avishek Saha
%A Suresh Venkatasubramanian
%B Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2012
%E Neil D. Lawrence
%E Mark Girolami
%F pmlr-v22-daume12
%I PMLR
%J Proceedings of Machine Learning Research
%P 282--290
%U http://proceedings.mlr.press
%V 22
%W PMLR
%X We consider the problem of learning classifiers for labeled data that has been distributed across several nodes. Our goal is to find a single classifier, with small approximation error, across all datasets while minimizing the communication between nodes. This setting models real-world communication bottlenecks in the processing of massive distributed datasets. We present several very general sampling-based solutions as well as some two-way protocols which have a provable exponential speed-up over any one-way protocol. We focus on core problems for noiseless data distributed across two or more nodes. The techniques we introduce are reminiscent of active learning, but rather than actively probing labels, nodes actively communicate with each other, each node simultaneously learning the important data from another node.
RIS
TY  - CPAPER
TI  - Protocols for Learning Classifiers on Distributed Data
AU  - Hal Daumé III
AU  - Jeff Phillips
AU  - Avishek Saha
AU  - Suresh Venkatasubramanian
BT  - Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics
PY  - 2012/04/21
DA  - 2012/04/21
ED  - Neil D. Lawrence
ED  - Mark Girolami
ID  - pmlr-v22-daume12
PB  - PMLR
SP  - 282
EP  - 290
DP  - PMLR
L1  - http://proceedings.mlr.press/v22/daume12/daume12.pdf
UR  - http://proceedings.mlr.press/v22/daume12.html
AB  - We consider the problem of learning classifiers for labeled data that has been distributed across several nodes. Our goal is to find a single classifier, with small approximation error, across all datasets while minimizing the communication between nodes. This setting models real-world communication bottlenecks in the processing of massive distributed datasets. We present several very general sampling-based solutions as well as some two-way protocols which have a provable exponential speed-up over any one-way protocol. We focus on core problems for noiseless data distributed across two or more nodes. The techniques we introduce are reminiscent of active learning, but rather than actively probing labels, nodes actively communicate with each other, each node simultaneously learning the important data from another node.
ER  -
APA
Daumé III, H., Phillips, J., Saha, A. & Venkatasubramanian, S. (2012). Protocols for Learning Classifiers on Distributed Data. Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, in PMLR 22:282-290.

Related Material

Download PDF: http://proceedings.mlr.press/v22/daume12/daume12.pdf