Distributed Multi-Task Learning
Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, PMLR 51:751-760, 2016.
Abstract
We consider the problem of distributed multi-task learning, where each machine learns a separate, but related, task. Specifically, each machine learns a linear predictor in high-dimensional space, where all tasks share the same small support. We present a communication-efficient estimator based on the debiased lasso and show that it is comparable with the optimal centralized method.
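To make the setting concrete, below is a minimal sketch of the kind of communication-efficient scheme the abstract describes: each machine fits a lasso to its own task, applies a one-step debiasing correction, and ships only its p-dimensional debiased estimate to a center, which exploits the shared support across tasks. This is an illustrative reading, not the paper's exact estimator; the threshold `tau`, the pseudo-inverse used in place of a node-wise-lasso precision estimate, and the function names are assumptions made for brevity.

```python
import numpy as np
from sklearn.linear_model import Lasso

def debiased_lasso(X, y, lam):
    """Local lasso fit followed by a one-step debiasing correction."""
    n, p = X.shape
    beta = Lasso(alpha=lam, fit_intercept=False).fit(X, y).coef_
    Sigma = X.T @ X / n
    # Approximate inverse covariance; node-wise lasso is the standard choice,
    # a pseudo-inverse is used here only to keep the sketch short.
    Theta = np.linalg.pinv(Sigma)
    return beta + Theta @ X.T @ (y - X @ beta) / n

def distributed_multitask(tasks, lam, tau):
    """Each (X, y) pair lives on one machine; only the p-dimensional
    debiased estimates are communicated to the center."""
    D = np.vstack([debiased_lasso(X, y, lam) for X, y in tasks])  # m x p
    # Group-threshold each coordinate across tasks to use the shared support.
    keep = np.linalg.norm(D, axis=0) > tau
    D[:, ~keep] = 0.0
    return D, np.flatnonzero(keep)

# Toy usage: 4 tasks sharing the same 5-coordinate support in p = 200 dims.
rng = np.random.default_rng(0)
n, p, m, s = 100, 200, 4, 5
support = rng.choice(p, size=s, replace=False)
tasks = []
for _ in range(m):
    beta = np.zeros(p)
    beta[support] = rng.normal(0.0, 1.0, size=s)
    X = rng.normal(size=(n, p))
    y = X @ beta + 0.1 * rng.normal(size=n)
    tasks.append((X, y))

est, recovered = distributed_multitask(tasks, lam=0.05, tau=0.5)
print(sorted(support), sorted(recovered))
```

The communication cost here is one p-dimensional vector per machine, independent of the local sample size, which is the sense in which such an estimator is communication-efficient compared with shipping raw data to a central machine.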