Exploiting Unrelated Tasks in Multi-Task Learning

Bernardino Romera Paredes, Andreas Argyriou, Nadia Berthouze, Massimiliano Pontil
Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, PMLR 22:951-959, 2012.

Abstract

We study the problem of learning a group of principal tasks with the help of a group of auxiliary tasks that are unrelated to the principal ones. In many applications, jointly learning unrelated tasks which use the same input data can be beneficial: prior knowledge about which tasks are unrelated can lead to sparser and more informative representations for each task, essentially screening out idiosyncrasies of the data distribution. We propose a novel method which builds on a prior multi-task methodology by favoring a shared low-dimensional representation within each group of tasks. In addition, we impose a penalty on tasks from different groups which encourages the two representations to be orthogonal. We further discuss a condition which ensures convexity of the optimization problem and argue that it can be solved by alternating minimization. We present experiments on synthetic and real data which indicate that incorporating unrelated tasks can yield significant improvements over standard multi-task learning methods.
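To make the structure of such an objective concrete, the following minimal numpy sketch implements one plausible instantiation under stated assumptions: a squared loss per task group, a trace-norm penalty on each group's weight matrix (W for the principal tasks, V for the auxiliary tasks) to favor a shared low-dimensional representation within each group, a cross-group penalty lam * ||W.T @ V||_F^2 to encourage the two representations to be orthogonal, and proximal-gradient inner steps inside the alternating scheme. The loss, penalty forms, step sizes, and function names here are illustrative choices, not the paper's exact formulation or its convexity condition.

import numpy as np

def prox_trace_norm(M, t):
    # Proximal operator of t * ||.||_tr: soft-threshold the singular values.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

def fit_two_groups(X, Y_main, Y_aux, gamma=1.0, lam=1.0, step=1e-3,
                   n_outer=50, n_inner=100):
    # Alternating minimization for the illustrative objective
    #   ||X @ W - Y_main||_F^2 + ||X @ V - Y_aux||_F^2
    #   + gamma * (||W||_tr + ||V||_tr) + lam * ||W.T @ V||_F^2 .
    # With V fixed the subproblem in W is convex (and vice versa), so each
    # block is driven down by proximal-gradient steps.
    d = X.shape[1]
    W = np.zeros((d, Y_main.shape[1]))   # principal-task weights
    V = np.zeros((d, Y_aux.shape[1]))    # auxiliary-task weights
    for _ in range(n_outer):
        for _ in range(n_inner):         # update W with V fixed
            grad_W = 2 * X.T @ (X @ W - Y_main) + 2 * lam * V @ (V.T @ W)
            W = prox_trace_norm(W - step * grad_W, step * gamma)
        for _ in range(n_inner):         # update V with W fixed
            grad_V = 2 * X.T @ (X @ V - Y_aux) + 2 * lam * W @ (W.T @ V)
            V = prox_trace_norm(V - step * grad_V, step * gamma)
    return W, V

Note that with lam = 0 this reduces to running a standard trace-norm multi-task method independently on each group, while increasing lam pushes the column spaces of the two groups' weight matrices apart.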

Cite this Paper

BibTeX
@InProceedings{pmlr-v22-romera12,
  title     = {Exploiting Unrelated Tasks in Multi-Task Learning},
  author    = {Paredes, Bernardino Romera and Argyriou, Andreas and Berthouze, Nadia and Pontil, Massimiliano},
  booktitle = {Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics},
  pages     = {951--959},
  year      = {2012},
  editor    = {Lawrence, Neil D. and Girolami, Mark},
  volume    = {22},
  series    = {Proceedings of Machine Learning Research},
  address   = {La Palma, Canary Islands},
  month     = {21--23 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v22/romera12/romera12.pdf},
  url       = {https://proceedings.mlr.press/v22/romera12.html},
  abstract  = {We study the problem of learning a group of principal tasks using a group of auxiliary tasks, unrelated to the principal ones. In many applications, joint learning of unrelated tasks which use the same input data can be beneficial. The reason is that prior knowledge about which tasks are unrelated can lead to sparser and more informative representations for each task, essentially screening out idiosyncrasies of the data distribution. We propose a novel method which builds on a prior multitask methodology by favoring a shared low dimensional representation within each group of tasks. In addition, we impose a penalty on tasks from different groups which encourages the two representations to be orthogonal. We further discuss a condition which ensures convexity of the optimization problem and argue that it can be solved by alternating minimization. We present experiments on synthetic and real data, which indicate that incorporating unrelated tasks can improve significantly over standard multi-task learning methods.}
}
Endnote
%0 Conference Paper
%T Exploiting Unrelated Tasks in Multi-Task Learning
%A Bernardino Romera Paredes
%A Andreas Argyriou
%A Nadia Berthouze
%A Massimiliano Pontil
%B Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2012
%E Neil D. Lawrence
%E Mark Girolami
%F pmlr-v22-romera12
%I PMLR
%P 951--959
%U https://proceedings.mlr.press/v22/romera12.html
%V 22
%X We study the problem of learning a group of principal tasks using a group of auxiliary tasks, unrelated to the principal ones. In many applications, joint learning of unrelated tasks which use the same input data can be beneficial. The reason is that prior knowledge about which tasks are unrelated can lead to sparser and more informative representations for each task, essentially screening out idiosyncrasies of the data distribution. We propose a novel method which builds on a prior multitask methodology by favoring a shared low dimensional representation within each group of tasks. In addition, we impose a penalty on tasks from different groups which encourages the two representations to be orthogonal. We further discuss a condition which ensures convexity of the optimization problem and argue that it can be solved by alternating minimization. We present experiments on synthetic and real data, which indicate that incorporating unrelated tasks can improve significantly over standard multi-task learning methods.
RIS
TY  - CPAPER
TI  - Exploiting Unrelated Tasks in Multi-Task Learning
AU  - Bernardino Romera Paredes
AU  - Andreas Argyriou
AU  - Nadia Berthouze
AU  - Massimiliano Pontil
BT  - Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics
DA  - 2012/04/21
ED  - Neil D. Lawrence
ED  - Mark Girolami
ID  - pmlr-v22-romera12
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 22
SP  - 951
EP  - 959
L1  - http://proceedings.mlr.press/v22/romera12/romera12.pdf
UR  - https://proceedings.mlr.press/v22/romera12.html
AB  - We study the problem of learning a group of principal tasks using a group of auxiliary tasks, unrelated to the principal ones. In many applications, joint learning of unrelated tasks which use the same input data can be beneficial. The reason is that prior knowledge about which tasks are unrelated can lead to sparser and more informative representations for each task, essentially screening out idiosyncrasies of the data distribution. We propose a novel method which builds on a prior multitask methodology by favoring a shared low dimensional representation within each group of tasks. In addition, we impose a penalty on tasks from different groups which encourages the two representations to be orthogonal. We further discuss a condition which ensures convexity of the optimization problem and argue that it can be solved by alternating minimization. We present experiments on synthetic and real data, which indicate that incorporating unrelated tasks can improve significantly over standard multi-task learning methods.
ER  -
APA
Paredes, B.R., Argyriou, A., Berthouze, N. & Pontil, M. (2012). Exploiting Unrelated Tasks in Multi-Task Learning. Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 22:951-959. Available from https://proceedings.mlr.press/v22/romera12.html.
