Multilinear Multitask Learning

Bernardino Romera-Paredes, Hane Aung, Nadia Bianchi-Berthouze, Massimiliano Pontil
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(3):1444-1452, 2013.

Abstract

Many real-world datasets occur in, or can be arranged into, multi-modal structures. With such datasets, the tasks to be learnt can be referenced by multiple indices. Current multitask learning frameworks are not designed to preserve this information. We propose the use of multilinear algebra as a natural way to model such a set of related tasks. We present two learning methods: the first adapts a convex relaxation method used in the context of tensor completion; the second is based on the Tucker decomposition and on alternating minimization. Experiments on synthetic and real data indicate that the multilinear approaches provide a significant improvement over other multitask learning methods. Overall, our second approach yields the best performance on all datasets.
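
To make the tensor view concrete, the sketch below (not the authors' code; all shapes and variable names are illustrative assumptions) arranges the per-task parameter vectors into a three-way tensor whose modes are the feature dimension and the two task indices. The first method penalizes the sum of nuclear norms of the tensor's mode-n unfoldings, the convex surrogate borrowed from tensor completion; the second parameterizes the tensor in Tucker form, whose factors would be updated in turn by alternating minimization.

# A minimal sketch of the multilinear MTL parameterization, assuming tasks are
# indexed by two indices; names and shapes are illustrative, not the paper's code.
import numpy as np

def unfold(W, mode):
    # Mode-n unfolding: bring `mode` to the front and flatten the remaining modes.
    return np.moveaxis(W, mode, 0).reshape(W.shape[mode], -1)

def overlapped_trace_norm(W):
    # Convex surrogate from tensor completion: sum of nuclear norms of all unfoldings.
    return sum(np.linalg.norm(unfold(W, m), ord='nuc') for m in range(W.ndim))

def tucker_reconstruct(G, U1, U2, U3):
    # Tucker form W = G x_1 U1 x_2 U2 x_3 U3, the parameterization that the
    # alternating-minimization approach would optimize one factor at a time.
    return np.einsum('abc,ia,jb,kc->ijk', G, U1, U2, U3)

# d features, T1 x T2 tasks; W[:, t1, t2] is the weight vector of task (t1, t2).
d, T1, T2, r = 5, 3, 4, 2
rng = np.random.default_rng(0)
W = tucker_reconstruct(rng.standard_normal((r, r, r)),
                       rng.standard_normal((d, r)),
                       rng.standard_normal((T1, r)),
                       rng.standard_normal((T2, r)))
print(W.shape, overlapped_trace_norm(W))

A small shared multilinear rank r encodes the modeling assumption that tasks sharing an index also share structure, which is what lets information transfer across the grid of tasks.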

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-romera-paredes13,
  title     = {Multilinear Multitask Learning},
  author    = {Romera-Paredes, Bernardino and Aung, Hane and Bianchi-Berthouze, Nadia and Pontil, Massimiliano},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages     = {1444--1452},
  year      = {2013},
  editor    = {Dasgupta, Sanjoy and McAllester, David},
  volume    = {28},
  number    = {3},
  series    = {Proceedings of Machine Learning Research},
  address   = {Atlanta, Georgia, USA},
  month     = {17--19 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v28/romera-paredes13.pdf},
  url       = {https://proceedings.mlr.press/v28/romera-paredes13.html},
  abstract  = {Many real world datasets occur or can be arranged into multi-modal structures. With such datasets, the tasks to be learnt can be referenced by multiple indices. Current multitask learning frameworks are not designed to account for the preservation of this information. We propose the use of multilinear algebra as a natural way to model such a set of related tasks. We present two learning methods; one is an adapted convex relaxation method used in the context of tensor completion. The second method is based on the Tucker decomposition and on alternating minimization. Experiments on synthetic and real data indicate that the multilinear approaches provide a significant improvement over other multitask learning methods. Overall our second approach yields the best performance in all datasets.}
}
Endnote
%0 Conference Paper
%T Multilinear Multitask Learning
%A Bernardino Romera-Paredes
%A Hane Aung
%A Nadia Bianchi-Berthouze
%A Massimiliano Pontil
%B Proceedings of the 30th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Sanjoy Dasgupta
%E David McAllester
%F pmlr-v28-romera-paredes13
%I PMLR
%P 1444--1452
%U https://proceedings.mlr.press/v28/romera-paredes13.html
%V 28
%N 3
%X Many real world datasets occur or can be arranged into multi-modal structures. With such datasets, the tasks to be learnt can be referenced by multiple indices. Current multitask learning frameworks are not designed to account for the preservation of this information. We propose the use of multilinear algebra as a natural way to model such a set of related tasks. We present two learning methods; one is an adapted convex relaxation method used in the context of tensor completion. The second method is based on the Tucker decomposition and on alternating minimization. Experiments on synthetic and real data indicate that the multilinear approaches provide a significant improvement over other multitask learning methods. Overall our second approach yields the best performance in all datasets.
RIS
TY - CPAPER
TI - Multilinear Multitask Learning
AU - Bernardino Romera-Paredes
AU - Hane Aung
AU - Nadia Bianchi-Berthouze
AU - Massimiliano Pontil
BT - Proceedings of the 30th International Conference on Machine Learning
DA - 2013/05/26
ED - Sanjoy Dasgupta
ED - David McAllester
ID - pmlr-v28-romera-paredes13
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 28
IS - 3
SP - 1444
EP - 1452
L1 - http://proceedings.mlr.press/v28/romera-paredes13.pdf
UR - https://proceedings.mlr.press/v28/romera-paredes13.html
AB - Many real world datasets occur or can be arranged into multi-modal structures. With such datasets, the tasks to be learnt can be referenced by multiple indices. Current multitask learning frameworks are not designed to account for the preservation of this information. We propose the use of multilinear algebra as a natural way to model such a set of related tasks. We present two learning methods; one is an adapted convex relaxation method used in the context of tensor completion. The second method is based on the Tucker decomposition and on alternating minimization. Experiments on synthetic and real data indicate that the multilinear approaches provide a significant improvement over other multitask learning methods. Overall our second approach yields the best performance in all datasets.
ER -
APA
Romera-Paredes, B., Aung, H., Bianchi-Berthouze, N. & Pontil, M. (2013). Multilinear Multitask Learning. Proceedings of the 30th International Conference on Machine Learning, in Proceedings of Machine Learning Research 28(3):1444-1452. Available from https://proceedings.mlr.press/v28/romera-paredes13.html.