The Multi-Task Learning View of Multimodal Data

Hachem Kadri, Stephane Ayache, Cécile Capponi, Sokol Koço, François-Xavier Dupé, Emilie Morvant
Proceedings of the 5th Asian Conference on Machine Learning, PMLR 29:261-276, 2013.

Abstract

We study the problem of learning from multiple views using kernel methods in a supervised setting. We approach this problem from a multi-task learning point of view and illustrate how to capture the interesting multimodal structure of the data using multi-task kernels. Our analysis shows that the multi-task perspective offers the flexibility to design more efficient multiple-source learning algorithms, and hence the ability to exploit multiple descriptions of the data. In particular, we formulate the multimodal learning framework using vector-valued reproducing kernel Hilbert spaces, and we derive specific multi-task kernels that can operate over multiple modalities. Finally, we analyze the vector-valued regularized least squares algorithm in this context, and demonstrate its potential in a series of experiments with a real-world multimodal data set.
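
To make the vector-valued formulation concrete, below is a minimal sketch of vector-valued regularized least squares with a separable multi-task kernel K(x, x') = k(x, x')A, where k is a scalar kernel and A is a positive semi-definite matrix coupling the tasks (here, the views). With this kernel, the block Gram matrix is the Kronecker product G = k ⊗ A and the representer coefficients solve (G + λnI)c = y. Note this is standard multi-task kernel machinery, not the specific multimodal kernels derived in the paper; the Gaussian scalar kernel, the function names, and the toy coupling matrix A are illustrative assumptions.

    import numpy as np

    def gaussian_gram(X1, X2, gamma=1.0):
        # Scalar Gaussian kernel matrix k(x, x') between two sets of inputs.
        sq1 = np.sum(X1 ** 2, axis=1)[:, None]
        sq2 = np.sum(X2 ** 2, axis=1)[None, :]
        return np.exp(-gamma * (sq1 + sq2 - 2.0 * X1 @ X2.T))

    def vv_rls_fit(X, Y, A, lam=0.1, gamma=1.0):
        # Vector-valued RLS with the separable kernel K(x, x') = k(x, x') A.
        # The block Gram matrix is the Kronecker product G = k (x) A, and the
        # representer coefficients solve (G + lam * n * I) c = y.
        n, T = Y.shape
        G = np.kron(gaussian_gram(X, X, gamma), A)    # (n*T, n*T)
        c = np.linalg.solve(G + lam * n * np.eye(n * T), Y.reshape(-1))
        return c.reshape(n, T)                        # one coefficient c_i per example

    def vv_rls_predict(X_train, X_new, C, A, gamma=1.0):
        # f(x) = sum_i K(x, x_i) c_i = sum_i k(x, x_i) (A c_i)
        k = gaussian_gram(X_new, X_train, gamma)
        return k @ C @ A.T

    # Toy usage: T = 2 views treated as tasks over a shared input
    # representation; A encodes how strongly the views share information.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 5))
    Y = np.column_stack([np.sin(X[:, 0]),
                         np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)])
    A = np.array([[1.0, 0.5], [0.5, 1.0]])
    C = vv_rls_fit(X, Y, A)
    preds = vv_rls_predict(X, X[:5], C, A)

In this sketch each example has a single joint input representation and one output component per view; the paper's operator-valued kernels instead act across the modalities themselves, in the manner derived in the full text.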

Cite this Paper


BibTeX
@InProceedings{pmlr-v29-Kadri13,
  title     = {The Multi-Task Learning View of Multimodal Data},
  author    = {Kadri, Hachem and Ayache, Stephane and Capponi, Cécile and Koço, Sokol and Dupé, François-Xavier and Morvant, Emilie},
  booktitle = {Proceedings of the 5th Asian Conference on Machine Learning},
  pages     = {261--276},
  year      = {2013},
  editor    = {Ong, Cheng Soon and Ho, Tu Bao},
  volume    = {29},
  series    = {Proceedings of Machine Learning Research},
  address   = {Australian National University, Canberra, Australia},
  month     = {13--15 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v29/Kadri13.pdf},
  url       = {https://proceedings.mlr.press/v29/Kadri13.html},
  abstract  = {We study the problem of learning from multiple views using kernel methods in a supervised setting. We approach this problem from a multi-task learning point of view and illustrate how to capture the interesting multimodal structure of the data using multi-task kernels. Our analysis shows that the multi-task perspective offers the flexibility to design more efficient multiple-source learning algorithms, and hence the ability to exploit multiple descriptions of the data. In particular, we formulate the multimodal learning framework using vector-valued reproducing kernel Hilbert spaces, and we derive specific multi-task kernels that can operate over multiple modalities. Finally, we analyze the vector-valued regularized least squares algorithm in this context, and demonstrate its potential in a series of experiments with a real-world multimodal data set.}
}
Endnote
%0 Conference Paper
%T The Multi-Task Learning View of Multimodal Data
%A Hachem Kadri
%A Stephane Ayache
%A Cécile Capponi
%A Sokol Koço
%A François-Xavier Dupé
%A Emilie Morvant
%B Proceedings of the 5th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Cheng Soon Ong
%E Tu Bao Ho
%F pmlr-v29-Kadri13
%I PMLR
%P 261--276
%U https://proceedings.mlr.press/v29/Kadri13.html
%V 29
%X We study the problem of learning from multiple views using kernel methods in a supervised setting. We approach this problem from a multi-task learning point of view and illustrate how to capture the interesting multimodal structure of the data using multi-task kernels. Our analysis shows that the multi-task perspective offers the flexibility to design more efficient multiple-source learning algorithms, and hence the ability to exploit multiple descriptions of the data. In particular, we formulate the multimodal learning framework using vector-valued reproducing kernel Hilbert spaces, and we derive specific multi-task kernels that can operate over multiple modalities. Finally, we analyze the vector-valued regularized least squares algorithm in this context, and demonstrate its potential in a series of experiments with a real-world multimodal data set.
RIS
TY - CPAPER
TI - The Multi-Task Learning View of Multimodal Data
AU - Hachem Kadri
AU - Stephane Ayache
AU - Cécile Capponi
AU - Sokol Koço
AU - François-Xavier Dupé
AU - Emilie Morvant
BT - Proceedings of the 5th Asian Conference on Machine Learning
DA - 2013/10/21
ED - Cheng Soon Ong
ED - Tu Bao Ho
ID - pmlr-v29-Kadri13
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 29
SP - 261
EP - 276
L1 - http://proceedings.mlr.press/v29/Kadri13.pdf
UR - https://proceedings.mlr.press/v29/Kadri13.html
AB - We study the problem of learning from multiple views using kernel methods in a supervised setting. We approach this problem from a multi-task learning point of view and illustrate how to capture the interesting multimodal structure of the data using multi-task kernels. Our analysis shows that the multi-task perspective offers the flexibility to design more efficient multiple-source learning algorithms, and hence the ability to exploit multiple descriptions of the data. In particular, we formulate the multimodal learning framework using vector-valued reproducing kernel Hilbert spaces, and we derive specific multi-task kernels that can operate over multiple modalities. Finally, we analyze the vector-valued regularized least squares algorithm in this context, and demonstrate its potential in a series of experiments with a real-world multimodal data set.
ER -
APA
Kadri, H., Ayache, S., Capponi, C., Koço, S., Dupé, F.-X. & Morvant, E. (2013). The Multi-Task Learning View of Multimodal Data. Proceedings of the 5th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 29:261-276. Available from https://proceedings.mlr.press/v29/Kadri13.html.

Related Material

Download PDF: http://proceedings.mlr.press/v29/Kadri13.pdf