Multi-Task Learning with Gaussian Matrix Generalized Inverse Gaussian Model

Ming Yang, Yingming Li, Zhongfei Zhang
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(3):423-431, 2013.

Abstract

In this paper, we study the multi-task learning problem from a new perspective that considers the structure of the residual error matrix and the low-rank approximation to the task covariance matrix simultaneously. In particular, we first introduce the Matrix Generalized Inverse Gaussian (MGIG) prior and define a Gaussian Matrix Generalized Inverse Gaussian (GMGIG) model for the low-rank approximation to the task covariance matrix. By combining the GMGIG model with the residual error structure assumption, we propose the GMGIG regression model for multi-task learning. To make the computation tractable, we use variational inference together with sampling techniques; in particular, we propose two sampling strategies for computing the statistics of the MGIG distribution. Experiments show that this model outperforms peer methods in regression and prediction.
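
As background, the MGIG prior placed on the task covariance matrix is, under the parameterization common in the matrix-GIG literature (the paper's exact convention may differ), the distribution on symmetric positive definite p x p matrices with density

    p(X) \propto |X|^{\nu - (p+1)/2} \exp\!\left( -\tfrac{1}{2}\,\mathrm{tr}\!\left( A X^{-1} + B X \right) \right), \qquad X \succ 0,

where \nu is a scalar index and A, B are positive (semi)definite parameter matrices; the Wishart (A = 0) and inverse Wishart (B = 0) arise as boundary cases, which is what makes this prior a natural choice for covariance modeling.

The paper's two sampling strategies for the MGIG statistics are not spelled out in this abstract. As an illustration only, here is a minimal Python sketch of one generic approach, importance sampling with a Wishart proposal: under the density above, a Wishart(2*nu, B^{-1}) proposal cancels every factor of the target except exp(-tr(A X^{-1})/2), which becomes the importance weight. All names here (mgig_mean_importance, nu, A, B) are hypothetical and not taken from the paper.

    import numpy as np
    from scipy.stats import wishart

    def mgig_mean_importance(nu, A, B, n_samples=20000, seed=0):
        # Estimate E[X] for X ~ MGIG(nu, A, B) under the assumed density
        #   p(X) \propto |X|^{nu-(p+1)/2} exp(-tr(A X^{-1} + B X)/2).
        # Proposal: Wishart(2*nu, B^{-1}), which matches every factor of
        # the target except exp(-tr(A X^{-1})/2), so that term is the
        # importance weight. Requires 2*nu > p - 1 and B positive definite.
        proposal = wishart(df=2 * nu, scale=np.linalg.inv(B))
        Xs = proposal.rvs(size=n_samples, random_state=seed)
        # log-weights: -tr(A X^{-1})/2, computed via a linear solve
        logw = np.array([-0.5 * np.trace(np.linalg.solve(X, A)) for X in Xs])
        w = np.exp(logw - logw.max())      # subtract max for numerical stability
        w /= w.sum()
        return np.einsum('i,ijk->jk', w, Xs)  # self-normalized weighted mean

    # Hypothetical usage with small positive definite parameters:
    # A = np.eye(3); B = 2.0 * np.eye(3)
    # print(mgig_mean_importance(nu=4.0, A=A, B=B))

Note that the self-normalized weights degenerate when A is far from zero, so this sketch is a generic baseline, not a substitute for the paper's dedicated sampling strategies.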

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-yang13d,
  title     = {Multi-Task Learning with Gaussian Matrix Generalized Inverse Gaussian Model},
  author    = {Yang, Ming and Li, Yingming and Zhang, Zhongfei},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages     = {423--431},
  year      = {2013},
  editor    = {Dasgupta, Sanjoy and McAllester, David},
  volume    = {28},
  number    = {3},
  series    = {Proceedings of Machine Learning Research},
  address   = {Atlanta, Georgia, USA},
  month     = {17--19 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v28/yang13d.pdf},
  url       = {https://proceedings.mlr.press/v28/yang13d.html},
  abstract  = {In this paper, we study the multi-task learning problem with a new perspective of considering the structure of the residue error matrix and the low-rank approximation to the task covariance matrix simultaneously. In particular, we first introduce the Matrix Generalized Inverse Gaussian (MGIG) prior and define a Gaussian Matrix Generalized Inverse Gaussian (GMGIG) model for low-rank approximation to the task covariance matrix. Through combining the GMGIG model with the residual error structure assumption, we propose the GMGIG regression model for multi-task learning. To make the computation tractable, we simultaneously use variational inference and sampling techniques. In particular, we propose two sampling strategies for computing the statistics of the MGIG distribution. Experiments show that this model is superior to the peer methods in regression and prediction.}
}
Endnote
%0 Conference Paper
%T Multi-Task Learning with Gaussian Matrix Generalized Inverse Gaussian Model
%A Ming Yang
%A Yingming Li
%A Zhongfei Zhang
%B Proceedings of the 30th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Sanjoy Dasgupta
%E David McAllester
%F pmlr-v28-yang13d
%I PMLR
%P 423--431
%U https://proceedings.mlr.press/v28/yang13d.html
%V 28
%N 3
%X In this paper, we study the multi-task learning problem with a new perspective of considering the structure of the residue error matrix and the low-rank approximation to the task covariance matrix simultaneously. In particular, we first introduce the Matrix Generalized Inverse Gaussian (MGIG) prior and define a Gaussian Matrix Generalized Inverse Gaussian (GMGIG) model for low-rank approximation to the task covariance matrix. Through combining the GMGIG model with the residual error structure assumption, we propose the GMGIG regression model for multi-task learning. To make the computation tractable, we simultaneously use variational inference and sampling techniques. In particular, we propose two sampling strategies for computing the statistics of the MGIG distribution. Experiments show that this model is superior to the peer methods in regression and prediction.
RIS
TY  - CPAPER
TI  - Multi-Task Learning with Gaussian Matrix Generalized Inverse Gaussian Model
AU  - Ming Yang
AU  - Yingming Li
AU  - Zhongfei Zhang
BT  - Proceedings of the 30th International Conference on Machine Learning
DA  - 2013/05/26
ED  - Sanjoy Dasgupta
ED  - David McAllester
ID  - pmlr-v28-yang13d
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 28
IS  - 3
SP  - 423
EP  - 431
L1  - http://proceedings.mlr.press/v28/yang13d.pdf
UR  - https://proceedings.mlr.press/v28/yang13d.html
AB  - In this paper, we study the multi-task learning problem with a new perspective of considering the structure of the residue error matrix and the low-rank approximation to the task covariance matrix simultaneously. In particular, we first introduce the Matrix Generalized Inverse Gaussian (MGIG) prior and define a Gaussian Matrix Generalized Inverse Gaussian (GMGIG) model for low-rank approximation to the task covariance matrix. Through combining the GMGIG model with the residual error structure assumption, we propose the GMGIG regression model for multi-task learning. To make the computation tractable, we simultaneously use variational inference and sampling techniques. In particular, we propose two sampling strategies for computing the statistics of the MGIG distribution. Experiments show that this model is superior to the peer methods in regression and prediction.
ER  -
APA
Yang, M., Li, Y. & Zhang, Z. (2013). Multi-Task Learning with Gaussian Matrix Generalized Inverse Gaussian Model. Proceedings of the 30th International Conference on Machine Learning, in Proceedings of Machine Learning Research 28(3):423-431. Available from https://proceedings.mlr.press/v28/yang13d.html.