Proceedings of the 30th International Conference on Machine Learning, PMLR 28(3):423-431, 2013.
Abstract
In this paper, we study the multi-task learning problem from a new perspective that considers the structure of the residual error matrix and the low-rank approximation to the task covariance matrix simultaneously. In particular, we first introduce the Matrix Generalized Inverse Gaussian (MGIG) prior and define a Gaussian Matrix Generalized Inverse Gaussian (GMGIG) model for the low-rank approximation to the task covariance matrix. By combining the GMGIG model with the residual error structure assumption, we propose the GMGIG regression model for multi-task learning. To make the computation tractable, we combine variational inference with sampling techniques; specifically, we propose two sampling strategies for computing the statistics of the MGIG distribution. Experiments show that this model is superior to peer methods in regression and prediction.
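For readers unfamiliar with the MGIG prior, it is a distribution over symmetric positive definite p x p matrices; in one common parameterization its density is proportional to |X|^(nu-(p+1)/2) exp(-tr(A X^{-1})/2 - tr(B X)/2) for positive definite parameter matrices A and B. The paper's own two sampling strategies for the MGIG statistics are not reproduced here; the sketch below instead estimates the mean E[X] by generic self-normalized importance sampling under a Wishart proposal. The function names, proposal choice, and parameter settings are illustrative assumptions, not the authors' method.

import numpy as np
from scipy.stats import wishart

def mgig_log_kernel(X, nu, A, B):
    # Unnormalized log-density of the MGIG in one common parameterization:
    # (nu - (p+1)/2) * log|X| - tr(A X^{-1})/2 - tr(B X)/2, X positive definite.
    p = X.shape[0]
    _, logdet = np.linalg.slogdet(X)
    return ((nu - (p + 1) / 2.0) * logdet
            - 0.5 * np.trace(A @ np.linalg.inv(X))
            - 0.5 * np.trace(B @ X))

def mgig_mean_importance(nu, A, B, n_samples=20000, seed=0):
    # Estimate E[X] under the MGIG by self-normalized importance sampling.
    # Proposal: Wishart(2*nu, B^{-1}), whose density is proportional to
    # |X|^(nu-(p+1)/2) exp(-tr(B X)/2), so the importance weights reduce to
    # the bounded factor exp(-tr(A X^{-1})/2). Requires nu > (p-1)/2.
    rng = np.random.default_rng(seed)
    proposal = wishart(df=2.0 * nu, scale=np.linalg.inv(B))
    Xs = proposal.rvs(size=n_samples, random_state=rng)
    log_w = np.array([mgig_log_kernel(X, nu, A, B) - proposal.logpdf(X)
                      for X in Xs])
    w = np.exp(log_w - log_w.max())   # subtract max for numerical stability
    w /= w.sum()
    return np.einsum('n,nij->ij', w, Xs)  # weighted average of the samples

if __name__ == "__main__":
    p = 3
    A, B = np.eye(p), 2.0 * np.eye(p)
    print(mgig_mean_importance(nu=4.0, A=A, B=B))

With this proposal the weights are bounded, which keeps the estimator's variance finite; other proposal families (e.g., an inverse-Wishart matched to the A term) would be equally plausible starting points.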
@InProceedings{pmlr-v28-yang13d,
title = {Multi-Task Learning with Gaussian Matrix Generalized Inverse Gaussian Model},
author = {Ming Yang and Yingming Li and Zhongfei Zhang},
booktitle = {Proceedings of the 30th International Conference on Machine Learning},
pages = {423--431},
year = {2013},
editor = {Sanjoy Dasgupta and David McAllester},
volume = {28},
number = {3},
series = {Proceedings of Machine Learning Research},
address = {Atlanta, Georgia, USA},
month = {17--19 Jun},
publisher = {PMLR},
pdf = {http://proceedings.mlr.press/v28/yang13d.pdf},
url = {http://proceedings.mlr.press/v28/yang13d.html},
abstract = {In this paper, we study the multi-task learning problem from a new perspective that considers the structure of the residual error matrix and the low-rank approximation to the task covariance matrix simultaneously. In particular, we first introduce the Matrix Generalized Inverse Gaussian (MGIG) prior and define a Gaussian Matrix Generalized Inverse Gaussian (GMGIG) model for the low-rank approximation to the task covariance matrix. By combining the GMGIG model with the residual error structure assumption, we propose the GMGIG regression model for multi-task learning. To make the computation tractable, we combine variational inference with sampling techniques; specifically, we propose two sampling strategies for computing the statistics of the MGIG distribution. Experiments show that this model is superior to peer methods in regression and prediction.}
}
Yang, M., Li, Y. & Zhang, Z. (2013). Multi-Task Learning with Gaussian Matrix Generalized Inverse Gaussian Model. Proceedings of the 30th International Conference on Machine Learning, in PMLR 28(3):423-431.