Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, PMLR 22:1407-1415, 2012.
Abstract
Given limited training samples, learning to classify multiple labels is challenging. Problem decomposition is widely used in this case, where the original problem is decomposed into a set of easier-to-learn subproblems, and predictions from subproblems are combined to make the final decision. In this paper we show the connection between composite likelihoods and many multi-label decomposition methods, e.g., one-vs-all, one-vs-one, calibrated label ranking, probabilistic classifier chain. This connection holds promise for improving problem decomposition in both the choice of subproblems and the combination of subproblem decisions. As an attempt to exploit this connection, we design a composite marginal method that improves pairwise decomposition. Pairwise label comparisons, which seem to be a natural choice for subproblems, are replaced by bivariate label densities, which are more informative and natural components in a composite likelihood. For combining subproblem decisions, we propose a new mean-field approximation that minimizes the notion of composite divergence and is potentially more robust to inaccurate estimations in subproblems. Empirical studies on five data sets show that, given limited training samples, the proposed method outperforms many alternatives.
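To make the decomposition idea concrete, here is a minimal, self-contained sketch of one-vs-all (binary relevance) decomposition, one of the schemes the abstract connects to composite likelihoods. This is illustrative only, not the paper's composite marginal method: the `fit_threshold` learner and all names are hypothetical stand-ins for a real base classifier.

```python
# Illustrative sketch (not the paper's method): one-vs-all decomposition
# for multi-label classification. Each label becomes one binary
# subproblem; subproblem decisions are combined independently.

def train_one_vs_all(X, Y, fit_binary):
    """Y is a list of label-set rows, e.g. [{0, 2}, {1}, ...].
    fit_binary(X, y) returns a predict function for one subproblem."""
    n_labels = max(max(row, default=-1) for row in Y) + 1
    models = []
    for label in range(n_labels):
        y = [1 if label in row else 0 for row in Y]  # binarize this label
        models.append(fit_binary(X, y))
    return models

def predict_one_vs_all(models, x):
    # Combine subproblem decisions: a label is predicted iff its
    # dedicated binary classifier fires.
    return {k for k, m in enumerate(models) if m(x) == 1}

# Toy base learner (an assumption for illustration): classify by which
# class's mean feature value the point is closer to.
def fit_threshold(X, y):
    pos = [sum(x) / len(x) for x, t in zip(X, y) if t == 1]
    neg = [sum(x) / len(x) for x, t in zip(X, y) if t == 0]
    mp = sum(pos) / len(pos) if pos else 0.0
    mn = sum(neg) / len(neg) if neg else 1.0
    return lambda x: 1 if abs(sum(x) / len(x) - mp) < abs(sum(x) / len(x) - mn) else 0

X = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]]
Y = [{0}, {1}, {0}, {1}]
models = train_one_vs_all(X, Y, fit_threshold)
print(predict_one_vs_all(models, [0.85, 0.9]))  # → {1}
```

The paper's contribution replaces such independent per-label (or pairwise-comparison) subproblems with bivariate label densities combined via a composite likelihood, which this toy example does not implement.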
@InProceedings{pmlr-v22-zhang12b,
title = {A Composite Likelihood View for Multi-Label Classification},
author = {Yi Zhang and Jeff Schneider},
booktitle = {Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics},
pages = {1407--1415},
year = {2012},
editor = {Neil D. Lawrence and Mark Girolami},
volume = {22},
series = {Proceedings of Machine Learning Research},
address = {La Palma, Canary Islands},
month = {21--23 Apr},
publisher = {PMLR},
pdf = {http://proceedings.mlr.press/v22/zhang12b/zhang12b.pdf},
url = {http://proceedings.mlr.press/v22/zhang12b.html},
abstract = {Given limited training samples, learning to classify multiple labels is challenging. Problem decomposition is widely used in this case, where the original problem is decomposed into a set of easier-to-learn subproblems, and predictions from subproblems are combined to make the final decision. In this paper we show the connection between composite likelihoods and many multi-label decomposition methods, e.g., one-vs-all, one-vs-one, calibrated label ranking, probabilistic classifier chain. This connection holds promise for improving problem decomposition in both the choice of subproblems and the combination of subproblem decisions. As an attempt to exploit this connection, we design a composite marginal method that improves pairwise decomposition. Pairwise label comparisons, which seem to be a natural choice for subproblems, are replaced by bivariate label densities, which are more informative and natural components in a composite likelihood. For combining subproblem decisions, we propose a new mean-field approximation that minimizes the notion of composite divergence and is potentially more robust to inaccurate estimations in subproblems. Empirical studies on five data sets show that, given limited training samples, the proposed method outperforms many alternatives.}
}
Zhang, Y. & Schneider, J. (2012). A Composite Likelihood View for Multi-Label Classification. Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, in PMLR 22:1407-1415.