Makoto Yamada, Gang Niu, Jun Takagi, Masashi Sugiyama; Proceedings of the Asian Conference on Machine Learning, PMLR 20:247-262, 2011.
Abstract
The purpose of sufficient dimension reduction (SDR) is to find a low-dimensional expression of input features that is sufficient for predicting output values. In this paper, we propose a novel distribution-free SDR method called sufficient component analysis (SCA), which is computationally more efficient than existing methods. In our method, a solution is computed by iteratively performing dependence estimation and maximization: dependence estimation is carried out analytically by the recently proposed least-squares mutual information (LSMI), and dependence maximization is also carried out analytically by utilizing the Epanechnikov kernel. Through large-scale experiments on real-world image classification and audio tagging problems, the proposed method is shown to compare favorably with existing dimension reduction approaches.
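To make the dependence-estimation step concrete, below is a minimal sketch of an LSMI estimator, assuming Gaussian basis functions centered at a random subset of the paired samples and fixed kernel width and ridge parameters (the paper selects such hyperparameters by cross-validation). The function name lsmi and all default values here are illustrative, not the authors' implementation. SCA alternates this estimate with an analytic maximization over the projection matrix, which the Epanechnikov kernel reduces to an eigenvalue problem; that maximization step is omitted from this sketch.

import numpy as np

def lsmi(x, y, sigma=1.0, lam=0.1, n_basis=100, seed=0):
    """Analytic least-squares estimate of squared-loss mutual information.

    x : (n, d_x) array, y : (n, d_y) array; rows are paired samples.
    Returns the plug-in LSMI value 0.5 * h^T alpha - 0.5.
    """
    n = x.shape[0]
    rng = np.random.default_rng(seed)
    b = min(n_basis, n)
    centers = rng.choice(n, size=b, replace=False)

    # Gaussian design matrix: K[i, l] = exp(-||z_i - z_{c_l}||^2 / (2 sigma^2))
    def gauss(z, c):
        d2 = ((z[:, None, :] - z[None, c, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2 * sigma ** 2))

    kx, ky = gauss(x, centers), gauss(y, centers)

    # h_l   = (1/n)   sum_i     k_l(x_i, y_i)
    # H_ll' = (1/n^2) sum_{i,j} k_l(x_i, y_j) k_l'(x_i, y_j)
    h = (kx * ky).mean(axis=0)
    H = (kx.T @ kx) * (ky.T @ ky) / n ** 2

    # Ridge-regularized least-squares fit of the density-ratio model
    # p(x, y) / (p(x) p(y)) ~ alpha^T k(x, y); solved in closed form.
    alpha = np.linalg.solve(H + lam * np.eye(b), h)
    return 0.5 * h @ alpha - 0.5

For independent inputs, e.g. lsmi(np.random.randn(500, 5), np.random.randn(500, 1)), the estimate should be near zero up to estimation error, and it grows with the statistical dependence between x and y; SCA's outer loop seeks the projection of x that maximizes this quantity.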
@InProceedings{pmlr-v20-yamada11,
title = {Computationally Efficient Sufficient Dimension Reduction via Squared-Loss Mutual Information},
author = {Makoto Yamada and Gang Niu and Jun Takagi and Masashi Sugiyama},
booktitle = {Proceedings of the Asian Conference on Machine Learning},
pages = {247--262},
year = {2011},
editor = {Chun-Nan Hsu and Wee Sun Lee},
volume = {20},
series = {Proceedings of Machine Learning Research},
address = {South Garden Hotels and Resorts, Taoyuan, Taiwan},
month = {14--15 Nov},
publisher = {PMLR},
pdf = {http://proceedings.mlr.press/v20/yamada11/yamada11.pdf},
url = {http://proceedings.mlr.press/v20/yamada11.html},
abstract = {The purpose of sufficient dimension reduction (SDR) is to find a low-dimensional expression of input features that is sufficient for predicting output values. In this paper, we propose a novel distribution-free SDR method called sufficient component analysis (SCA), which is computationally more efficient than existing methods. In our method, a solution is computed by iteratively performing dependence estimation and maximization: dependence estimation is carried out analytically by the recently proposed least-squares mutual information (LSMI), and dependence maximization is also carried out analytically by utilizing the Epanechnikov kernel. Through large-scale experiments on real-world image classification and audio tagging problems, the proposed method is shown to compare favorably with existing dimension reduction approaches.}
}
%0 Conference Paper
%T Computationally Efficient Sufficient Dimension Reduction via Squared-Loss Mutual Information
%A Makoto Yamada
%A Gang Niu
%A Jun Takagi
%A Masashi Sugiyama
%B Proceedings of the Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2011
%E Chun-Nan Hsu
%E Wee Sun Lee
%F pmlr-v20-yamada11
%I PMLR
%J Proceedings of Machine Learning Research
%P 247--262
%U http://proceedings.mlr.press/v20/yamada11.html
%V 20
%W PMLR
%X The purpose of sufficient dimension reduction (SDR) is to find a low-dimensional expression of input features that is sufficient for predicting output values. In this paper, we propose a novel distribution-free SDR method called sufficient component analysis (SCA), which is computationally more efficient than existing methods. In our method, a solution is computed by iteratively performing dependence estimation and maximization: dependence estimation is carried out analytically by the recently proposed least-squares mutual information (LSMI), and dependence maximization is also carried out analytically by utilizing the Epanechnikov kernel. Through large-scale experiments on real-world image classification and audio tagging problems, the proposed method is shown to compare favorably with existing dimension reduction approaches.
TY - CPAPER
TI - Computationally Efficient Sufficient Dimension Reduction via Squared-Loss Mutual Information
AU - Makoto Yamada
AU - Gang Niu
AU - Jun Takagi
AU - Masashi Sugiyama
BT - Proceedings of the Asian Conference on Machine Learning
PY - 2011/11/17
DA - 2011/11/17
ED - Chun-Nan Hsu
ED - Wee Sun Lee
ID - pmlr-v20-yamada11
PB - PMLR
SP - 247
DP - PMLR
EP - 262
L1 - http://proceedings.mlr.press/v20/yamada11/yamada11.pdf
UR - http://proceedings.mlr.press/v20/yamada11.html
AB - The purpose of sufficient dimension reduction (SDR) is to find a low-dimensional expression of input features that is sufficient for predicting output values. In this paper, we propose a novel distribution-free SDR method called sufficient component analysis (SCA), which is computationally more efficient than existing methods. In our method, a solution is computed by iteratively performing dependence estimation and maximization: dependence estimation is carried out analytically by the recently proposed least-squares mutual information (LSMI), and dependence maximization is also carried out analytically by utilizing the Epanechnikov kernel. Through large-scale experiments on real-world image classification and audio tagging problems, the proposed method is shown to compare favorably with existing dimension reduction approaches.
ER -
Yamada, M., Niu, G., Takagi, J. & Sugiyama, M. (2011). Computationally Efficient Sufficient Dimension Reduction via Squared-Loss Mutual Information. Proceedings of the Asian Conference on Machine Learning, in PMLR 20:247-262.