Unsupervised dimensionality reduction via gradient-based matrix factorization with two adaptive learning rates

Vladimir Nikulin, Tian-Hsiang Huang
Proceedings of ICML Workshop on Unsupervised and Transfer Learning, PMLR 27:181-194, 2012.

Abstract

The high dimensionality of the data, with the expression of thousands of features measured on a much smaller number of samples, presents challenges that limit the applicability of analytical results. In principle, it would be better to describe the data in terms of a small number of meta-features, derived by matrix factorization, which could reduce noise while still capturing the essential structure of the data. Three novel and mutually related methods are presented in this paper: 1) gradient-based matrix factorization with two adaptive learning rates (one per factor matrix) and their automatic updates; 2) a nonparametric criterion for selecting the number of factors; and 3) a nonnegative version of the gradient-based matrix factorization that, in contrast to existing methods, requires no extra computational cost. We demonstrate the effectiveness of the proposed methods on the supervised classification of gene expression data.
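To make the factorization scheme concrete, the sketch below gives one plausible reading of gradient-based matrix factorization with a separate adaptive learning rate for each factor matrix: the data matrix X is approximated by the product A @ B, each rate grows when its trial update lowers the squared reconstruction error and shrinks otherwise, and an optional projection clamps negative entries to zero for the nonnegative variant. The function name gmf_two_rates, the grow/shrink multipliers, and the accept/reject rule are illustrative assumptions, not the paper's exact update scheme.

import numpy as np

def gmf_two_rates(X, k, n_iter=200, lr_a=1e-3, lr_b=1e-3,
                  grow=1.05, shrink=0.5, nonneg=False, seed=0):
    # Factorize X (p x n) as A @ B, with A (p x k) holding the meta-features
    # and B (k x n) the reduced representation of the samples.
    # Each factor matrix keeps its own learning rate; a rate grows after a
    # successful step and shrinks after an unsuccessful one (assumed rule).
    rng = np.random.default_rng(seed)
    p, n = X.shape
    A = np.abs(0.01 * rng.standard_normal((p, k)))
    B = np.abs(0.01 * rng.standard_normal((k, n)))
    err = np.linalg.norm(X - A @ B) ** 2
    for _ in range(n_iter):
        # gradient step on A under its own rate
        # (E @ B.T is the descent direction; the factor 2 is absorbed into lr_a)
        E = X - A @ B
        A_new = A + lr_a * (E @ B.T)
        if nonneg:
            A_new = np.maximum(A_new, 0.0)  # projection: one element-wise clamp
        err_new = np.linalg.norm(X - A_new @ B) ** 2
        if err_new < err:
            A, err, lr_a = A_new, err_new, lr_a * grow  # accept, speed up
        else:
            lr_a *= shrink                              # reject, slow down
        # gradient step on B under its own rate
        E = X - A @ B
        B_new = B + lr_b * (A.T @ E)
        if nonneg:
            B_new = np.maximum(B_new, 0.0)
        err_new = np.linalg.norm(X - A @ B_new) ** 2
        if err_new < err:
            B, err, lr_b = B_new, err_new, lr_b * grow
        else:
            lr_b *= shrink
    return A, B

# Example: compress a synthetic "expression" matrix of 2000 features
# by 60 samples down to k = 10 meta-features.
X = np.abs(np.random.default_rng(1).standard_normal((2000, 60)))
A, B = gmf_two_rates(X, k=10, nonneg=True)
print(A.shape, B.shape)  # (2000, 10) (10, 60)

Under this reading, the nonnegative variant costs only an element-wise clamp per step, consistent with the abstract's claim of no extra computational cost. The paper's nonparametric criterion for choosing the number of factors k is not reproduced here, since the abstract does not specify it.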

Cite this Paper


BibTeX
@InProceedings{pmlr-v27-nikulin12a,
  title     = {Unsupervised dimensionality reduction via gradient-based matrix factorization with two adaptive learning rates},
  author    = {Nikulin, Vladimir and Huang, Tian-Hsiang},
  booktitle = {Proceedings of ICML Workshop on Unsupervised and Transfer Learning},
  pages     = {181--194},
  year      = {2012},
  editor    = {Guyon, Isabelle and Dror, Gideon and Lemaire, Vincent and Taylor, Graham and Silver, Daniel},
  volume    = {27},
  series    = {Proceedings of Machine Learning Research},
  address   = {Bellevue, Washington, USA},
  month     = {02 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v27/nikulin12a/nikulin12a.pdf},
  url       = {https://proceedings.mlr.press/v27/nikulin12a.html}
}
APA
Nikulin, V. & Huang, T. (2012). Unsupervised dimensionality reduction via gradient-based matrix factorization with two adaptive learning rates. Proceedings of ICML Workshop on Unsupervised and Transfer Learning, in Proceedings of Machine Learning Research 27:181-194. Available from https://proceedings.mlr.press/v27/nikulin12a.html.
