Unsupervised and Transfer Learning Challenge: a Deep Learning Approach

Grégoire Mesnil, Yann Dauphin, Xavier Glorot, Salah Rifai, Yoshua Bengio, Ian Goodfellow, Erick Lavoie, Xavier Muller, Guillaume Desjardins, David Warde-Farley, Pascal Vincent, Aaron Courville, James Bergstra
Proceedings of ICML Workshop on Unsupervised and Transfer Learning, PMLR 27:97-110, 2012.

Abstract

Learning good representations from a large set of unlabeled data is a particularly challenging task. Recent work (see Bengio (2009) for a review) shows that training deep architectures is a good way to extract such representations, by gradually extracting and disentangling higher-level factors of variation that characterize the input distribution. In this paper, we describe the different kinds of layers we trained to learn representations in the setting of the Unsupervised and Transfer Learning Challenge. Our team's strategy, which won the final phase of the challenge, combined and stacked different one-layer unsupervised learning algorithms, adapted to each of the five datasets of the competition. This paper describes that strategy and the particular one-layer learning algorithms that feed a simple linear classifier trained on a tiny number of labeled samples (1 to 64 per class).
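
The abstract outlines a two-stage pipeline: one-layer unsupervised learners are stacked on unlabeled data, and the resulting representation feeds a simple linear classifier trained on very few labels. The following minimal sketch illustrates that pipeline. It assumes a denoising autoencoder as the one-layer learner (one of several layer types the paper explored) and uses synthetic data in place of the challenge datasets; it is an illustration, not the authors' implementation.

# Sketch of the abstract's pipeline: stack one-layer unsupervised learners on
# unlabeled data, then train a simple linear classifier on the learned
# representation using very few labels (1 to 64 per class). A denoising
# autoencoder stands in for the one-layer learner; the data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_dae_layer(X, n_hidden, noise=0.3, lr=0.1, epochs=50):
    """Train one denoising-autoencoder layer; return the encoder (W, b)."""
    n_in = X.shape[1]
    W = rng.normal(scale=0.01, size=(n_in, n_hidden))
    b = np.zeros(n_hidden)
    c = np.zeros(n_in)
    for _ in range(epochs):
        X_noisy = X * (rng.rand(*X.shape) > noise)  # masking noise
        H = sigmoid(X_noisy @ W + b)                # encode
        R = sigmoid(H @ W.T + c)                    # decode with tied weights
        grad_R = (R - X) * R * (1 - R)              # squared-error gradient
        grad_H = (grad_R @ W) * H * (1 - H)
        # W receives gradients from both the encoder and the decoder path.
        W -= lr * (X_noisy.T @ grad_H + grad_R.T @ H) / len(X)
        b -= lr * grad_H.mean(axis=0)
        c -= lr * grad_R.mean(axis=0)
    return W, b

# Unlabeled data (synthetic stand-in for a challenge dataset).
X_unlab = rng.rand(1000, 64)

# Stack two unsupervised layers: each layer is trained on the previous
# layer's representation of the unlabeled set.
layers, H = [], X_unlab
for n_hidden in (32, 16):
    W, b = train_dae_layer(H, n_hidden)
    layers.append((W, b))
    H = sigmoid(H @ W + b)

def represent(X):
    """Map raw inputs through the stacked encoders."""
    for W, b in layers:
        X = sigmoid(X @ W + b)
    return X

# Tiny labeled set: here, 4 examples per class for 2 classes.
X_lab = rng.rand(8, 64)
y_lab = np.array([0, 1] * 4)
clf = LogisticRegression().fit(represent(X_lab), y_lab)
print(clf.predict(represent(rng.rand(3, 64))))

Keeping the classifier linear places the burden on the learned representation rather than on classifier capacity, which matches the challenge setting described in the abstract.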

Cite this Paper


BibTeX
@InProceedings{pmlr-v27-mesnil12a,
  title     = {Unsupervised and Transfer Learning Challenge: a Deep Learning Approach},
  author    = {Mesnil, Grégoire and Dauphin, Yann and Glorot, Xavier and Rifai, Salah and Bengio, Yoshua and Goodfellow, Ian and Lavoie, Erick and Muller, Xavier and Desjardins, Guillaume and Warde-Farley, David and Vincent, Pascal and Courville, Aaron and Bergstra, James},
  booktitle = {Proceedings of ICML Workshop on Unsupervised and Transfer Learning},
  pages     = {97--110},
  year      = {2012},
  editor    = {Guyon, Isabelle and Dror, Gideon and Lemaire, Vincent and Taylor, Graham and Silver, Daniel},
  volume    = {27},
  series    = {Proceedings of Machine Learning Research},
  address   = {Bellevue, Washington, USA},
  month     = {02 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v27/mesnil12a/mesnil12a.pdf},
  url       = {https://proceedings.mlr.press/v27/mesnil12a.html}
}
Endnote
%0 Conference Paper
%T Unsupervised and Transfer Learning Challenge: a Deep Learning Approach
%A Grégoire Mesnil
%A Yann Dauphin
%A Xavier Glorot
%A Salah Rifai
%A Yoshua Bengio
%A Ian Goodfellow
%A Erick Lavoie
%A Xavier Muller
%A Guillaume Desjardins
%A David Warde-Farley
%A Pascal Vincent
%A Aaron Courville
%A James Bergstra
%B Proceedings of ICML Workshop on Unsupervised and Transfer Learning
%C Proceedings of Machine Learning Research
%D 2012
%E Isabelle Guyon
%E Gideon Dror
%E Vincent Lemaire
%E Graham Taylor
%E Daniel Silver
%F pmlr-v27-mesnil12a
%I PMLR
%P 97--110
%U https://proceedings.mlr.press/v27/mesnil12a.html
%V 27
RIS
TY  - CPAPER
TI  - Unsupervised and Transfer Learning Challenge: a Deep Learning Approach
AU  - Grégoire Mesnil
AU  - Yann Dauphin
AU  - Xavier Glorot
AU  - Salah Rifai
AU  - Yoshua Bengio
AU  - Ian Goodfellow
AU  - Erick Lavoie
AU  - Xavier Muller
AU  - Guillaume Desjardins
AU  - David Warde-Farley
AU  - Pascal Vincent
AU  - Aaron Courville
AU  - James Bergstra
BT  - Proceedings of ICML Workshop on Unsupervised and Transfer Learning
DA  - 2012/06/27
ED  - Isabelle Guyon
ED  - Gideon Dror
ED  - Vincent Lemaire
ED  - Graham Taylor
ED  - Daniel Silver
ID  - pmlr-v27-mesnil12a
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 27
SP  - 97
EP  - 110
L1  - http://proceedings.mlr.press/v27/mesnil12a/mesnil12a.pdf
UR  - https://proceedings.mlr.press/v27/mesnil12a.html
ER  -
APA
Mesnil, G., Dauphin, Y., Glorot, X., Rifai, S., Bengio, Y., Goodfellow, I., Lavoie, E., Muller, X., Desjardins, G., Warde-Farley, D., Vincent, P., Courville, A., & Bergstra, J. (2012). Unsupervised and Transfer Learning Challenge: a Deep Learning Approach. Proceedings of ICML Workshop on Unsupervised and Transfer Learning, in Proceedings of Machine Learning Research 27:97-110. Available from https://proceedings.mlr.press/v27/mesnil12a.html.

Related Material

Download PDF: http://proceedings.mlr.press/v27/mesnil12a/mesnil12a.pdf