Unsupervised and Transfer Learning Challenge: a Deep Learning Approach

Grégoire Mesnil, Yann Dauphin, Xavier Glorot, Salah Rifai, Yoshua Bengio, Ian Goodfellow, Erick Lavoie, Xavier Muller, Guillaume Desjardins, David Warde-Farley, Pascal Vincent, Aaron Courville, James Bergstra
Proceedings of ICML Workshop on Unsupervised and Transfer Learning, JMLR Workshop and Conference Proceedings 27:97-110, 2012.

Abstract

Learning good representations from a large set of unlabeled data is a particularly challenging task. Recent work (see Bengio (2009) for a review) shows that training deep architectures is a good way to extract such representations, by extracting and disentangling gradually higher-level factors of variation characterizing the input distribution. In this paper, we describe different kinds of layers we trained for learning representations in the setting of the Unsupervised and Transfer Learning Challenge. The strategy of our team won the final phase of the challenge. It combined and stacked different one-layer unsupervised learning algorithms, adapted to each of the five datasets of the competition. This paper describes that strategy and the particular one-layer learning algorithms feeding a simple linear classifier with a tiny number of labeled training samples (1 to 64 per class).
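
The abstract describes a generic recipe: train one-layer unsupervised feature learners on the large unlabeled set, stack them, and feed the resulting representation to a simple linear classifier fit on the tiny labeled set. The snippet below is a minimal, hypothetical sketch of that recipe, not the authors' actual pipeline (which combined different one-layer algorithms adapted to each of the five datasets). It assumes scikit-learn and NumPy, uses PCA as a stand-in for the one-layer unsupervised learners, logistic regression as the linear classifier, and synthetic data:

    # Minimal sketch of the stacked-unsupervised-layers + linear-classifier
    # recipe from the abstract. NOT the authors' pipeline: PCA stands in for
    # the one-layer unsupervised learners (the paper adapted different
    # algorithms per dataset), and the data here is synthetic.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X_unlabeled = rng.normal(size=(10000, 256))  # large unlabeled set
    X_labeled = rng.normal(size=(64, 256))       # tiny labeled set
    y_labeled = rng.integers(0, 4, size=64)      # e.g., ~16 labels per class

    # Train each "layer" only on unlabeled data, stacking layer on layer.
    # (Two stacked PCAs compose into one linear map; real pipelines gain
    # from nonlinear one-layer learners such as auto-encoder variants.)
    layer1 = PCA(n_components=128).fit(X_unlabeled)
    layer2 = PCA(n_components=32).fit(layer1.transform(X_unlabeled))

    def represent(X):
        # Map raw inputs through the stacked unsupervised layers.
        return layer2.transform(layer1.transform(X))

    # Simple linear classifier on top, trained on the few labeled examples.
    clf = LogisticRegression(max_iter=1000).fit(represent(X_labeled), y_labeled)
    print(clf.score(represent(X_labeled), y_labeled))

In this regime the representation, not the classifier, does the heavy lifting: with only 1 to 64 labels per class there is far too little supervision to learn good features directly.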

Cite this Paper


BibTeX
@InProceedings{pmlr-v27-mesnil12a,
  title     = {Unsupervised and Transfer Learning Challenge: a Deep Learning Approach},
  author    = {Grégoire Mesnil and Yann Dauphin and Xavier Glorot and Salah Rifai and Yoshua Bengio and Ian Goodfellow and Erick Lavoie and Xavier Muller and Guillaume Desjardins and David Warde-Farley and Pascal Vincent and Aaron Courville and James Bergstra},
  pages     = {97--110},
  year      = {2012},
  editor    = {Isabelle Guyon and Gideon Dror and Vincent Lemaire and Graham Taylor and Daniel Silver},
  volume    = {27},
  series    = {Proceedings of Machine Learning Research},
  address   = {Bellevue, Washington, USA},
  month     = {02 Jul},
  publisher = {JMLR Workshop and Conference Proceedings},
  pdf       = {http://proceedings.mlr.press/v27/mesnil12a/mesnil12a.pdf},
  url       = {http://proceedings.mlr.press/v27/mesnil12a.html},
  abstract  = {Learning good representations from a large set of unlabeled data is a particularly challenging task. Recent work (see Bengio (2009) for a review) shows that training deep architectures is a good way to extract such representations, by extracting and disentangling gradually higher-level factors of variation characterizing the input distribution. In this paper, we describe different kinds of layers we trained for learning representations in the setting of the Unsupervised and Transfer Learning Challenge. The strategy of our team won the final phase of the challenge. It combined and stacked different one-layer unsupervised learning algorithms, adapted to each of the five datasets of the competition. This paper describes that strategy and the particular one-layer learning algorithms feeding a simple linear classifier with a tiny number of labeled training samples (1 to 64 per class).}
}
Endnote
%0 Conference Paper
%T Unsupervised and Transfer Learning Challenge: a Deep Learning Approach
%A Grégoire Mesnil
%A Yann Dauphin
%A Xavier Glorot
%A Salah Rifai
%A Yoshua Bengio
%A Ian Goodfellow
%A Erick Lavoie
%A Xavier Muller
%A Guillaume Desjardins
%A David Warde-Farley
%A Pascal Vincent
%A Aaron Courville
%A James Bergstra
%B Proceedings of ICML Workshop on Unsupervised and Transfer Learning
%C Proceedings of Machine Learning Research
%D 2012
%E Isabelle Guyon
%E Gideon Dror
%E Vincent Lemaire
%E Graham Taylor
%E Daniel Silver
%F pmlr-v27-mesnil12a
%I PMLR
%J Proceedings of Machine Learning Research
%P 97--110
%U http://proceedings.mlr.press
%V 27
%W PMLR
%X Learning good representations from a large set of unlabeled data is a particularly challenging task. Recent work (see Bengio (2009) for a review) shows that training deep architectures is a good way to extract such representations, by extracting and disentangling gradually higher-level factors of variation characterizing the input distribution. In this paper, we describe different kinds of layers we trained for learning representations in the setting of the Unsupervised and Transfer Learning Challenge. The strategy of our team won the final phase of the challenge. It combined and stacked different one-layer unsupervised learning algorithms, adapted to each of the five datasets of the competition. This paper describes that strategy and the particular one-layer learning algorithms feeding a simple linear classifier with a tiny number of labeled training samples (1 to 64 per class).
RIS
TY - CPAPER
TI - Unsupervised and Transfer Learning Challenge: a Deep Learning Approach
AU - Grégoire Mesnil
AU - Yann Dauphin
AU - Xavier Glorot
AU - Salah Rifai
AU - Yoshua Bengio
AU - Ian Goodfellow
AU - Erick Lavoie
AU - Xavier Muller
AU - Guillaume Desjardins
AU - David Warde-Farley
AU - Pascal Vincent
AU - Aaron Courville
AU - James Bergstra
BT - Proceedings of ICML Workshop on Unsupervised and Transfer Learning
PY - 2012/06/27
DA - 2012/06/27
ED - Isabelle Guyon
ED - Gideon Dror
ED - Vincent Lemaire
ED - Graham Taylor
ED - Daniel Silver
ID - pmlr-v27-mesnil12a
PB - PMLR
SP - 97
DP - PMLR
EP - 110
L1 - http://proceedings.mlr.press/v27/mesnil12a/mesnil12a.pdf
UR - http://proceedings.mlr.press/v27/mesnil12a.html
AB - Learning good representations from a large set of unlabeled data is a particularly challenging task. Recent work (see Bengio (2009) for a review) shows that training deep architectures is a good way to extract such representations, by extracting and disentangling gradually higher-level factors of variation characterizing the input distribution. In this paper, we describe different kinds of layers we trained for learning representations in the setting of the Unsupervised and Transfer Learning Challenge. The strategy of our team won the final phase of the challenge. It combined and stacked different one-layer unsupervised learning algorithms, adapted to each of the five datasets of the competition. This paper describes that strategy and the particular one-layer learning algorithms feeding a simple linear classifier with a tiny number of labeled training samples (1 to 64 per class).
ER -
APA
Mesnil, G., Dauphin, Y., Glorot, X., Rifai, S., Bengio, Y., Goodfellow, I., Lavoie, E., Muller, X., Desjardins, G., Warde-Farley, D., Vincent, P., Courville, A. & Bergstra, J. (2012). Unsupervised and Transfer Learning Challenge: a Deep Learning Approach. Proceedings of ICML Workshop on Unsupervised and Transfer Learning, in PMLR 27:97-110

Related Material

Download PDF: http://proceedings.mlr.press/v27/mesnil12a/mesnil12a.pdf