Learning Multi-channel Deep Feature Representations for Face Recognition

Xue-wen Chen, Melih Aslan, Kunlei Zhang, Thomas Huang
Proceedings of the 1st International Workshop on Feature Extraction: Modern Questions and Challenges at NIPS 2015, PMLR 44:60-71, 2015.

Abstract

Deep learning provides a natural way to obtain feature representations from data without relying on hand-crafted descriptors. In this paper, we propose to learn deep feature representations using unsupervised and supervised learning in a cascaded fashion to produce generically descriptive yet class specific features. The proposed method can take full advantage of the availability of large-scale unlabeled data and learn discriminative features (supervised) from generic features (unsupervised). It is then applied to multiple essential facial regions to obtain multi-channel deep facial representations for face recognition. The efficacy of the proposed feature representations is validated on both controlled (i.e., extended Yale B, Yale, and AR) and uncontrolled (PubFig) benchmark face databases. Experimental results show its effectiveness.

Cite this Paper


BibTeX
@InProceedings{pmlr-v44-chen15learning,
  title     = {Learning Multi-channel Deep Feature Representations for Face Recognition},
  author    = {Chen, Xue-wen and Aslan, Melih and Zhang, Kunlei and Huang, Thomas},
  booktitle = {Proceedings of the 1st International Workshop on Feature Extraction: Modern Questions and Challenges at NIPS 2015},
  pages     = {60--71},
  year      = {2015},
  editor    = {Storcheus, Dmitry and Rostamizadeh, Afshin and Kumar, Sanjiv},
  volume    = {44},
  series    = {Proceedings of Machine Learning Research},
  address   = {Montreal, Canada},
  month     = {11 Dec},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v44/chen15learning.pdf},
  url       = {https://proceedings.mlr.press/v44/chen15learning.html},
  abstract  = {Deep learning provides a natural way to obtain feature representations from data without relying on hand-crafted descriptors. In this paper, we propose to learn deep feature representations using unsupervised and supervised learning in a cascaded fashion to produce generically descriptive yet class specific features. The proposed method can take full advantage of the availability of large-scale unlabeled data and learn discriminative features (supervised) from generic features (unsupervised). It is then applied to multiple essential facial regions to obtain multi-channel deep facial representations for face recognition. The efficacy of the proposed feature representations is validated on both controlled (i.e., extended Yale B, Yale, and AR) and uncontrolled (PubFig) benchmark face databases. Experimental results show its effectiveness.}
}
Endnote
%0 Conference Paper
%T Learning Multi-channel Deep Feature Representations for Face Recognition
%A Xue-wen Chen
%A Melih Aslan
%A Kunlei Zhang
%A Thomas Huang
%B Proceedings of the 1st International Workshop on Feature Extraction: Modern Questions and Challenges at NIPS 2015
%C Proceedings of Machine Learning Research
%D 2015
%E Dmitry Storcheus
%E Afshin Rostamizadeh
%E Sanjiv Kumar
%F pmlr-v44-chen15learning
%I PMLR
%P 60--71
%U https://proceedings.mlr.press/v44/chen15learning.html
%V 44
%X Deep learning provides a natural way to obtain feature representations from data without relying on hand-crafted descriptors. In this paper, we propose to learn deep feature representations using unsupervised and supervised learning in a cascaded fashion to produce generically descriptive yet class specific features. The proposed method can take full advantage of the availability of large-scale unlabeled data and learn discriminative features (supervised) from generic features (unsupervised). It is then applied to multiple essential facial regions to obtain multi-channel deep facial representations for face recognition. The efficacy of the proposed feature representations is validated on both controlled (i.e., extended Yale B, Yale, and AR) and uncontrolled (PubFig) benchmark face databases. Experimental results show its effectiveness.
RIS
TY - CPAPER
TI - Learning Multi-channel Deep Feature Representations for Face Recognition
AU - Xue-wen Chen
AU - Melih Aslan
AU - Kunlei Zhang
AU - Thomas Huang
BT - Proceedings of the 1st International Workshop on Feature Extraction: Modern Questions and Challenges at NIPS 2015
DA - 2015/12/08
ED - Dmitry Storcheus
ED - Afshin Rostamizadeh
ED - Sanjiv Kumar
ID - pmlr-v44-chen15learning
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 44
SP - 60
EP - 71
L1 - http://proceedings.mlr.press/v44/chen15learning.pdf
UR - https://proceedings.mlr.press/v44/chen15learning.html
AB - Deep learning provides a natural way to obtain feature representations from data without relying on hand-crafted descriptors. In this paper, we propose to learn deep feature representations using unsupervised and supervised learning in a cascaded fashion to produce generically descriptive yet class specific features. The proposed method can take full advantage of the availability of large-scale unlabeled data and learn discriminative features (supervised) from generic features (unsupervised). It is then applied to multiple essential facial regions to obtain multi-channel deep facial representations for face recognition. The efficacy of the proposed feature representations is validated on both controlled (i.e., extended Yale B, Yale, and AR) and uncontrolled (PubFig) benchmark face databases. Experimental results show its effectiveness.
ER -
APA
Chen, X., Aslan, M., Zhang, K. & Huang, T. (2015). Learning Multi-channel Deep Feature Representations for Face Recognition. Proceedings of the 1st International Workshop on Feature Extraction: Modern Questions and Challenges at NIPS 2015, in Proceedings of Machine Learning Research 44:60-71. Available from https://proceedings.mlr.press/v44/chen15learning.html.