Learning Discriminative Features using Center Loss and Reconstruction as Regularizer for Speech Emotion Recognition

Suraj Tripathi, Abhiram Ramesh, Abhay Kumar, Chirag Singh, Promod Yenigalla
Proceedings of IJCAI 2019 3rd Workshop on Artificial Intelligence in Affective Computing, PMLR 122:44-53, 2020.

Abstract

This paper proposes a Convolutional Neural Network (CNN) inspired by Multitask Learning (MTL) and based on speech features, trained under the joint supervision of softmax loss and center loss, a powerful metric learning strategy, for the recognition of emotion in speech. Speech features such as spectrograms and Mel-frequency Cepstral Coefficients (MFCCs) help retain emotion-related low-level characteristics in speech. We experimented with several Deep Neural Network (DNN) architectures that take speech features as input and trained them under both softmax and center loss, which resulted in highly discriminative features ideal for Speech Emotion Recognition (SER). Our networks also achieve a regularizing effect by simultaneously performing the auxiliary task of reconstructing the input speech features. This sharing of representations among related tasks enables our networks to generalize better on the primary task of SER. Some of our proposed networks contain far fewer parameters than state-of-the-art architectures. We used the University of Southern California’s Interactive Emotional Motion Capture (USC-IEMOCAP) database in this work. Our best-performing model achieves a 3.1% improvement in overall accuracy and a 5.3% improvement in class accuracy compared to existing state-of-the-art methods.
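The training objective described in the abstract combines three terms: a softmax cross-entropy loss for classification, a center loss that pulls each utterance's features toward its emotion-class center, and a reconstruction loss from the auxiliary decoder. A minimal NumPy sketch of such a joint objective is shown below; the weighting coefficients `lam` and `beta`, and the function names, are illustrative assumptions, not values or APIs from the paper.

```python
import numpy as np

def softmax_cross_entropy(logits, labels):
    # Numerically stable softmax cross-entropy, averaged over the batch.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def center_loss(features, labels, centers):
    # Half the mean squared distance from each feature vector to the
    # center of its emotion class (Wen et al.'s center loss formulation).
    diffs = features - centers[labels]
    return 0.5 * (diffs ** 2).sum(axis=1).mean()

def joint_loss(logits, features, labels, centers, recon, target,
               lam=0.003, beta=0.1):
    # Total objective: classification loss + lam * center loss
    # + beta * reconstruction MSE. lam and beta are placeholder weights.
    l_softmax = softmax_cross_entropy(logits, labels)
    l_center = center_loss(features, labels, centers)
    l_recon = ((recon - target) ** 2).mean()
    return l_softmax + lam * l_center + beta * l_recon
```

In a real network the class centers would be learnable parameters updated alongside the CNN weights, and `target` would be the input spectrogram or MFCC map that the auxiliary decoder tries to reconstruct.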

Cite this Paper


BibTeX
@InProceedings{pmlr-v122-tripathi20a,
  title     = {Learning Discriminative Features using Center Loss and Reconstruction as Regularizer for Speech Emotion Recognition},
  author    = {Tripathi, Suraj and Ramesh, Abhiram and Kumar, Abhay and Singh, Chirag and Yenigalla, Promod},
  booktitle = {Proceedings of IJCAI 2019 3rd Workshop on Artificial Intelligence in Affective Computing},
  pages     = {44--53},
  year      = {2020},
  editor    = {Hsu, William},
  volume    = {122},
  series    = {Proceedings of Machine Learning Research},
  month     = {10 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v122/tripathi20a/tripathi20a.pdf},
  url       = {https://proceedings.mlr.press/v122/tripathi20a.html},
  abstract  = {This paper proposes a Convolutional Neural Network (CNN) inspired by Multitask Learning (MTL) and based on speech features trained under the joint supervision of softmax loss and center loss, a powerful metric learning strategy, for the recognition of emotion in speech. Speech features such as Spectrograms and Mel-frequency Cepstral Coefficients (MFCCs) help retain emotion related low-level characteristics in speech. We experimented with several Deep Neural Network (DNN) architectures that take in speech features as input and trained them under both softmax and center loss, which resulted in highly discriminative features ideal for Speech Emotion Recognition (SER). Our networks also employ a regularizing effect by simultaneously performing the auxiliary task of reconstructing the input speech features. This sharing of representations among related tasks enables our network to better generalize the original task of SER. Some of our proposed networks contain far fewer parameters when compared to state-of-the-art architectures. We used the University of Southern California’s Interactive Emotional Motion Capture (USC-IEMOCAP) database in this work. Our best performing model achieves a 3.1% improvement in overall accuracy and a 5.3% improvement in class accuracy when compared to existing state-of-the-art methods.}
}
Endnote
%0 Conference Paper
%T Learning Discriminative Features using Center Loss and Reconstruction as Regularizer for Speech Emotion Recognition
%A Suraj Tripathi
%A Abhiram Ramesh
%A Abhay Kumar
%A Chirag Singh
%A Promod Yenigalla
%B Proceedings of IJCAI 2019 3rd Workshop on Artificial Intelligence in Affective Computing
%C Proceedings of Machine Learning Research
%D 2020
%E William Hsu
%F pmlr-v122-tripathi20a
%I PMLR
%P 44--53
%U https://proceedings.mlr.press/v122/tripathi20a.html
%V 122
%X This paper proposes a Convolutional Neural Network (CNN) inspired by Multitask Learning (MTL) and based on speech features trained under the joint supervision of softmax loss and center loss, a powerful metric learning strategy, for the recognition of emotion in speech. Speech features such as Spectrograms and Mel-frequency Cepstral Coefficients (MFCCs) help retain emotion related low-level characteristics in speech. We experimented with several Deep Neural Network (DNN) architectures that take in speech features as input and trained them under both softmax and center loss, which resulted in highly discriminative features ideal for Speech Emotion Recognition (SER). Our networks also employ a regularizing effect by simultaneously performing the auxiliary task of reconstructing the input speech features. This sharing of representations among related tasks enables our network to better generalize the original task of SER. Some of our proposed networks contain far fewer parameters when compared to state-of-the-art architectures. We used the University of Southern California’s Interactive Emotional Motion Capture (USC-IEMOCAP) database in this work. Our best performing model achieves a 3.1% improvement in overall accuracy and a 5.3% improvement in class accuracy when compared to existing state-of-the-art methods.
APA
Tripathi, S., Ramesh, A., Kumar, A., Singh, C. & Yenigalla, P. (2020). Learning Discriminative Features using Center Loss and Reconstruction as Regularizer for Speech Emotion Recognition. Proceedings of IJCAI 2019 3rd Workshop on Artificial Intelligence in Affective Computing, in Proceedings of Machine Learning Research 122:44-53. Available from https://proceedings.mlr.press/v122/tripathi20a.html.