Facial Expression and Peripheral Physiology Fusion to Decode Individualized Affective Experience

Yu Yin, Mohsen Nabian, Sarah Ostadabbas
Proceedings of IJCAI 2018 2nd Workshop on Artificial Intelligence in Affective Computing, PMLR 86:10-26, 2020.

Abstract

Affective experience prediction using different data modalities measured from an individual, such as their facial expressions or physiological signals, has received substantial research attention in recent years. However, most studies ignore the fact that people, besides having different responses under affective stimuli, may also have different resting dynamics (embedded in both facial and physiological patterns) to begin with. In this paper, we present a multimodal approach that simultaneously analyzes facial movements and several peripheral physiological signals to decode individualized affective experiences under positive and negative emotional contexts, while accounting for each person's resting dynamics. We propose a person-specific recurrence network to quantify the dynamics present in the person's facial movements and physiological data. Facial movement is represented using a robust head vs. 3D face landmark localization and tracking approach, and physiological data are processed by extracting known attributes related to the underlying affective experience. The dynamical coupling between the input modalities is then assessed by extracting several complex recurrence network metrics. Inference models are trained on these metrics as features to predict an individual's affective experience in a given context, after their resting dynamics are excluded from their response. We validated our approach on a multimodal dataset consisting of (i) facial videos and (ii) several peripheral physiological signals, synchronously recorded from 12 participants while they watched 4 emotion-eliciting video-based stimuli. The prediction results show that our multimodal fusion method improves prediction accuracy by up to 19% compared to prediction using only one or a subset of the input modalities. Furthermore, accounting for individualized resting dynamics yields an additional improvement in affective experience prediction.
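The pipeline outlined above (a recurrence network built over fused facial and physiological features, complex-network metrics used as classifier inputs, and removal of each person's resting dynamics) can be illustrated with a minimal Python sketch. This is not the authors' implementation: the feature layout, the fixed-recurrence-rate thresholding rule, the particular network metrics, and the baseline-subtraction step are all illustrative assumptions.

```python
# Minimal sketch (not the paper's code): build a person-specific recurrence
# network from a multimodal feature time series and extract a few
# complex-network metrics as classifier features.
import numpy as np
import networkx as nx
from scipy.spatial.distance import pdist, squareform

def recurrence_network(features, recurrence_rate=0.1):
    """Threshold pairwise distances between time points so that a fixed
    fraction of point pairs (the recurrence rate) become edges."""
    # features: (T, d) array, e.g., facial landmark trajectories concatenated
    # with per-window physiological attributes (an assumption made here).
    dist = squareform(pdist(features, metric="euclidean"))
    # Choose the distance threshold that yields the requested recurrence rate.
    eps = np.quantile(dist[np.triu_indices_from(dist, k=1)], recurrence_rate)
    adjacency = (dist <= eps).astype(int)
    np.fill_diagonal(adjacency, 0)
    return nx.from_numpy_array(adjacency)

def network_metrics(graph):
    """A few standard recurrence-network metrics used as features."""
    return np.array([
        nx.density(graph),
        nx.transitivity(graph),        # global clustering coefficient
        nx.average_clustering(graph),
        np.mean([d for _, d in graph.degree()]),
    ])

# Toy usage: subtract metrics computed on a resting-state segment to mimic
# excluding a person's resting dynamics before classification (an assumption
# about how the baseline correction could be applied).
rng = np.random.default_rng(0)
rest = network_metrics(recurrence_network(rng.standard_normal((200, 8))))
stim = network_metrics(recurrence_network(rng.standard_normal((200, 8)) + 0.3))
baseline_corrected_features = stim - rest
print(baseline_corrected_features)
```

The baseline-corrected feature vectors, one per stimulus segment and participant, could then be fed to any standard inference model (e.g., an SVM) to predict the affective experience label.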

Cite this Paper


BibTeX
@InProceedings{pmlr-v86-yin20a,
  title     = {Facial Expression and Peripheral Physiology Fusion to Decode Individualized Affective Experience},
  author    = {Yin, Yu and Nabian, Mohsen and Ostadabbas, Sarah},
  booktitle = {Proceedings of IJCAI 2018 2nd Workshop on Artificial Intelligence in Affective Computing},
  pages     = {10--26},
  year      = {2020},
  editor    = {Hsu, William and Yates, Heath},
  volume    = {86},
  series    = {Proceedings of Machine Learning Research},
  month     = {15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v86/yin20a/yin20a.pdf},
  url       = {http://proceedings.mlr.press/v86/yin20a.html}
}
Endnote
%0 Conference Paper
%T Facial Expression and Peripheral Physiology Fusion to Decode Individualized Affective Experience
%A Yu Yin
%A Mohsen Nabian
%A Sarah Ostadabbas
%B Proceedings of IJCAI 2018 2nd Workshop on Artificial Intelligence in Affective Computing
%C Proceedings of Machine Learning Research
%D 2020
%E William Hsu
%E Heath Yates
%F pmlr-v86-yin20a
%I PMLR
%P 10--26
%U http://proceedings.mlr.press/v86/yin20a.html
%V 86
APA
Yin, Y., Nabian, M., & Ostadabbas, S. (2020). Facial Expression and Peripheral Physiology Fusion to Decode Individualized Affective Experience. Proceedings of IJCAI 2018 2nd Workshop on Artificial Intelligence in Affective Computing, in Proceedings of Machine Learning Research 86:10-26. Available from http://proceedings.mlr.press/v86/yin20a.html.
