DeepFaceLIFT: Interpretable Personalized Models for Automatic Estimation of Self-Reported Pain

Dianbo Liu, Peng Fengjiao, Ognjen (Oggi) Rudovic, Rosalind Picard
Proceedings of IJCAI 2017 Workshop on Artificial Intelligence in Affective Computing, PMLR 66:1-16, 2017.

Abstract

Previous research on automatic pain estimation from facial expressions has focused primarily on “one-size-fits-all” metrics (such as PSPI). In this work, we focus on directly estimating each individual’s self-reported visual-analog scale (VAS) pain metric, as this is considered the gold standard for pain measurement. The VAS pain score is highly subjective and context-dependent, and its range can vary significantly among different persons. To tackle these issues, we propose a novel two-stage personalized model, named DeepFaceLIFT, for automatic estimation of VAS. This model is based on (1) neural network and (2) Gaussian process regression models, and is used to personalize the estimation of self-reported pain via a set of hand-crafted personal features and multi-task learning. We show on the benchmark dataset for pain analysis (the UNBC-McMaster Shoulder Pain Expression Archive) that the proposed personalized model largely outperforms traditional, unpersonalized models: the intra-class correlation improves from a baseline performance of 19% to a personalized performance of 35%, while also providing confidence in the model’s estimates, in contrast to existing models for the target task. Additionally, DeepFaceLIFT automatically discovers the pain-relevant facial regions for each person, allowing for an easy interpretation of the pain-related facial cues.
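The abstract describes a two-stage pipeline: a neural network maps per-frame facial features to frame-level pain estimates, and Gaussian process regression over sequence-level statistics plus hand-crafted personal features produces a VAS estimate with an accompanying confidence. The following is a minimal Python sketch of such a two-stage design; the feature dimensions, pooling statistics, personal features, and hyperparameters are illustrative assumptions rather than the authors' configuration, and the paper's multi-task learning component is omitted.

# Minimal sketch of a two-stage NN + GP pipeline, loosely following the
# abstract. All data, shapes, and settings below are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Toy data: 40 video sequences of 100 frames, 132-D per-frame facial features.
n_seq, n_frames, n_feat = 40, 100, 132
X = rng.normal(size=(n_seq, n_frames, n_feat))   # per-frame features
vas = rng.uniform(0, 10, size=n_seq)             # sequence-level VAS labels
personal = rng.normal(size=(n_seq, 4))           # hand-crafted personal features

# Stage 1: a neural network maps each frame to a scalar pain estimate
# (trained here against the sequence VAS broadcast to frames as a weak label).
nn = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
nn.fit(X.reshape(-1, n_feat), np.repeat(vas, n_frames))
frame_scores = nn.predict(X.reshape(-1, n_feat)).reshape(n_seq, n_frames)

# Pool frame-level scores into sequence statistics; append personal features.
stats = np.column_stack([frame_scores.mean(1), frame_scores.max(1),
                         frame_scores.std(1)])
Z = np.hstack([stats, personal])

# Stage 2: GP regression yields a VAS estimate plus a confidence (std. dev.).
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(Z[:30], vas[:30])
mean, std = gp.predict(Z[30:], return_std=True)
print(mean[:3], std[:3])

The GP in stage 2 is what distinguishes this design from a single end-to-end network: it returns a predictive standard deviation alongside each estimate, which is the source of the per-estimate confidence the abstract mentions.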

Cite this Paper


BibTeX
@InProceedings{pmlr-v66-liu17a,
  title     = {DeepFaceLIFT: Interpretable Personalized Models for Automatic Estimation of Self-Reported Pain},
  author    = {Liu, Dianbo and Fengjiao, Peng and Rudovic, Ognjen (Oggi) and Picard, Rosalind},
  booktitle = {Proceedings of IJCAI 2017 Workshop on Artificial Intelligence in Affective Computing},
  pages     = {1--16},
  year      = {2017},
  editor    = {Lawrence, Neil and Reid, Mark},
  volume    = {66},
  series    = {Proceedings of Machine Learning Research},
  month     = {20 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v66/liu17a/liu17a.pdf},
  url       = {https://proceedings.mlr.press/v66/liu17a.html},
  abstract  = {Previous research on automatic pain estimation from facial expressions has focused primarily on “one-size-fits-all” metrics (such as PSPI). In this work, we focus on directly estimating each individual’s self-reported visual-analog scale (VAS) pain metric, as this is considered the gold standard for pain measurement. The VAS pain score is highly subjective and context-dependent, and its range can vary significantly among different persons. To tackle these issues, we propose a novel two-stage personalized model, named DeepFaceLIFT, for automatic estimation of VAS. This model is based on (1) neural network and (2) Gaussian process regression models, and is used to personalize the estimation of self-reported pain via a set of hand-crafted personal features and multi-task learning. We show on the benchmark dataset for pain analysis (the UNBC-McMaster Shoulder Pain Expression Archive) that the proposed personalized model largely outperforms traditional, unpersonalized models: the intra-class correlation improves from a baseline performance of 19% to a personalized performance of 35%, while also providing confidence in the model’s estimates, in contrast to existing models for the target task. Additionally, DeepFaceLIFT automatically discovers the pain-relevant facial regions for each person, allowing for an easy interpretation of the pain-related facial cues.}
}
Endnote
%0 Conference Paper
%T DeepFaceLIFT: Interpretable Personalized Models for Automatic Estimation of Self-Reported Pain
%A Dianbo Liu
%A Peng Fengjiao
%A Ognjen (Oggi) Rudovic
%A Rosalind Picard
%B Proceedings of IJCAI 2017 Workshop on Artificial Intelligence in Affective Computing
%C Proceedings of Machine Learning Research
%D 2017
%E Neil Lawrence
%E Mark Reid
%F pmlr-v66-liu17a
%I PMLR
%P 1--16
%U https://proceedings.mlr.press/v66/liu17a.html
%V 66
%X Previous research on automatic pain estimation from facial expressions has focused primarily on “one-size-fits-all” metrics (such as PSPI). In this work, we focus on directly estimating each individual’s self-reported visual-analog scale (VAS) pain metric, as this is considered the gold standard for pain measurement. The VAS pain score is highly subjective and context-dependent, and its range can vary significantly among different persons. To tackle these issues, we propose a novel two-stage personalized model, named DeepFaceLIFT, for automatic estimation of VAS. This model is based on (1) neural network and (2) Gaussian process regression models, and is used to personalize the estimation of self-reported pain via a set of hand-crafted personal features and multi-task learning. We show on the benchmark dataset for pain analysis (the UNBC-McMaster Shoulder Pain Expression Archive) that the proposed personalized model largely outperforms traditional, unpersonalized models: the intra-class correlation improves from a baseline performance of 19% to a personalized performance of 35%, while also providing confidence in the model’s estimates, in contrast to existing models for the target task. Additionally, DeepFaceLIFT automatically discovers the pain-relevant facial regions for each person, allowing for an easy interpretation of the pain-related facial cues.
APA
Liu, D., Fengjiao, P., Rudovic, O. & Picard, R. (2017). DeepFaceLIFT: Interpretable Personalized Models for Automatic Estimation of Self-Reported Pain. Proceedings of IJCAI 2017 Workshop on Artificial Intelligence in Affective Computing, in Proceedings of Machine Learning Research 66:1-16. Available from https://proceedings.mlr.press/v66/liu17a.html.

Related Material

Download PDF: http://proceedings.mlr.press/v66/liu17a/liu17a.pdf