Fast Adaptation of Deep Models for Facial Action Unit Detection Using Model-Agnostic Meta-Learning
Proceedings of IJCAI 2019 3rd Workshop on Artificial Intelligence in Affective Computing, PMLR 122:9-27, 2020.
Abstract
Detecting facial action unit (AU) activations is one of the key
steps in automatic recognition of facial expressions of human
emotion and cognitive states. While various approaches have been
proposed for this task, most are trained only for a specific
(sub)set of AUs. As such, they cannot easily adapt to detecting new
AUs that were not used to train the original models. In this paper,
we propose a deep learning
approach for facial AU detection that can adapt to a new AU and/or
target subject by leveraging only a few labeled samples from the new
task (either an AU or subject). We use the notion of model-agnostic
meta-learning (MAML), originally proposed for general image
recognition/detection tasks, to design our deep learning models for
AU detection. Specifically, each subject and/or AU is
treated as a new learning task and the model learns to adapt based
on the knowledge of the previously seen tasks. We show on two
benchmark datasets (BP4D and DISFA) for facial AU detection that the
proposed approach can easily be adapted to new tasks. By using as
few as one or five labeled examples from the target task, our
approach achieves large improvements over the baseline (non-adapted)
deep models.
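
To make the adaptation scheme described above concrete, the following is a minimal MAML-style sketch in PyTorch: each task corresponds to one AU (or subject), the inner loop adapts the model on a few labeled support frames (e.g., one or five), and the outer loop updates the meta-parameters using query frames from the same task. The model architecture, function names (AUDetector, adapted_logits, maml_outer_step), and hyper-parameters are illustrative assumptions, not the authors' implementation.

# Minimal MAML-style sketch for few-shot AU detection (illustrative, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AUDetector(nn.Module):
    """Toy binary AU-activation classifier over precomputed face features (assumed input)."""
    def __init__(self, feat_dim=256, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)  # logit for AU active / inactive

def adapted_logits(model, x, fast_weights):
    """Forward pass using task-adapted ('fast') weights instead of the meta-parameters."""
    w1, b1, w2, b2 = fast_weights
    h = F.relu(F.linear(x, w1, b1))
    return F.linear(h, w2, b2).squeeze(-1)

def maml_outer_step(model, meta_opt, tasks, inner_lr=0.01, inner_steps=1):
    """One meta-update over a batch of tasks.
    Each task = (support_x, support_y, query_x, query_y); the support set holds
    the K (e.g., 1 or 5) labeled frames of that AU/subject."""
    meta_loss = 0.0
    for sx, sy, qx, qy in tasks:
        fast = list(model.parameters())  # start the inner loop from the meta-parameters
        for _ in range(inner_steps):     # inner loop: adapt to this task
            loss = F.binary_cross_entropy_with_logits(adapted_logits(model, sx, fast), sy)
            grads = torch.autograd.grad(loss, fast, create_graph=True)
            fast = [p - inner_lr * g for p, g in zip(fast, grads)]
        # outer loss: the adapted model evaluated on held-out query frames of the same task
        meta_loss = meta_loss + F.binary_cross_entropy_with_logits(
            adapted_logits(model, qx, fast), qy)
    meta_opt.zero_grad()
    (meta_loss / len(tasks)).backward()  # backpropagate through the inner-loop updates
    meta_opt.step()

At test time, the same inner-loop update would be run once on the few labeled frames of an unseen AU or subject, and the resulting adapted weights used for prediction on that target task.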