Learning via Social Awareness: Improving a Deep Generative Sketching Model with Facial Feedback
Proceedings of IJCAI 2018 2nd Workshop on Artificial Intelligence in Affective Computing, PMLR 86:1-9, 2020.
A known deficit of modern machine learning (ML) and deep learning (DL) methodology is that models must be carefully fine-tuned in order to solve a particular task. Most algorithms cannot generalize well to even highly similar tasks, let alone exhibit signs of artificial general intelligence (AGI). To address this problem, researchers have explored developing loss functions that act as intrinsic motivators that could drive an ML or DL agent to learn across a number of domains. This paper argues that an important and useful intrinsic motivator is that of social interaction. We posit that making an AI agent aware of implicit social feedback from humans can allow for faster learning of more generalizable and useful representations, and could potentially impact AI safety. We collect social feedback in the form of facial expression reactions to samples from Sketch RNN, an LSTM-based variational autoencoder (VAE) designed to produce sketch drawings. We use a Latent Constraints GAN (LC-GAN) to learn from the facial feedback of a small group of viewers, by optimizing the model to produce sketches that it predicts will lead to more positive facial expressions. We show in multiple independent evaluations that the model trained with facial feedback produced sketches that are more highly rated, and induce significantly more positive facial expressions. Thus, we establish that implicit social feedback can improve the output of a deep learning model.
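The core loop the abstract describes is: decode a latent vector into a sketch, score the sketch with a predictor trained on facial-expression reactions, and steer the latent space toward higher predicted positivity. The sketch below illustrates that idea in a deliberately minimal form using gradient ascent on a toy reward surface; the decoder, the positivity predictor, and all names here are placeholder assumptions, not the paper's actual Sketch RNN or LC-GAN components (the paper trains a GAN over the latent space rather than running gradient ascent directly).

```python
import numpy as np

# Toy stand-ins (assumptions, not the paper's models): a linear "decoder"
# mapping a latent vector z to sketch features, and a reward function
# approximating a predictor of how positive a viewer's facial reaction
# would be to the decoded sketch.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))      # toy decoder weights
z_star = rng.standard_normal(4)      # latent assumed to maximize predicted reward

def decode(z):
    """Map a latent vector to (placeholder) sketch features."""
    return W @ z

def predicted_positivity(z):
    # Higher when z is closer to z_star; a real system would instead score
    # decode(z) with a model trained on recorded facial expressions.
    return -np.sum((z - z_star) ** 2)

# Steer the latent toward sketches the predictor expects to elicit more
# positive expressions, via gradient ascent on the toy reward.
z = np.zeros(4)
lr = 0.1
for _ in range(200):
    grad = -2.0 * (z - z_star)       # analytic gradient of the toy reward
    z = z + lr * grad
```

After the loop, `z` sits near the high-reward region of the toy latent space; the paper's LC-GAN achieves an analogous effect by learning a generator constrained to high-reward latents rather than optimizing a single latent vector.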