Context-Aware Human Behaviour Forecasting in Dyadic Interactions
Understanding Social Behavior in Dyadic and Small Group Interactions, PMLR 173:88-106, 2022.
Non-verbal behaviours play an indispensable role in social interaction. People use a wide range of non-verbal channels, including eye gaze and body and facial gestures, to communicate their intentions and emotions to their interacting partners. Such social signals help the communicator's verbal messages reach other interlocutors in a clear and transparent manner. At the same time, an essential aspect of communicative behaviour is the dynamic exchange of non-verbal signals among interlocutors, through which they adapt to current social norms and build common ground. This suggests that data observed from the interacting partners should be considered when modelling the target individual's behaviours. Our paper introduces a context-aware generative framework that captures the influence of the interacting partner's non-verbal signals on the target individual. The model consists of three components: a Context Encoder, a Generator, and a Discriminator. The Context Encoder extracts social signals observed from the interacting partner, while the Generator and Discriminator generate and optimize the target person's gestures. We verify the effectiveness of the framework on two dyadic interaction datasets. The experimental results demonstrate that, compared to baselines, our solution produces human-like gestures that better support the interaction context. In dyadic interaction, the influence of the interacting partner's social signals on the target individual is clearly observable, and the proposed approach captures those effects efficiently. The source code of our framework can be found at https://github.com/sairlab/Context-Aware-Human-Behavior-Forecasting.
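The three-component data flow described above can be sketched as follows. This is a minimal illustrative sketch in NumPy, not the authors' implementation: the class names, layer shapes, random-initialised linear layers, and mean-pooling summaries are all assumptions standing in for the learned, adversarially trained networks in the paper; it only shows how the partner's signal conditions the target person's forecast.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(in_dim, out_dim):
    # Illustrative randomly initialised linear layer: (weights, bias).
    return rng.standard_normal((in_dim, out_dim)) * 0.1, np.zeros(out_dim)

def apply(layer, x):
    W, b = layer
    return x @ W + b

class ContextEncoder:
    """Summarises the partner's observed signals into a context vector."""
    def __init__(self, feat_dim, ctx_dim):
        self.proj = linear(feat_dim, ctx_dim)
    def __call__(self, partner_seq):               # (T, feat_dim)
        h = np.tanh(apply(self.proj, partner_seq))
        return h.mean(axis=0)                      # temporal pooling -> (ctx_dim,)

class Generator:
    """Predicts the target person's future poses from their past and the context."""
    def __init__(self, feat_dim, ctx_dim, horizon):
        self.feat_dim, self.horizon = feat_dim, horizon
        self.out = linear(feat_dim + ctx_dim, horizon * feat_dim)
    def __call__(self, target_past, context):      # (T, feat_dim), (ctx_dim,)
        summary = target_past.mean(axis=0)         # crude history summary
        y = apply(self.out, np.concatenate([summary, context]))
        return y.reshape(self.horizon, self.feat_dim)

class Discriminator:
    """Scores a pose sequence as real (towards 1) or generated (towards 0)."""
    def __init__(self, feat_dim):
        self.score = linear(feat_dim, 1)
    def __call__(self, seq):                       # (T, feat_dim)
        logit = apply(self.score, seq.mean(axis=0))[0]
        return 1.0 / (1.0 + np.exp(-logit))        # sigmoid in (0, 1)

# Forecast 10 future frames of 16-D pose features from 25 observed frames.
feat, ctx, T, H = 16, 8, 25, 10
enc, gen, disc = ContextEncoder(feat, ctx), Generator(feat, ctx, H), Discriminator(feat)
partner = rng.standard_normal((T, feat))
target_past = rng.standard_normal((T, feat))
future = gen(target_past, enc(partner))
print(future.shape)                                # (10, 16)
```

In the adversarial setup the Generator would be trained to raise the Discriminator's score on its forecasts while the Discriminator is trained to separate them from ground-truth motion; the sketch omits training and keeps only the conditioning path.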