Few-Shot Character Understanding in Movies as an Assessment to Meta-Learning of Theory-of-Mind

Mo Yu, Qiujing Wang, Shunchi Zhang, Yisi Sang, Kangsheng Pu, Zekai Wei, Han Wang, Liyan Xu, Jing Li, Yue Yu, Jie Zhou
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:57703-57729, 2024.

Abstract

When reading a story, humans can quickly understand new fictional characters from only a few observations, mainly by drawing analogies to fictional and real people they already know. This reflects the few-shot and meta-learning essence of humans’ inference of characters’ mental states, i.e., theory-of-mind (ToM), an aspect largely ignored in existing research. We fill this gap with a novel NLP dataset in a realistic narrative understanding scenario, ToM-in-AMC. Our dataset consists of $\sim$1,000 parsed movie scripts, each corresponding to a few-shot character understanding task that requires models to mimic humans’ ability to quickly digest characters from a few starting scenes of a new movie. We further propose a novel ToM prompting approach designed to explicitly assess the influence of multiple ToM dimensions. It surpasses existing baselines, underscoring the significance of modeling multiple ToM dimensions for our task. Our extensive human study verifies that humans can solve our problem by inferring characters’ mental states based on movies they have previously seen. In comparison, all the AI systems lag $>20\%$ behind humans, highlighting a notable limitation in existing approaches’ ToM capabilities. Code and data are available at https://github.com/ShunchiZhang/ToM-in-AMC.
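To make the task setup concrete, below is a minimal, hypothetical sketch of how one few-shot character understanding task (one movie) could be represented and scored. Every name in it (Scene, FewShotCharacterTask, accuracy) is invented for illustration and does not reflect the released data schema or evaluation code; see the linked repository for the actual format.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical, simplified structures for illustration only; the real
# ToM-in-AMC data schema and evaluation script live in the linked repository.

@dataclass
class Scene:
    """One parsed movie-script scene, with character mentions marked."""
    text: str
    # Placeholder id (e.g. "P0") -> positions in `text` where it appears.
    mentions: Dict[str, List[int]] = field(default_factory=dict)

@dataclass
class FewShotCharacterTask:
    """One movie = one few-shot task.

    `support_scenes` are the opening scenes with true character names shown;
    `query_scenes` anonymize the characters, and a model must map each
    placeholder back to a character it digested from the support scenes.
    """
    movie_id: str
    characters: List[str]                 # candidate character names
    support_scenes: List[Scene]           # "a few starting scenes"
    query_scenes: List[Scene]             # later scenes, mentions anonymized
    gold: Dict[str, str] = field(default_factory=dict)  # placeholder -> name

def accuracy(task: FewShotCharacterTask, predictions: Dict[str, str]) -> float:
    """Fraction of anonymized placeholders mapped to the correct character."""
    if not task.gold:
        return 0.0
    correct = sum(predictions.get(p) == name for p, name in task.gold.items())
    return correct / len(task.gold)
```

Under this sketch, a model reads the named support scenes, predicts which character each anonymized placeholder in the query scenes refers to, and is scored by the fraction of placeholders it maps correctly.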

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-yu24n,
  title     = {Few-Shot Character Understanding in Movies as an Assessment to Meta-Learning of Theory-of-Mind},
  author    = {Yu, Mo and Wang, Qiujing and Zhang, Shunchi and Sang, Yisi and Pu, Kangsheng and Wei, Zekai and Wang, Han and Xu, Liyan and Li, Jing and Yu, Yue and Zhou, Jie},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {57703--57729},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/yu24n/yu24n.pdf},
  url       = {https://proceedings.mlr.press/v235/yu24n.html},
  abstract  = {When reading a story, humans can quickly understand new fictional characters with a few observations, mainly by drawing analogies to fictional and real people they already know. This reflects the few-shot and meta-learning essence of humans’ inference of characters’ mental states, i.e., theory-of-mind (ToM), which is largely ignored in existing research. We fill this gap with a novel NLP dataset in a realistic narrative understanding scenario, ToM-in-AMC. Our dataset consists of $\sim$1,000 parsed movie scripts, each corresponding to a few-shot character understanding task that requires models to mimic humans’ ability of fast digesting characters with a few starting scenes in a new movie. We further propose a novel ToM prompting approach designed to explicitly assess the influence of multiple ToM dimensions. It surpasses existing baseline models, underscoring the significance of modeling multiple ToM dimensions for our task. Our extensive human study verifies that humans are capable of solving our problem by inferring characters’ mental states based on their previously seen movies. In comparison, all the AI systems lag $>20%$ behind humans, highlighting a notable limitation in existing approaches’ ToM capabilities. Code and data are available at https://github.com/ShunchiZhang/ToM-in-AMC}
}
Endnote
%0 Conference Paper
%T Few-Shot Character Understanding in Movies as an Assessment to Meta-Learning of Theory-of-Mind
%A Mo Yu
%A Qiujing Wang
%A Shunchi Zhang
%A Yisi Sang
%A Kangsheng Pu
%A Zekai Wei
%A Han Wang
%A Liyan Xu
%A Jing Li
%A Yue Yu
%A Jie Zhou
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-yu24n
%I PMLR
%P 57703--57729
%U https://proceedings.mlr.press/v235/yu24n.html
%V 235
%X When reading a story, humans can quickly understand new fictional characters with a few observations, mainly by drawing analogies to fictional and real people they already know. This reflects the few-shot and meta-learning essence of humans’ inference of characters’ mental states, i.e., theory-of-mind (ToM), which is largely ignored in existing research. We fill this gap with a novel NLP dataset in a realistic narrative understanding scenario, ToM-in-AMC. Our dataset consists of $\sim$1,000 parsed movie scripts, each corresponding to a few-shot character understanding task that requires models to mimic humans’ ability of fast digesting characters with a few starting scenes in a new movie. We further propose a novel ToM prompting approach designed to explicitly assess the influence of multiple ToM dimensions. It surpasses existing baseline models, underscoring the significance of modeling multiple ToM dimensions for our task. Our extensive human study verifies that humans are capable of solving our problem by inferring characters’ mental states based on their previously seen movies. In comparison, all the AI systems lag $>20%$ behind humans, highlighting a notable limitation in existing approaches’ ToM capabilities. Code and data are available at https://github.com/ShunchiZhang/ToM-in-AMC
APA
Yu, M., Wang, Q., Zhang, S., Sang, Y., Pu, K., Wei, Z., Wang, H., Xu, L., Li, J., Yu, Y. & Zhou, J. (2024). Few-Shot Character Understanding in Movies as an Assessment to Meta-Learning of Theory-of-Mind. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:57703-57729. Available from https://proceedings.mlr.press/v235/yu24n.html.
