Facial Composite Generation with Iterative Human Feedback

Florian Strohm, Ekta Sood, Dominike Thomas, Mihai Bâce, Andreas Bulling
Proceedings of The 1st Gaze Meets ML workshop, PMLR 210:165-183, 2023.

Abstract

We propose the first method in which human and AI collaborate to iteratively reconstruct the human’s mental image of another person’s face only from their eye gaze. Current tools for generating digital human faces involve a tedious and time-consuming manual design process. While gaze-based mental image reconstruction represents a promising alternative, previous methods still assumed prior knowledge about the target face, thereby severely limiting their practical usefulness. The key novelty of our method is a collaborative, iterative query engine: Based on the user’s gaze behaviour in each iteration, our method predicts which images to show to the user in the next iteration. Results from two human studies (N=12 and N=22) show that our method can visually reconstruct digital faces that are more similar to the mental image, and is more usable compared to other methods. As such, our findings point at the significant potential of human-AI collaboration for reconstructing mental images, potentially also beyond faces, and of human gaze as a rich source of information and a powerful mediator in said collaboration.

Cite this Paper


BibTeX
@InProceedings{pmlr-v210-strohm23a,
  title     = {Facial Composite Generation with Iterative Human Feedback},
  author    = {Strohm, Florian and Sood, Ekta and Thomas, Dominike and B{\^a}ce, Mihai and Bulling, Andreas},
  booktitle = {Proceedings of The 1st Gaze Meets ML workshop},
  pages     = {165--183},
  year      = {2023},
  editor    = {Lourentzou, Ismini and Wu, Joy and Kashyap, Satyananda and Karargyris, Alexandros and Celi, Leo Anthony and Kawas, Ban and Talathi, Sachin},
  volume    = {210},
  series    = {Proceedings of Machine Learning Research},
  month     = {03 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v210/strohm23a/strohm23a.pdf},
  url       = {https://proceedings.mlr.press/v210/strohm23a.html},
  abstract  = {We propose the first method in which human and AI collaborate to iteratively reconstruct the human’s mental image of another person’s face only from their eye gaze. Current tools for generating digital human faces involve a tedious and time-consuming manual design process. While gaze-based mental image reconstruction represents a promising alternative, previous methods still assumed prior knowledge about the target face, thereby severely limiting their practical usefulness. The key novelty of our method is a collaborative, iterative query engine: Based on the user’s gaze behaviour in each iteration, our method predicts which images to show to the user in the next iteration. Results from two human studies (N=12 and N=22) show that our method can visually reconstruct digital faces that are more similar to the mental image, and is more usable compared to other methods. As such, our findings point at the significant potential of human-AI collaboration for reconstructing mental images, potentially also beyond faces, and of human gaze as a rich source of information and a powerful mediator in said collaboration.}
}
Endnote
%0 Conference Paper %T Facial Composite Generation with Iterative Human Feedback %A Florian Strohm %A Ekta Sood %A Dominike Thomas %A Mihai Bâce %A Andreas Bulling %B Proceedings of The 1st Gaze Meets ML workshop %C Proceedings of Machine Learning Research %D 2023 %E Ismini Lourentzou %E Joy Wu %E Satyananda Kashyap %E Alexandros Karargyris %E Leo Anthony Celi %E Ban Kawas %E Sachin Talathi %F pmlr-v210-strohm23a %I PMLR %P 165--183 %U https://proceedings.mlr.press/v210/strohm23a.html %V 210 %X We propose the first method in which human and AI collaborate to iteratively reconstruct the human’s mental image of another person’s face only from their eye gaze. Current tools for generating digital human faces involve a tedious and time-consuming manual design process. While gaze-based mental image reconstruction represents a promising alternative, previous methods still assumed prior knowledge about the target face, thereby severely limiting their practical usefulness. The key novelty of our method is a collaborative, iterative query engine: Based on the user’s gaze behaviour in each iteration, our method predicts which images to show to the user in the next iteration. Results from two human studies (N=12 and N=22) show that our method can visually reconstruct digital faces that are more similar to the mental image, and is more usable compared to other methods. As such, our findings point at the significant potential of human-AI collaboration for reconstructing mental images, potentially also beyond faces, and of human gaze as a rich source of information and a powerful mediator in said collaboration.
APA
Strohm, F., Sood, E., Thomas, D., Bâce, M. & Bulling, A. (2023). Facial Composite Generation with Iterative Human Feedback. Proceedings of The 1st Gaze Meets ML workshop, in Proceedings of Machine Learning Research 210:165-183. Available from https://proceedings.mlr.press/v210/strohm23a.html.