Neurosymbolic Tag-Based Annotation for Interpretable Avatar Creation

Minghao Liu, Zeyu Cheng, Shen Sang, Jing Liu, James Davis
Proceedings of The 19th International Conference on Neurosymbolic Learning and Reasoning, PMLR 284:589-624, 2025.

Abstract

Avatar creation from human images presents challenges for direct neural approaches, which suffer from inconsistent predictions and poor interpretability due to the large parameter space with hundreds of ambiguous options. We propose a neurosymbolic tag-based annotation method that combines neural perceptual learning with symbolic semantic reasoning. Instead of directly predicting avatar parameters, our approach uses a neural network to predict semantic tags (hair length, curliness, direction) as an intermediate symbolic representation, then applies symbolic search algorithms to match optimal avatar assets. This neurosymbolic design produces higher annotator agreement (96.7% vs. 31.0% for direct annotation), enables more consistent model predictions, and provides interpretable avatar selection with ranked alternatives. The tag-based system generalizes easily across rendering systems, requiring only new asset annotation while reusing human image tags. Experimental results demonstrate superior convergence, consistency, and visual quality compared to direct prediction methods, showing how neurosymbolic approaches can improve trustworthiness and interpretability in creative AI applications.
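The tag-then-match pipeline described in the abstract can be sketched as follows. This is an illustrative assumption, not the paper's actual implementation: the tag names, the asset catalog, and the simple match-count ranking are all hypothetical stand-ins for the learned tag predictor and the symbolic search the authors describe.

```python
def rank_assets(predicted_tags, catalog):
    """Symbolically rank avatar assets by how many semantic tags
    they share with the tags predicted from a human image.

    predicted_tags: dict of tag name -> value (stand-in for the
                    neural network's output, e.g. hair attributes)
    catalog:        dict of asset id -> dict of annotated tags
    Returns asset ids ordered from best to worst match, giving the
    interpretable "ranked alternatives" the abstract mentions.
    """
    scored = []
    for asset_id, asset_tags in catalog.items():
        # Score = number of tags on which the asset agrees with the prediction.
        score = sum(predicted_tags.get(k) == v for k, v in asset_tags.items())
        scored.append((score, asset_id))
    scored.sort(reverse=True)
    return [asset_id for _, asset_id in scored]


# Hypothetical prediction for one image and a tiny annotated asset set.
predicted = {"length": "short", "curliness": "straight", "direction": "side"}
catalog = {
    "hair_01": {"length": "short", "curliness": "straight", "direction": "side"},
    "hair_02": {"length": "long", "curliness": "curly", "direction": "back"},
    "hair_03": {"length": "short", "curliness": "curly", "direction": "side"},
}
ranking = rank_assets(predicted, catalog)
print(ranking)  # best match first: ['hair_01', 'hair_03', 'hair_02']
```

Because the intermediate representation is a set of human-readable tags, swapping in a new rendering system only requires annotating its assets with the same tag vocabulary; the image-side tags (and this matching step) are reused unchanged.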

Cite this Paper


BibTeX
@InProceedings{pmlr-v284-liu25a,
  title     = {Neurosymbolic Tag-Based Annotation for Interpretable Avatar Creation},
  author    = {Liu, Minghao and Cheng, Zeyu and Sang, Shen and Liu, Jing and Davis, James},
  booktitle = {Proceedings of The 19th International Conference on Neurosymbolic Learning and Reasoning},
  pages     = {589--624},
  year      = {2025},
  editor    = {H. Gilpin, Leilani and Giunchiglia, Eleonora and Hitzler, Pascal and van Krieken, Emile},
  volume    = {284},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--10 Sep},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v284/main/assets/liu25a/liu25a.pdf},
  url       = {https://proceedings.mlr.press/v284/liu25a.html},
  abstract  = {Avatar creation from human images presents challenges for direct neural approaches, which suffer from inconsistent predictions and poor interpretability due to the large parameter space with hundreds of ambiguous options. We propose a neurosymbolic tag-based annotation method that combines neural perceptual learning with symbolic semantic reasoning. Instead of directly predicting avatar parameters, our approach uses a neural network to predict semantic tags (hair length, curliness, direction) as an intermediate symbolic representation, then applies symbolic search algorithms to match optimal avatar assets. This neurosymbolic design produces higher annotator agreements (96.7% vs 31.0% for direct annotation), enables more consistent model predictions, and provides interpretable avatar selection with ranked alternatives. The tag-based system generalizes easily across rendering systems, requiring only new asset annotation while reusing human image tags. Experimental results demonstrate superior convergence, consistency, and visual quality compared to direct prediction methods, showing how neurosymbolic approaches can improve trustworthiness and interpretability in creative AI applications.}
}
Endnote
%0 Conference Paper
%T Neurosymbolic Tag-Based Annotation for Interpretable Avatar Creation
%A Minghao Liu
%A Zeyu Cheng
%A Shen Sang
%A Jing Liu
%A James Davis
%B Proceedings of The 19th International Conference on Neurosymbolic Learning and Reasoning
%C Proceedings of Machine Learning Research
%D 2025
%E Leilani H. Gilpin
%E Eleonora Giunchiglia
%E Pascal Hitzler
%E Emile van Krieken
%F pmlr-v284-liu25a
%I PMLR
%P 589--624
%U https://proceedings.mlr.press/v284/liu25a.html
%V 284
%X Avatar creation from human images presents challenges for direct neural approaches, which suffer from inconsistent predictions and poor interpretability due to the large parameter space with hundreds of ambiguous options. We propose a neurosymbolic tag-based annotation method that combines neural perceptual learning with symbolic semantic reasoning. Instead of directly predicting avatar parameters, our approach uses a neural network to predict semantic tags (hair length, curliness, direction) as an intermediate symbolic representation, then applies symbolic search algorithms to match optimal avatar assets. This neurosymbolic design produces higher annotator agreements (96.7% vs 31.0% for direct annotation), enables more consistent model predictions, and provides interpretable avatar selection with ranked alternatives. The tag-based system generalizes easily across rendering systems, requiring only new asset annotation while reusing human image tags. Experimental results demonstrate superior convergence, consistency, and visual quality compared to direct prediction methods, showing how neurosymbolic approaches can improve trustworthiness and interpretability in creative AI applications.
APA
Liu, M., Cheng, Z., Sang, S., Liu, J. & Davis, J. (2025). Neurosymbolic Tag-Based Annotation for Interpretable Avatar Creation. Proceedings of The 19th International Conference on Neurosymbolic Learning and Reasoning, in Proceedings of Machine Learning Research 284:589-624. Available from https://proceedings.mlr.press/v284/liu25a.html.

Related Material