CRAFT: A Neuro-Symbolic Framework for Visual Functional Affordance Grounding
Proceedings of The 19th International Conference on Neurosymbolic Learning and Reasoning, PMLR 284:343-352, 2025.
Abstract
We introduce CRAFT, a neuro-symbolic framework for interpretable affordance grounding, which identifies the objects in a scene that enable a given action (e.g., “cut”). CRAFT integrates structured commonsense priors from ConceptNet and language models with visual evidence from CLIP, using an energy-based reasoning loop to refine predictions iteratively. This process yields transparent, goal-driven decisions that ground symbolic knowledge in perceptual evidence. Experiments in multi-object, label-free settings demonstrate that CRAFT improves both accuracy and interpretability, providing a step toward robust and trustworthy scene understanding.
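The combination the abstract describes — symbolic affordance priors fused with visual similarity scores, refined through an energy-based loop — can be sketched minimally as follows. This is an illustrative toy, not CRAFT's actual implementation: the scores, the `energy` function, the temperature-annealing refinement, and all object names are assumptions standing in for ConceptNet/LLM priors and CLIP image-text similarities.

```python
import math

# Hypothetical stand-ins (assumptions, not CRAFT's real values):
# commonsense relevance of each object to the action "cut" (ConceptNet-style prior)
symbolic_prior = {"knife": 0.9, "scissors": 0.8, "cup": 0.1}
# image-text similarity of each detected object in the scene (CLIP-style evidence)
visual_evidence = {"knife": 0.7, "scissors": 0.3, "cup": 0.6}

def energy(obj, w_sym=1.0, w_vis=1.0):
    # Lower energy = stronger combined support that the object affords the action.
    return -(w_sym * symbolic_prior[obj] + w_vis * visual_evidence[obj])

def ground(objects, steps=5, anneal=0.75):
    # Iteratively refine a softmax belief over candidate objects by lowering the
    # temperature, so mass concentrates on the lowest-energy (best-supported) object.
    temp = 1.0
    probs = {}
    for _ in range(steps):
        logits = [-energy(o) / temp for o in objects]
        m = max(logits)
        exps = [math.exp(l - m) for l in logits]
        z = sum(exps)
        probs = dict(zip(objects, (e / z for e in exps)))
        temp *= anneal  # sharpen beliefs on each refinement pass
    return probs

beliefs = ground(["knife", "scissors", "cup"])
best = max(beliefs, key=beliefs.get)  # object with strongest symbolic + visual support
```

Here the refinement loop only anneals a temperature; the paper's energy-based reasoning is richer, but the sketch captures the core idea of scoring each candidate by joint symbolic and perceptual support and returning a distribution rather than an opaque label.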