Improving Bias Metrics in Vision-Language Models by Addressing Inherent Model Disabilities
Proceedings of the Algorithmic Fairness Through the Lens of Metrics and Evaluation, PMLR 279:119-132, 2025.
Abstract
The integration of Vision-Language Models (VLMs) into various applications has highlighted the importance of evaluating these models for inherent biases, especially along gender and racial lines. Traditional bias assessment methods in VLMs typically rely on accuracy metrics, assessing disparities in performance across different demographic groups. These methods, however, often overlook the impact of the model's disabilities, such as a lack of spatial reasoning, which may skew the bias assessment. In this work, we propose an approach that systematically examines how current bias evaluation metrics account for the model's limitations. We introduce two methods that circumvent these disabilities by integrating spatial guidance from textual and visual modalities. Our experiments aim to refine bias quantification by effectively mitigating the impact of spatial reasoning limitations, offering a more accurate assessment of biases in VLMs.