Improving Bias Metrics in Vision-Language Models by Addressing Inherent Model Disabilities

Lakshmipathi Balaji Darur, Shanmukha Sai Keerthi Gouravarapu, Shashwat Goel, Ponnurangam Kumaraguru
Proceedings of the Algorithmic Fairness Through the Lens of Metrics and Evaluation, PMLR 279:119-132, 2025.

Abstract

The integration of Vision-Language Models (VLMs) into various applications has highlighted the importance of evaluating these models for inherent biases, especially along gender and racial lines. Traditional bias assessment methods in VLMs typically rely on accuracy metrics, assessing disparities in performance across different demographic groups. These methods, however, often overlook the impact of the model's disabilities, like the lack of spatial reasoning, which may skew the bias assessment. In this work, we propose an approach that systematically examines how current bias evaluation metrics account for the model's limitations. We introduce two methods that circumvent these disabilities by integrating spatial guidance from textual and visual modalities. Our experiments aim to refine bias quantification by effectively mitigating the impact of spatial reasoning limitations, offering a more accurate assessment of biases in VLMs.
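To make the "traditional" evaluation setup concrete, the sketch below illustrates the kind of accuracy-based disparity metric the abstract critiques: per-group accuracy and the gap between demographic groups. This is a minimal, hypothetical illustration (the record format and function names are assumptions, not the paper's code or metric), included only to show why a model failure such as weak spatial reasoning can inflate this gap independently of genuine bias.

from collections import defaultdict

def group_accuracies(records):
    """records: iterable of (group, prediction, label) tuples (hypothetical format)."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, pred, label in records:
        total[group] += 1
        correct[group] += int(pred == label)
    return {g: correct[g] / total[g] for g in total}

def accuracy_disparity(records):
    """Max pairwise accuracy gap across groups, a common bias proxy.
    Errors caused by missing capabilities (e.g., spatial reasoning)
    land in this gap too, which is the confound the paper targets."""
    accs = group_accuracies(records)
    return max(accs.values()) - min(accs.values())

if __name__ == "__main__":
    demo = [("female", "doctor", "doctor"), ("female", "nurse", "doctor"),
            ("male", "doctor", "doctor"), ("male", "doctor", "doctor")]
    print(group_accuracies(demo))    # {'female': 0.5, 'male': 1.0}
    print(accuracy_disparity(demo))  # 0.5

Under this toy data, the disparity of 0.5 could reflect bias, or it could reflect the model simply failing to locate the correct person in the image for one group's examples; disentangling the two is the paper's stated aim.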

Cite this Paper


BibTeX
@InProceedings{pmlr-v279-darur25a,
  title = {Improving Bias Metrics in Vision-Language Models by Addressing Inherent Model Disabilities},
  author = {Darur, Lakshmipathi Balaji and Gouravarapu, Shanmukha Sai Keerthi and Goel, Shashwat and Kumaraguru, Ponnurangam},
  booktitle = {Proceedings of the Algorithmic Fairness Through the Lens of Metrics and Evaluation},
  pages = {119--132},
  year = {2025},
  editor = {Rateike, Miriam and Dieng, Awa and Watson-Daniels, Jamelle and Fioretto, Ferdinando and Farnadi, Golnoosh},
  volume = {279},
  series = {Proceedings of Machine Learning Research},
  month = {14 Dec},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v279/main/assets/darur25a/darur25a.pdf},
  url = {https://proceedings.mlr.press/v279/darur25a.html},
  abstract = {The integration of Vision-Language Models (VLMs) into various applications has highlighted the importance of evaluating these models for inherent biases, especially along gender and racial lines. Traditional bias assessment methods in VLMs typically rely on accuracy metrics, assessing disparities in performance across different demographic groups. These methods, however, often overlook the impact of the model's disabilities, like the lack of spatial reasoning, which may skew the bias assessment. In this work, we propose an approach that systematically examines how current bias evaluation metrics account for the model's limitations. We introduce two methods that circumvent these disabilities by integrating spatial guidance from textual and visual modalities. Our experiments aim to refine bias quantification by effectively mitigating the impact of spatial reasoning limitations, offering a more accurate assessment of biases in VLMs.}
}
Endnote
%0 Conference Paper
%T Improving Bias Metrics in Vision-Language Models by Addressing Inherent Model Disabilities
%A Lakshmipathi Balaji Darur
%A Shanmukha Sai Keerthi Gouravarapu
%A Shashwat Goel
%A Ponnurangam Kumaraguru
%B Proceedings of the Algorithmic Fairness Through the Lens of Metrics and Evaluation
%C Proceedings of Machine Learning Research
%D 2025
%E Miriam Rateike
%E Awa Dieng
%E Jamelle Watson-Daniels
%E Ferdinando Fioretto
%E Golnoosh Farnadi
%F pmlr-v279-darur25a
%I PMLR
%P 119--132
%U https://proceedings.mlr.press/v279/darur25a.html
%V 279
%X The integration of Vision-Language Models (VLMs) into various applications has highlighted the importance of evaluating these models for inherent biases, especially along gender and racial lines. Traditional bias assessment methods in VLMs typically rely on accuracy metrics, assessing disparities in performance across different demographic groups. These methods, however, often overlook the impact of the model's disabilities, like the lack of spatial reasoning, which may skew the bias assessment. In this work, we propose an approach that systematically examines how current bias evaluation metrics account for the model's limitations. We introduce two methods that circumvent these disabilities by integrating spatial guidance from textual and visual modalities. Our experiments aim to refine bias quantification by effectively mitigating the impact of spatial reasoning limitations, offering a more accurate assessment of biases in VLMs.
APA
Darur, L.B., Gouravarapu, S.S.K., Goel, S. & Kumaraguru, P. (2025). Improving Bias Metrics in Vision-Language Models by Addressing Inherent Model Disabilities. Proceedings of the Algorithmic Fairness Through the Lens of Metrics and Evaluation, in Proceedings of Machine Learning Research 279:119-132. Available from https://proceedings.mlr.press/v279/darur25a.html.