Position: Explainable AI Cannot Advance Without Better User Studies

Matej Pičulin, Bernarda Petek, Irena Ograjenšek, Erik Strumbelj
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:81977-81994, 2025.

Abstract

In this position paper, we argue that user studies are key to understanding the value of explainable AI methods, because the end goal of explainable AI is to satisfy societal desiderata. We also argue that the current state of user studies is detrimental to the advancement of the field. We support this argument with a review of general and explainable AI-specific challenges, as well as an analysis of 607 explainable AI papers featuring user studies. We demonstrate how most user studies lack reproducibility, discussion of limitations, comparison with a baseline, or placebo explanations, and are of low fidelity to real-world users and application context. This, combined with an overreliance on functional evaluation, results in a lack of understanding of the value of explainable AI methods, which hinders the progress of the field. To address this issue, we call for higher methodological standards for user studies, greater appreciation of high-quality user studies in the AI community, and reduced reliance on functional evaluation.

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-piculin25a,
  title     = {Position: Explainable {AI} Cannot Advance Without Better User Studies},
  author    = {Pi\v{c}ulin, Matej and Petek, Bernarda and Ograjen\v{s}ek, Irena and Strumbelj, Erik},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {81977--81994},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/piculin25a/piculin25a.pdf},
  url       = {https://proceedings.mlr.press/v267/piculin25a.html},
  abstract  = {In this position paper, we argue that user studies are key to understanding the value of explainable AI methods, because the end goal of explainable AI is to satisfy societal desiderata. We also argue that the current state of user studies is detrimental to the advancement of the field. We support this argument with a review of general and explainable AI-specific challenges, as well as an analysis of 607 explainable AI papers featuring user studies. We demonstrate how most user studies lack reproducibility, discussion of limitations, comparison with a baseline, or placebo explanations, and are of low fidelity to real-world users and application context. This, combined with an overreliance on functional evaluation, results in a lack of understanding of the value of explainable AI methods, which hinders the progress of the field. To address this issue, we call for higher methodological standards for user studies, greater appreciation of high-quality user studies in the AI community, and reduced reliance on functional evaluation.}
}
Endnote
%0 Conference Paper
%T Position: Explainable AI Cannot Advance Without Better User Studies
%A Matej Pičulin
%A Bernarda Petek
%A Irena Ograjenšek
%A Erik Strumbelj
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-piculin25a
%I PMLR
%P 81977--81994
%U https://proceedings.mlr.press/v267/piculin25a.html
%V 267
%X In this position paper, we argue that user studies are key to understanding the value of explainable AI methods, because the end goal of explainable AI is to satisfy societal desiderata. We also argue that the current state of user studies is detrimental to the advancement of the field. We support this argument with a review of general and explainable AI-specific challenges, as well as an analysis of 607 explainable AI papers featuring user studies. We demonstrate how most user studies lack reproducibility, discussion of limitations, comparison with a baseline, or placebo explanations, and are of low fidelity to real-world users and application context. This, combined with an overreliance on functional evaluation, results in a lack of understanding of the value of explainable AI methods, which hinders the progress of the field. To address this issue, we call for higher methodological standards for user studies, greater appreciation of high-quality user studies in the AI community, and reduced reliance on functional evaluation.
APA
Pičulin, M., Petek, B., Ograjenšek, I. & Strumbelj, E. (2025). Position: Explainable AI Cannot Advance Without Better User Studies. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:81977-81994. Available from https://proceedings.mlr.press/v267/piculin25a.html.
