Analyzing Fairness of Neural Network Prediction via Counterfactual Dataset Generation

Brian Hyeongseok Kim, Jacqueline Mitchell, Chao Wang
Proceedings of the 7th Northern Lights Deep Learning Conference (NLDL), PMLR 307:247-262, 2026.

Abstract

Interpreting the inference-time behavior of deep neural networks remains a challenging problem. Existing approaches to counterfactual explanation typically ask: What is the closest alternative $\textit{input}$ that would alter the model’s prediction in a desired way? In contrast, we explore $\textbf{counterfactual datasets}$. Rather than perturbing the input, our method efficiently finds the closest alternative $\textit{training dataset}$, one that differs from the original dataset by changing a few labels. Training a new model on this altered dataset can then lead to a different prediction of a given test instance. This perspective provides a new way to assess fairness by directly analyzing the influence of label bias on training and inference. Our approach can be characterized as probing whether a given prediction depends on biased labels. Since exhaustively enumerating all possible alternate datasets is infeasible, we develop analysis techniques that trace how bias in the training data may propagate through the learning algorithm to the trained network. Our method heuristically ranks and modifies the labels of a bounded number of training examples to construct a counterfactual dataset, retrains the model, and checks whether its prediction on a chosen test case changes. We evaluate our approach on feedforward neural networks across over 1100 test cases from 7 widely-used fairness datasets. Results show that it modifies only a small subset of training labels, highlighting its ability to pinpoint the critical training examples that drive prediction changes. Finally, we demonstrate how counterfactual training datasets reveal connections between training examples and test cases, offering an interpretable way to probe dataset bias.
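As a concrete illustration of the workflow the abstract describes (rank training examples, flip a bounded number of labels, retrain, and re-check the prediction on a chosen test case), the sketch below gives one possible rendering in Python. It is not the authors' implementation: the ranking heuristic here is a simple stand-in (feature-space distance rather than the paper's analysis of how label bias propagates through training), labels are assumed binary, and every name (train_model, influence_scores, counterfactual_dataset, budget) is hypothetical.

```python
# Minimal sketch of the counterfactual-dataset idea, under the assumptions
# stated above. Not the authors' implementation.
import numpy as np
from sklearn.neural_network import MLPClassifier


def train_model(X, y, seed=0):
    """Train a small feedforward network on (X, y)."""
    model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=seed)
    model.fit(X, y)
    return model


def influence_scores(X_train, x_test):
    """Hypothetical ranking heuristic: score training examples by feature-space
    closeness to the test instance (closer examples rank higher)."""
    dists = np.linalg.norm(X_train - x_test, axis=1)
    return -dists


def counterfactual_dataset(X_train, y_train, x_test, budget=10):
    """Greedily flip the labels of the top-ranked training examples, one at a
    time up to `budget`, retraining after each flip and checking whether the
    prediction on x_test changes."""
    base = train_model(X_train, y_train)
    original_pred = base.predict(x_test.reshape(1, -1))[0]

    order = np.argsort(-influence_scores(X_train, x_test))
    y_cf = y_train.copy()
    for k, idx in enumerate(order[:budget], start=1):
        y_cf[idx] = 1 - y_cf[idx]  # flip a binary (0/1) label
        retrained = train_model(X_train, y_cf)
        new_pred = retrained.predict(x_test.reshape(1, -1))[0]
        if new_pred != original_pred:
            return y_cf, k  # counterfactual dataset found after k label flips
    return None, budget  # no prediction change within the budget


if __name__ == "__main__":
    # Toy usage on synthetic data, purely for illustration.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = (X[:, 0] > 0).astype(int)
    y_cf, flips = counterfactual_dataset(X, y, x_test=X[0], budget=10)
    if y_cf is not None:
        print(f"Prediction flipped after changing {flips} training labels.")
    else:
        print(f"No prediction change within a budget of {flips} label flips.")
```

The greedy one-flip-at-a-time loop is a simplification for readability; the point it conveys is only that a small, bounded set of relabeled training examples can be enough to change a single test prediction once the model is retrained.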

Cite this Paper


BibTeX
@InProceedings{pmlr-v307-kim26a,
  title     = {Analyzing Fairness of Neural Network Prediction via Counterfactual Dataset Generation},
  author    = {Kim, Brian Hyeongseok and Mitchell, Jacqueline and Wang, Chao},
  booktitle = {Proceedings of the 7th Northern Lights Deep Learning Conference (NLDL)},
  pages     = {247--262},
  year      = {2026},
  editor    = {Kim, Hyeongji and Ramírez Rivera, Adín and Ricaud, Benjamin},
  volume    = {307},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--08 Jan},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v307/main/assets/kim26a/kim26a.pdf},
  url       = {https://proceedings.mlr.press/v307/kim26a.html}
}
APA
Kim, B.H., Mitchell, J. & Wang, C. (2026). Analyzing Fairness of Neural Network Prediction via Counterfactual Dataset Generation. Proceedings of the 7th Northern Lights Deep Learning Conference (NLDL), in Proceedings of Machine Learning Research 307:247-262. Available from https://proceedings.mlr.press/v307/kim26a.html.
