ODD: Overlap-aware Estimation of Model Performance under Distribution Shift

Aayush Mishra, Anqi Liu
Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, PMLR 286:3031-3047, 2025.

Abstract

Reliable and accurate estimation of the error of an ML model in unseen test domains is an important problem for safe intelligent systems. Prior work uses \textit{disagreement discrepancy} (\disdis) to derive practical error bounds under distribution shifts. It optimizes for a maximally disagreeing classifier on the target domain to bound the error of a given source classifier. Although this approach offers a reliable and competitively accurate estimate of the target error, we identify a problem in this approach which causes the disagreement discrepancy objective to compete in the overlapping region between source and target domains. With an intuitive assumption that the target disagreement should be no more than the source disagreement in the overlapping region due to high enough support, we devise Overlap-aware Disagreement Discrepancy (\odd). Our \odd-based bound uses domain-classifiers to estimate domain-overlap and better predicts target performance than \disdis. We conduct experiments on a wide array of benchmarks to show that our method improves the overall performance-estimation error while remaining valid and reliable. Our code and results are available on \href{https://github.com/aamixsh/odd}{GitHub}.
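The domain-classifier idea from the abstract can be illustrated with a toy sketch. This is an illustrative assumption, not the paper's actual algorithm (see the linked GitHub repository for that): a logistic-regression domain classifier is trained to distinguish source from target samples, and points it cannot confidently assign to either domain are treated as the overlap region. The distributions, the uncertainty threshold of 0.25–0.75, and all other specifics below are hypothetical choices for demonstration only.

```python
# Toy sketch (NOT the paper's algorithm): estimating the overlap region
# between a "source" and a "target" sample with a domain classifier.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D features: source ~ N(0,1), target ~ N(2,1) -> partial overlap.
source = rng.normal(0.0, 1.0, size=(500, 1))
target = rng.normal(2.0, 1.0, size=(500, 1))

X = np.vstack([source, target])
y = np.concatenate([np.zeros(500), np.ones(500)])  # 0 = source, 1 = target

# Logistic-regression domain classifier, trained by plain gradient descent.
w, b = np.zeros(1), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # P(domain = target | x)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

# Points the classifier cannot confidently assign to either domain are
# treated here as lying in the overlap region (threshold is arbitrary).
p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
overlap = (p > 0.25) & (p < 0.75)
print(f"estimated overlap fraction: {overlap.mean():.2f}")
```

With these two Gaussians, roughly a quarter of the pooled points fall in the uncertain band, matching the intuition that the distributions partially overlap around their midpoint.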

Cite this Paper


BibTeX
@InProceedings{pmlr-v286-mishra25a,
  title     = {ODD: Overlap-aware Estimation of Model Performance under Distribution Shift},
  author    = {Mishra, Aayush and Liu, Anqi},
  booktitle = {Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence},
  pages     = {3031--3047},
  year      = {2025},
  editor    = {Chiappa, Silvia and Magliacane, Sara},
  volume    = {286},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--25 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v286/main/assets/mishra25a/mishra25a.pdf},
  url       = {https://proceedings.mlr.press/v286/mishra25a.html},
  abstract  = {Reliable and accurate estimation of the error of an ML model in unseen test domains is an important problem for safe intelligent systems. Prior work uses \textit{disagreement discrepancy} (\disdis) to derive practical error bounds under distribution shifts. It optimizes for a maximally disagreeing classifier on the target domain to bound the error of a given source classifier. Although this approach offers a reliable and competitively accurate estimate of the target error, we identify a problem in this approach which causes the disagreement discrepancy objective to compete in the overlapping region between source and target domains. With an intuitive assumption that the target disagreement should be no more than the source disagreement in the overlapping region due to high enough support, we devise Overlap-aware Disagreement Discrepancy (\odd). Our \odd-based bound uses domain-classifiers to estimate domain-overlap and better predicts target performance than \disdis. We conduct experiments on a wide array of benchmarks to show that our method improves the overall performance-estimation error while remaining valid and reliable. Our code and results are available on \href{https://github.com/aamixsh/odd}{GitHub}.}
}
Endnote
%0 Conference Paper
%T ODD: Overlap-aware Estimation of Model Performance under Distribution Shift
%A Aayush Mishra
%A Anqi Liu
%B Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2025
%E Silvia Chiappa
%E Sara Magliacane
%F pmlr-v286-mishra25a
%I PMLR
%P 3031--3047
%U https://proceedings.mlr.press/v286/mishra25a.html
%V 286
%X Reliable and accurate estimation of the error of an ML model in unseen test domains is an important problem for safe intelligent systems. Prior work uses \textit{disagreement discrepancy} (\disdis) to derive practical error bounds under distribution shifts. It optimizes for a maximally disagreeing classifier on the target domain to bound the error of a given source classifier. Although this approach offers a reliable and competitively accurate estimate of the target error, we identify a problem in this approach which causes the disagreement discrepancy objective to compete in the overlapping region between source and target domains. With an intuitive assumption that the target disagreement should be no more than the source disagreement in the overlapping region due to high enough support, we devise Overlap-aware Disagreement Discrepancy (\odd). Our \odd-based bound uses domain-classifiers to estimate domain-overlap and better predicts target performance than \disdis. We conduct experiments on a wide array of benchmarks to show that our method improves the overall performance-estimation error while remaining valid and reliable. Our code and results are available on \href{https://github.com/aamixsh/odd}{GitHub}.
APA
Mishra, A. & Liu, A. (2025). ODD: Overlap-aware Estimation of Model Performance under Distribution Shift. Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 286:3031-3047. Available from https://proceedings.mlr.press/v286/mishra25a.html.