Robust ML Auditing using Prior Knowledge

Jade Garcia Bourrée, Augustin Godinot, Sayan Biswas, Anne-Marie Kermarrec, Erwan Le Merrer, Gilles Tredan, Martijn De Vos, Milos Vujasinovic
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:18794-18810, 2025.

Abstract

Among the many technical challenges to enforcing AI regulations, one crucial yet underexplored problem is the risk of audit manipulation. This manipulation occurs when a platform deliberately alters its answers to a regulator to pass an audit without modifying its answers to other users. In this paper, we introduce a novel approach to manipulation-proof auditing by taking into account the auditor’s prior knowledge of the task solved by the platform. We first demonstrate that regulators must not rely on public priors (e.g. a public dataset), as platforms could easily fool the auditor in such cases. We then formally establish the conditions under which an auditor can prevent audit manipulations using prior knowledge about the ground truth. Finally, our experiments with two standard datasets illustrate the maximum level of unfairness a platform can hide before being detected as malicious. Our formalization and generalization of manipulation-proof auditing with a prior opens up new research directions for more robust fairness audits.
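As a loose illustration of why public priors are insufficient (this sketch is not from the paper; the class name, models, and query sets below are hypothetical), a platform that can recognize audit queries drawn from a public dataset can simply answer them with a sanitized model while serving a different model to everyone else:

```python
# Hypothetical illustration of the audit-manipulation threat model described
# in the abstract. A platform recognizes queries from a public audit set and
# answers them with a sanitized model, while ordinary users keep receiving
# the production model's answers.

class ManipulativePlatform:
    def __init__(self, production_model, sanitized_model, public_audit_queries):
        self.production_model = production_model        # what regular users get
        self.sanitized_model = sanitized_model           # what the auditor is shown
        self.public_audit_queries = set(public_audit_queries)

    def answer(self, query):
        # If the query belongs to the public audit set, the platform can
        # detect it and respond with the sanitized model instead.
        if query in self.public_audit_queries:
            return self.sanitized_model(query)
        return self.production_model(query)


if __name__ == "__main__":
    public_audit_queries = [0, 1, 2]        # e.g. a publicly known benchmark
    production = lambda x: 0                # always denies (unfair in production)
    sanitized = lambda x: x % 2             # looks balanced to the auditor
    platform = ManipulativePlatform(production, sanitized, public_audit_queries)

    print([platform.answer(q) for q in public_audit_queries])  # auditor's view
    print([platform.answer(q) for q in [10, 11, 12]])          # users' view
```

The paper's contribution is to characterize when an auditor's prior knowledge about the ground truth makes this kind of behavior detectable, rather than relying on such publicly known audit queries.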

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-garcia-bourree25a,
  title     = {Robust {ML} Auditing using Prior Knowledge},
  author    = {Garcia Bourr\'{e}e, Jade and Godinot, Augustin and Biswas, Sayan and Kermarrec, Anne-Marie and Le Merrer, Erwan and Tredan, Gilles and De Vos, Martijn and Vujasinovic, Milos},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {18794--18810},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/garcia-bourree25a/garcia-bourree25a.pdf},
  url       = {https://proceedings.mlr.press/v267/garcia-bourree25a.html},
  abstract  = {Among the many technical challenges to enforcing AI regulations, one crucial yet underexplored problem is the risk of audit manipulation. This manipulation occurs when a platform deliberately alters its answers to a regulator to pass an audit without modifying its answers to other users. In this paper, we introduce a novel approach to manipulation-proof auditing by taking into account the auditor's prior knowledge of the task solved by the platform. We first demonstrate that regulators must not rely on public priors (e.g. a public dataset), as platforms could easily fool the auditor in such cases. We then formally establish the conditions under which an auditor can prevent audit manipulations using prior knowledge about the ground truth. Finally, our experiments with two standard datasets illustrate the maximum level of unfairness a platform can hide before being detected as malicious. Our formalization and generalization of manipulation-proof auditing with a prior opens up new research directions for more robust fairness audits.}
}
Endnote
%0 Conference Paper
%T Robust ML Auditing using Prior Knowledge
%A Jade Garcia Bourrée
%A Augustin Godinot
%A Sayan Biswas
%A Anne-Marie Kermarrec
%A Erwan Le Merrer
%A Gilles Tredan
%A Martijn De Vos
%A Milos Vujasinovic
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-garcia-bourree25a
%I PMLR
%P 18794--18810
%U https://proceedings.mlr.press/v267/garcia-bourree25a.html
%V 267
%X Among the many technical challenges to enforcing AI regulations, one crucial yet underexplored problem is the risk of audit manipulation. This manipulation occurs when a platform deliberately alters its answers to a regulator to pass an audit without modifying its answers to other users. In this paper, we introduce a novel approach to manipulation-proof auditing by taking into account the auditor's prior knowledge of the task solved by the platform. We first demonstrate that regulators must not rely on public priors (e.g. a public dataset), as platforms could easily fool the auditor in such cases. We then formally establish the conditions under which an auditor can prevent audit manipulations using prior knowledge about the ground truth. Finally, our experiments with two standard datasets illustrate the maximum level of unfairness a platform can hide before being detected as malicious. Our formalization and generalization of manipulation-proof auditing with a prior opens up new research directions for more robust fairness audits.
APA
Garcia Bourrée, J., Godinot, A., Biswas, S., Kermarrec, A., Le Merrer, E., Tredan, G., De Vos, M. & Vujasinovic, M. (2025). Robust ML Auditing using Prior Knowledge. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:18794-18810. Available from https://proceedings.mlr.press/v267/garcia-bourree25a.html.