Robust Yet Efficient Conformal Prediction Sets

Soroush H. Zargarbashi, Mohammad Sadegh Akhondzadeh, Aleksandar Bojchevski
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:17123-17147, 2024.

Abstract

Conformal prediction (CP) can convert any model’s output into prediction sets guaranteed to include the true label with any user-specified probability. However, like the underlying model itself, CP is vulnerable to adversarial test examples (evasion) and perturbed calibration data (poisoning). We derive provably robust sets by bounding the worst-case change in conformity scores. Our tighter bounds lead to more efficient sets. We cover both continuous and discrete (sparse) data, and our guarantees hold for both evasion and poisoning attacks (on both features and labels).
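For readers unfamiliar with the baseline the paper hardens, a minimal sketch of vanilla split conformal prediction (not the paper's robust construction) looks like the following. It assumes nonconformity scores where lower means a better fit; the function name and score convention are illustrative, not from the paper.

```python
import numpy as np

def conformal_sets(cal_scores, test_scores, alpha=0.1):
    """Vanilla split conformal prediction (illustrative sketch only).

    cal_scores:  shape (n,)    nonconformity of the true label on calibration points
    test_scores: shape (m, K)  nonconformity of each candidate label per test point
    Returns a boolean (m, K) mask: True where a label enters the prediction set.
    """
    n = len(cal_scores)
    # Finite-sample-corrected quantile level; yields >= 1 - alpha marginal coverage
    level = np.ceil((n + 1) * (1 - alpha)) / n
    q_hat = np.quantile(cal_scores, min(level, 1.0), method="higher")
    # A label is included whenever its score does not exceed the calibration quantile
    return test_scores <= q_hat

# Toy usage with uniform scores: labels scoring below ~0.9 enter the set at alpha=0.1
rng = np.random.default_rng(0)
mask = conformal_sets(rng.uniform(size=1000), rng.uniform(size=(5, 10)), alpha=0.1)
```

The paper's contribution, per the abstract, is to make the threshold step robust by bounding the worst-case change an adversary can induce in these scores, so the coverage guarantee survives evasion and poisoning.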

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-h-zargarbashi24a,
  title     = {Robust Yet Efficient Conformal Prediction Sets},
  author    = {H. Zargarbashi, Soroush and Akhondzadeh, Mohammad Sadegh and Bojchevski, Aleksandar},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {17123--17147},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/h-zargarbashi24a/h-zargarbashi24a.pdf},
  url       = {https://proceedings.mlr.press/v235/h-zargarbashi24a.html},
  abstract  = {Conformal prediction (CP) can convert any model’s output into prediction sets guaranteed to include the true label with any user-specified probability. However, same as the model itself, CP is vulnerable to adversarial test examples (evasion) and perturbed calibration data (poisoning). We derive provably robust sets by bounding the worst-case change in conformity scores. Our tighter bounds lead to more efficient sets. We cover both continuous and discrete (sparse) data and our guarantees work both for evasion and poisoning attacks (on both features and labels).}
}
Endnote
%0 Conference Paper
%T Robust Yet Efficient Conformal Prediction Sets
%A Soroush H. Zargarbashi
%A Mohammad Sadegh Akhondzadeh
%A Aleksandar Bojchevski
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-h-zargarbashi24a
%I PMLR
%P 17123--17147
%U https://proceedings.mlr.press/v235/h-zargarbashi24a.html
%V 235
%X Conformal prediction (CP) can convert any model’s output into prediction sets guaranteed to include the true label with any user-specified probability. However, same as the model itself, CP is vulnerable to adversarial test examples (evasion) and perturbed calibration data (poisoning). We derive provably robust sets by bounding the worst-case change in conformity scores. Our tighter bounds lead to more efficient sets. We cover both continuous and discrete (sparse) data and our guarantees work both for evasion and poisoning attacks (on both features and labels).
APA
H. Zargarbashi, S., Akhondzadeh, M. S., & Bojchevski, A. (2024). Robust Yet Efficient Conformal Prediction Sets. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:17123-17147. Available from https://proceedings.mlr.press/v235/h-zargarbashi24a.html.