Exact Optimization of Conformal Predictors via Incremental and Decremental Learning

Giovanni Cherubin, Konstantinos Chatzikokolakis, Martin Jaggi
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:1836-1845, 2021.

Abstract

Conformal Predictors (CP) are wrappers around ML models, providing error guarantees under weak assumptions on the data distribution. They are suitable for a wide range of problems, from classification and regression to anomaly detection. Unfortunately, their very high computational complexity limits their applicability to large datasets. In this work, we show that it is possible to speed up a CP classifier considerably by studying it in conjunction with the underlying ML method, and by exploiting incremental and decremental learning. For methods such as k-NN, KDE, and kernel LS-SVM, our approach reduces the running time by one order of magnitude, whilst producing exact solutions. With similar ideas, we also achieve a linear speedup for the harder case of bootstrapping. Finally, we extend these techniques to improve upon an optimization of k-NN CP for regression. We evaluate our findings empirically, and discuss when methods are suitable for CP optimization.
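The abstract describes the method only at a high level. As background, the following minimal Python sketch shows the naive full (transductive) CP classifier whose cost the paper attacks: each candidate label forces all n+1 nonconformity scores to be recomputed, which amounts to retraining the underlying model once per label. The sum-of-distances k-NN nonconformity measure and all names here (knn_nonconformity, full_cp_classify, eps) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def knn_nonconformity(X, y, i, k=3):
    # Nonconformity of example i: sum of its distances to the k nearest
    # examples carrying the same label (a standard k-NN measure).
    same = np.where(y == y[i])[0]
    same = same[same != i]
    dists = np.linalg.norm(X[same] - X[i], axis=1)
    return np.sort(dists)[:k].sum()

def full_cp_classify(X, y, x_new, label_space, eps=0.1, k=3):
    # Naive full CP: try every candidate label for the test point and keep
    # those whose p-value exceeds the significance level eps. Every candidate
    # recomputes all n+1 scores from scratch; this per-label "retraining" is
    # the cost the paper removes via incremental and decremental learning.
    prediction_set = set()
    for lab in label_space:
        X_aug = np.vstack([X, x_new])   # augmented dataset (n+1 examples)
        y_aug = np.append(y, lab)
        scores = np.array([knn_nonconformity(X_aug, y_aug, i, k)
                           for i in range(len(y_aug))])
        # p-value: fraction of examples at least as nonconforming as the new one
        p_value = np.mean(scores >= scores[-1])
        if p_value > eps:
            prediction_set.add(lab)
    return prediction_set

# Toy usage: two Gaussian blobs, one test point near the first blob.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(5, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
print(full_cp_classify(X, y, np.array([0.2, -0.1]), label_space=[0, 1]))
```

In the spirit of the paper's optimization: for k-NN the augmented dataset differs from the training set by a single point, so the distances to x_new can be computed once and each stored score updated incrementally, avoiding the from-scratch recomputation inside the loop. The paper develops exact versions of this idea for k-NN, KDE, and kernel LS-SVM.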

Cite this Paper

BibTeX
@InProceedings{pmlr-v139-cherubin21a,
  title     = {Exact Optimization of Conformal Predictors via Incremental and Decremental Learning},
  author    = {Cherubin, Giovanni and Chatzikokolakis, Konstantinos and Jaggi, Martin},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {1836--1845},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/cherubin21a/cherubin21a.pdf},
  url       = {https://proceedings.mlr.press/v139/cherubin21a.html}
}
EndNote
%0 Conference Paper
%T Exact Optimization of Conformal Predictors via Incremental and Decremental Learning
%A Giovanni Cherubin
%A Konstantinos Chatzikokolakis
%A Martin Jaggi
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-cherubin21a
%I PMLR
%P 1836--1845
%U https://proceedings.mlr.press/v139/cherubin21a.html
%V 139
APA
Cherubin, G., Chatzikokolakis, K., & Jaggi, M. (2021). Exact Optimization of Conformal Predictors via Incremental and Decremental Learning. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:1836-1845. Available from https://proceedings.mlr.press/v139/cherubin21a.html.
