SmartCal: A Novel Automated Approach to Classifier Probability Calibration

Mohamed Maher Abdelrahman, Osama Fayez Oun, Youssef Medhat, Mariam Magdy Elseedawy, Yara Mostafa Marei, Abdullah Ibrahim, Radwa Mohamed El Shawi
Proceedings of the Fourth International Conference on Automated Machine Learning, PMLR 293:12/1-14, 2025.

Abstract

Accurate probability estimates are crucial in classification, yet widely used calibration methods like Platt and temperature scaling fail to generalize across diverse datasets. We introduce SmartCal, an AutoML framework that automatically selects the optimal post-hoc calibration strategy from a pool of 12 methods. Using a large-scale knowledge base of 165 datasets in multiple modalities and 13 classifiers, we show that no single calibrator is universally superior. SmartCal employs a meta-model trained on the meta-features of the calibration splits and classifier output to recommend the best calibration method for new tasks. Additionally, Bayesian optimization refines this selection process, outperforming standard baselines and random search. Experiments demonstrate that SmartCal systematically improves the calibration over existing approaches such as Beta Calibration and Temperature Scaling. This tool is freely available with a unified interface, simplifying the calibration process for researchers and practitioners.
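As a concrete illustration of the post-hoc calibration workflow the abstract describes, the sketch below fits temperature scaling (one of the candidate methods mentioned above) on a held-out calibration split and compares its expected calibration error (ECE) against the uncalibrated probabilities. This is a generic, hypothetical sketch in plain NumPy/SciPy: it does not use SmartCal's actual interface, and the brute-force comparison here only stands in for the meta-model and Bayesian-optimization selection that SmartCal performs.

# Hypothetical sketch -- NOT the SmartCal API. Illustrates post-hoc calibration:
# fit a calibrator on a held-out calibration split, then compare candidates by ECE.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import softmax

def ece(probs, labels, n_bins=15):
    """Expected Calibration Error over equal-width confidence bins."""
    conf = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    err = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            # |mean confidence - accuracy| in the bin, weighted by bin size
            err += mask.mean() * abs(conf[mask].mean() - (pred[mask] == labels[mask]).mean())
    return err

def fit_temperature(logits, labels):
    """Fit a single temperature T by minimizing NLL on the calibration split."""
    def nll(t):
        p = softmax(logits / t, axis=1)
        return -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
    t = minimize_scalar(nll, bounds=(0.05, 10.0), method="bounded").x
    return lambda z: softmax(z / t, axis=1)

# Toy calibration split: deliberately overconfident 3-class logits.
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=500)
logits = 3.0 * (np.eye(3)[labels] + 0.7 * rng.normal(size=(500, 3)))

# Two candidate calibrators; SmartCal chooses among 12 methods via a meta-model
# trained on meta-features rather than by exhaustive comparison.
candidates = {
    "uncalibrated": lambda z: softmax(z, axis=1),
    "temperature scaling": fit_temperature(logits, labels),
}
scores = {name: cal and ece(cal(logits), labels) for name, cal in candidates.items()}
print(scores, "-> selected:", min(scores, key=scores.get))

In practice the selected calibrator would then be applied to the classifier's test-time probabilities; SmartCal's contribution is replacing this per-dataset search with a meta-learned recommendation refined by Bayesian optimization.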

Cite this Paper


BibTeX
@InProceedings{pmlr-v293-abdelrahman25a,
  title     = {SmartCal: A Novel Automated Approach to Classifier Probability Calibration},
  author    = {Abdelrahman, Mohamed Maher and Oun, Osama Fayez and Medhat, Youssef and Elseedawy, Mariam Magdy and Marei, Yara Mostafa and Ibrahim, Abdullah and Shawi, Radwa Mohamed El},
  booktitle = {Proceedings of the Fourth International Conference on Automated Machine Learning},
  pages     = {12/1--14},
  year      = {2025},
  editor    = {Akoglu, Leman and Doerr, Carola and van Rijn, Jan N. and Garnett, Roman and Gardner, Jacob R.},
  volume    = {293},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--11 Sep},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v293/main/assets/abdelrahman25a/abdelrahman25a.pdf},
  url       = {https://proceedings.mlr.press/v293/abdelrahman25a.html},
  abstract  = {Accurate probability estimates are crucial in classification, yet widely used calibration methods like Platt and temperature scaling fail to generalize across diverse datasets. We introduce SmartCal, an AutoML framework that automatically selects the optimal post-hoc calibration strategy from a pool of 12 methods. Using a large-scale knowledge base of 165 datasets in multiple modalities and 13 classifiers, we show that no single calibrator is universally superior. SmartCal employs a meta-model trained on the meta-features of the calibration splits and classifier output to recommend the best calibration method for new tasks. Additionally, Bayesian optimization refines this selection process, outperforming standard baselines and random search. Experiments demonstrate that SmartCal systematically improves the calibration over existing approaches such as Beta Calibration and Temperature Scaling. This tool is freely available with a unified interface, simplifying the calibration process for researchers and practitioners.}
}
Endnote
%0 Conference Paper
%T SmartCal: A Novel Automated Approach to Classifier Probability Calibration
%A Mohamed Maher Abdelrahman
%A Osama Fayez Oun
%A Youssef Medhat
%A Mariam Magdy Elseedawy
%A Yara Mostafa Marei
%A Abdullah Ibrahim
%A Radwa Mohamed El Shawi
%B Proceedings of the Fourth International Conference on Automated Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Leman Akoglu
%E Carola Doerr
%E Jan N. van Rijn
%E Roman Garnett
%E Jacob R. Gardner
%F pmlr-v293-abdelrahman25a
%I PMLR
%P 12/1--14
%U https://proceedings.mlr.press/v293/abdelrahman25a.html
%V 293
%X Accurate probability estimates are crucial in classification, yet widely used calibration methods like Platt and temperature scaling fail to generalize across diverse datasets. We introduce SmartCal, an AutoML framework that automatically selects the optimal post-hoc calibration strategy from a pool of 12 methods. Using a large-scale knowledge base of 165 datasets in multiple modalities and 13 classifiers, we show that no single calibrator is universally superior. SmartCal employs a meta-model trained on the meta-features of the calibration splits and classifier output to recommend the best calibration method for new tasks. Additionally, Bayesian optimization refines this selection process, outperforming standard baselines and random search. Experiments demonstrate that SmartCal systematically improves the calibration over existing approaches such as Beta Calibration and Temperature Scaling. This tool is freely available with a unified interface, simplifying the calibration process for researchers and practitioners.
APA
Abdelrahman, M.M., Oun, O.F., Medhat, Y., Elseedawy, M.M., Marei, Y.M., Ibrahim, A. & Shawi, R.M.E. (2025). SmartCal: A Novel Automated Approach to Classifier Probability Calibration. Proceedings of the Fourth International Conference on Automated Machine Learning, in Proceedings of Machine Learning Research 293:12/1-14. Available from https://proceedings.mlr.press/v293/abdelrahman25a.html.