Interpretable Meta-Score for Model Performance: Extended Abstract

Alicja Gosiewska, Katarzyna Woźnica, Przemysław Biecek
ECMLPKDD Workshop on Meta-Knowledge Transfer, PMLR 191:75-77, 2022.

Abstract

In benchmarking machine learning models, we are seeing a real boom: in many applications, such as computer vision or natural language processing, more and more challenges are being created. We are therefore collecting more and more data on the performance of individual models or architectures, but the question remains how to decide which model gives the best results and whether one model is significantly better than another. The most commonly used measures, such as AUC, accuracy, or RMSE, return a numerical assessment of how well the predictions of the selected model satisfy specific properties: they correctly assign the probability of belonging to the chosen class, they do not err in assigning the predicted class, or the difference between the predictions and the true values is not large. From an application point of view, however, we lack information: (i) what is the probability that a given model achieves better performance than another; (ii) whether the differences we observe between models are statistically significant; (iii) in most cases, the values of the selected model performance metrics are incomparable between different datasets, e.g., how should an AUC improvement of $0.01$ be judged if the best achieved AUC is around $0.9$ for one dataset and around $0.7$ for another? To address these shortcomings, in an earlier paper we introduced a new meta-measure of model performance, EPP. It is inspired by the Elo ranking used in chess and other competitive games. By comparing the ratings of two players and transforming them accordingly, we obtain the probability that one player is better than the other. EPP adapts this property to the specific conditions of machine learning benchmarks while allowing universal application across many benchmarking schemes. We emphasize this by introducing a unified terminology, the Unified Benchmark Ontology, and describing the new measure in these terms. Hence, models are referred to as players and model performance as score.
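To make the Elo analogy concrete, below is a minimal sketch (in Python) of how a difference in two players' scores can be turned into a win probability through a logistic, Elo-like link, which is the mechanism the abstract alludes to. The function name and the example score values are illustrative assumptions, not part of the paper or its accompanying software.

import math

def win_probability(score_a: float, score_b: float) -> float:
    # Logistic (Elo-like) link: the probability that player A beats player B
    # depends only on the difference between their scores.
    return 1.0 / (1.0 + math.exp(-(score_a - score_b)))

# Hypothetical EPP-like scores of two models ("players") from one benchmark.
epp_model_1 = 1.8
epp_model_2 = 0.9

# ~0.71: model 1 is expected to outperform model 2 in about 71% of comparisons.
print(win_probability(epp_model_1, epp_model_2))

Under such a link, equal scores give a probability of 0.5 and the probability approaches 1 as the score gap grows, which is what makes score differences interpretable in the same way across different datasets.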

Cite this Paper


BibTeX
@InProceedings{pmlr-v191-gosiewska22a,
  title     = {Interpretable Meta-Score for Model Performance: Extended Abstract},
  author    = {Gosiewska, Alicja and Wo\'{z}nica, Katarzyna and Biecek, Przemys\l{}aw},
  booktitle = {ECMLPKDD Workshop on Meta-Knowledge Transfer},
  pages     = {75--77},
  year      = {2022},
  editor    = {Brazdil, Pavel and van Rijn, Jan N. and Gouk, Henry and Mohr, Felix},
  volume    = {191},
  series    = {Proceedings of Machine Learning Research},
  month     = {23 Sep},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v191/gosiewska22a/gosiewska22a.pdf},
  url       = {https://proceedings.mlr.press/v191/gosiewska22a.html}
}
Endnote
%0 Conference Paper
%T Interpretable Meta-Score for Model Performance: Extended Abstract
%A Alicja Gosiewska
%A Katarzyna Woźnica
%A Przemysław Biecek
%B ECMLPKDD Workshop on Meta-Knowledge Transfer
%C Proceedings of Machine Learning Research
%D 2022
%E Pavel Brazdil
%E Jan N. van Rijn
%E Henry Gouk
%E Felix Mohr
%F pmlr-v191-gosiewska22a
%I PMLR
%P 75--77
%U https://proceedings.mlr.press/v191/gosiewska22a.html
%V 191
APA
Gosiewska, A., Woźnica, K. & Biecek, P. (2022). Interpretable Meta-Score for Model Performance: Extended Abstract. ECMLPKDD Workshop on Meta-Knowledge Transfer, in Proceedings of Machine Learning Research 191:75-77. Available from https://proceedings.mlr.press/v191/gosiewska22a.html.
