Bayesian Comparison of Machine Learning Algorithms on Single and Multiple Datasets

Alexandre Lacoste, François Laviolette, Mario Marchand;
Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, PMLR 22:665-675, 2012.

Abstract

We propose a new method for comparing learning algorithms on multiple tasks, based on a novel non-parametric test that we call the Poisson binomial test. The key aspect of this work is that we provide a formal definition of what it means for one algorithm to be better than another. We also account for the dependencies induced when evaluating classifiers on the same test set. Finally, we make optimal use (in the Bayesian sense) of all the available testing data. We demonstrate empirically that our approach is more reliable than the sign test and the Wilcoxon signed rank test, the current state of the art for algorithm comparisons.
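The test takes its name from the Poisson binomial distribution: the distribution of a sum of independent Bernoulli variables with task-specific success probabilities. As a minimal illustrative sketch (not the paper's method), the snippet below computes that distribution by dynamic programming and, from hypothetical per-task probabilities `p_i` that algorithm A beats algorithm B, the probability that A wins on a strict majority of tasks. The probabilities here are made-up inputs, not quantities defined in the abstract.

```python
def poisson_binomial_pmf(ps):
    """PMF of the number of successes among independent Bernoulli
    trials with (possibly different) success probabilities ps."""
    pmf = [1.0]  # start with zero trials: P(0 successes) = 1
    for p in ps:
        new = [0.0] * (len(pmf) + 1)
        for k, v in enumerate(pmf):
            new[k] += v * (1.0 - p)      # this trial fails
            new[k + 1] += v * p          # this trial succeeds
        pmf = new
    return pmf

def prob_majority_wins(ps):
    """Probability that strictly more than half of the trials succeed."""
    pmf = poisson_binomial_pmf(ps)
    n = len(ps)
    return sum(pmf[k] for k in range(n + 1) if k > n / 2)

# Hypothetical per-task win probabilities for algorithm A over B.
p_wins = [0.9, 0.6, 0.55, 0.7]
print(prob_majority_wins(p_wins))
```

With identical probabilities the distribution reduces to the ordinary binomial; the dynamic program simply avoids assuming the trials are identically distributed.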
