Effect of Incomplete Meta-dataset on Average Ranking Method
Proceedings of the Workshop on Automatic Machine Learning, PMLR 64:1-10, 2016.
Abstract
One of the simplest metalearning methods is the average ranking method. This method uses metadata in the form of test results of given algorithms on given datasets and calculates an average rank for each algorithm. The average ranks are then used to construct the average ranking. The work described here investigates how the process of generating the average ranking is affected by incomplete metadata. We are interested in this issue for the following reason: if we could show that incomplete metadata does not greatly affect the final results, we could exploit this in future designs by simply conducting fewer tests and thus saving computation time. Our results show that the method is robust to omissions in the meta-dataset.
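To make the method concrete, the following minimal sketch computes an average ranking from a (possibly incomplete) meta-dataset of test results. All names and data are illustrative assumptions, not taken from the paper; skipping missing results when averaging is one plausible way to handle omissions, shown here only for illustration.

from statistics import mean

def rank_per_dataset(scores):
    """Rank algorithms on one dataset: best score gets rank 1.
    Ties receive the mean of the ranks they span."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    ranks = [0.0] * len(scores)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        tied_rank = mean(range(i + 1, j + 2))  # average rank over the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = tied_rank
        i = j + 1
    return ranks

def average_ranking(meta_dataset, algorithms):
    """Average each algorithm's rank across datasets, skipping
    missing test results (None), then sort ascending (best first)."""
    per_algo = {a: [] for a in algorithms}
    for scores in meta_dataset:  # one row of test results per dataset
        present = [i for i, s in enumerate(scores) if s is not None]
        ranks = rank_per_dataset([scores[i] for i in present])
        for r, i in zip(ranks, present):
            per_algo[algorithms[i]].append(r)
    avg = {a: mean(rs) for a, rs in per_algo.items() if rs}
    return sorted(avg.items(), key=lambda kv: kv[1])

# Hypothetical example: 3 algorithms tested on 3 datasets, one result missing.
algos = ["SVM", "RF", "kNN"]
results = [
    [0.81, 0.85, 0.78],
    [0.90, 0.88, None],   # kNN was not tested on this dataset
    [0.70, 0.75, 0.72],
]
print(average_ranking(results, algos))
# [('RF', 1.333...), ('SVM', 2.0), ('kNN', 2.5)]

Note that omitting a test result simply shrinks the set of ranks averaged for that algorithm, which is what makes the question of robustness to incomplete metadata meaningful.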