Effect of Incomplete Meta-dataset on Average Ranking Method

Salisu Mamman Abdulrahman, Pavel Brazdil
Proceedings of the Workshop on Automatic Machine Learning, PMLR 64:1-10, 2016.

Abstract

One of the simplest metalearning methods is the average ranking method. This method uses metadata in the form of test results of given algorithms on given datasets and calculates an average rank for each algorithm. The average ranks are used to construct the average ranking. The work described here investigates how the process of generating the average ranking is affected by incomplete metadata. We are interested in this issue for the following reason: if we could show that incomplete metadata does not affect the final results much, we could exploit this in future designs. We could simply conduct fewer tests and thus save computation time. Our results show that our method is robust to omissions in the meta-dataset.
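The average ranking computation described above is straightforward to reproduce. Below is a minimal sketch in Python using pandas; the algorithm names, dataset names, and accuracy values are hypothetical and this is not the authors' code or data. Each algorithm is ranked per dataset, missing test results (the incomplete part of the meta-dataset) are simply skipped, and the mean of the available ranks gives the average ranking.

    import numpy as np
    import pandas as pd

    # Hypothetical metadata: accuracy of each algorithm (columns) on each dataset (rows).
    # NaN marks a missing test result, i.e. an omission in the meta-dataset.
    perf = pd.DataFrame(
        {"rf": [0.91, 0.85, np.nan],
         "svm": [0.88, np.nan, 0.80],
         "knn": [0.84, 0.79, 0.83]},
        index=["d1", "d2", "d3"],
    )

    # Rank algorithms within each dataset (rank 1 = highest accuracy);
    # algorithms with a missing result on a dataset receive no rank there.
    ranks = perf.rank(axis=1, ascending=False)

    # Average rank per algorithm over the datasets on which it was actually tested.
    avg_rank = ranks.mean(axis=0, skipna=True).sort_values()
    print(avg_rank)  # algorithms ordered from best (lowest) to worst average rank

Rerunning this sketch after deleting some test results is one simple way to probe, as the paper does at larger scale, how much the resulting ranking changes when the meta-dataset is incomplete.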

Cite this Paper


BibTeX
@InProceedings{pmlr-v64-adbdulrahman_effect_2016,
  title     = {Effect of Incomplete Meta-dataset on Average Ranking Method},
  author    = {Abdulrahman, Salisu Mamman and Brazdil, Pavel},
  booktitle = {Proceedings of the Workshop on Automatic Machine Learning},
  pages     = {1--10},
  year      = {2016},
  editor    = {Hutter, Frank and Kotthoff, Lars and Vanschoren, Joaquin},
  volume    = {64},
  series    = {Proceedings of Machine Learning Research},
  address   = {New York, New York, USA},
  month     = {24 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v64/adbdulrahman_effect_2016.pdf},
  url       = {https://proceedings.mlr.press/v64/adbdulrahman_effect_2016.html},
  abstract  = {One of the simplest metalearning methods is the average ranking method. This method uses metadata in the form of test results of a given algorithms on a given datasets and calculates an average rank for each algorithm. The average ranks are used to construct the average ranking. The work described here investigate the problem of how the process of generating the average ranking is affected by incomplete metadata. We are interested in this issue for the following reason. If we could show that incomplete metadata does not affect the final results much, we could explore it in future design. We could simply conduct fewer tests and save thus computation time. Our results show that our method is robust to omission in meta datasets.}
}
Endnote
%0 Conference Paper
%T Effect of Incomplete Meta-dataset on Average Ranking Method
%A Salisu Mamman Abdulrahman
%A Pavel Brazdil
%B Proceedings of the Workshop on Automatic Machine Learning
%C Proceedings of Machine Learning Research
%D 2016
%E Frank Hutter
%E Lars Kotthoff
%E Joaquin Vanschoren
%F pmlr-v64-adbdulrahman_effect_2016
%I PMLR
%P 1--10
%U https://proceedings.mlr.press/v64/adbdulrahman_effect_2016.html
%V 64
%X One of the simplest metalearning methods is the average ranking method. This method uses metadata in the form of test results of a given algorithms on a given datasets and calculates an average rank for each algorithm. The average ranks are used to construct the average ranking. The work described here investigate the problem of how the process of generating the average ranking is affected by incomplete metadata. We are interested in this issue for the following reason. If we could show that incomplete metadata does not affect the final results much, we could explore it in future design. We could simply conduct fewer tests and save thus computation time. Our results show that our method is robust to omission in meta datasets.
RIS
TY - CPAPER
TI - Effect of Incomplete Meta-dataset on Average Ranking Method
AU - Salisu Mamman Abdulrahman
AU - Pavel Brazdil
BT - Proceedings of the Workshop on Automatic Machine Learning
DA - 2016/12/04
ED - Frank Hutter
ED - Lars Kotthoff
ED - Joaquin Vanschoren
ID - pmlr-v64-adbdulrahman_effect_2016
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 64
SP - 1
EP - 10
L1 - http://proceedings.mlr.press/v64/adbdulrahman_effect_2016.pdf
UR - https://proceedings.mlr.press/v64/adbdulrahman_effect_2016.html
AB - One of the simplest metalearning methods is the average ranking method. This method uses metadata in the form of test results of a given algorithms on a given datasets and calculates an average rank for each algorithm. The average ranks are used to construct the average ranking. The work described here investigate the problem of how the process of generating the average ranking is affected by incomplete metadata. We are interested in this issue for the following reason. If we could show that incomplete metadata does not affect the final results much, we could explore it in future design. We could simply conduct fewer tests and save thus computation time. Our results show that our method is robust to omission in meta datasets.
ER -
APA
Abdulrahman, S.M. & Brazdil, P. (2016). Effect of Incomplete Meta-dataset on Average Ranking Method. Proceedings of the Workshop on Automatic Machine Learning, in Proceedings of Machine Learning Research 64:1-10. Available from https://proceedings.mlr.press/v64/adbdulrahman_effect_2016.html.