Online Boosting Algorithms for Multi-label Ranking

Young Hun Jung, Ambuj Tewari
Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, PMLR 84:279-287, 2018.

Abstract

We consider the multi-label ranking approach to multi-label learning. Boosting is a natural method for multi-label ranking as it aggregates weak predictions through majority votes, which can be directly used as scores to produce a ranking of the labels. We design online boosting algorithms with provable loss bounds for multi-label ranking. We show that our first algorithm is optimal in terms of the number of learners required to attain a desired accuracy, but it requires knowledge of the edge of the weak learners. We also design an adaptive algorithm that does not require this knowledge and is hence more practical. Experimental results on real data sets demonstrate that our algorithms are at least as good as existing batch boosting algorithms.
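To make the aggregation idea concrete, here is a minimal sketch (not the paper's algorithm; the function name, vote format, and weights are illustrative assumptions) of how weighted weak-learner votes can be summed into per-label scores whose sorted order gives a multi-label ranking:

```python
def aggregate_and_rank(weak_votes, weights):
    """Combine weak predictions into a label ranking.

    weak_votes: list of per-label vote vectors, one per weak learner.
    weights: importance weight of each weak learner.
    Returns label indices sorted by descending aggregated score.
    """
    num_labels = len(weak_votes[0])
    scores = [0.0] * num_labels
    for w, votes in zip(weights, weak_votes):
        for k in range(num_labels):
            scores[k] += w * votes[k]
    # Higher aggregated score => label ranked earlier.
    return sorted(range(num_labels), key=lambda k: -scores[k])

# Example: three weak learners voting over four labels.
votes = [[1, 0, 1, 0], [1, 1, 0, 0], [0, 1, 1, 0]]
ranking = aggregate_and_rank(votes, weights=[0.5, 0.3, 0.2])
```

In an online boosting setting, the weights would be updated as examples arrive; here they are fixed purely for illustration.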

Cite this Paper


BibTeX
@InProceedings{pmlr-v84-jung18a,
  title     = {Online Boosting Algorithms for Multi-label Ranking},
  author    = {Jung, Young Hun and Tewari, Ambuj},
  booktitle = {Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics},
  pages     = {279--287},
  year      = {2018},
  editor    = {Storkey, Amos and Perez-Cruz, Fernando},
  volume    = {84},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--11 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v84/jung18a/jung18a.pdf},
  url       = {https://proceedings.mlr.press/v84/jung18a.html},
  abstract  = {We consider the multi-label ranking approach to multi-label learning. Boosting is a natural method for multi-label ranking as it aggregates weak predictions through majority votes, which can be directly used as scores to produce a ranking of the labels. We design online boosting algorithms with provable loss bounds for multi-label ranking. We show that our first algorithm is optimal in terms of the number of learners required to attain a desired accuracy, but it requires knowledge of the edge of the weak learners. We also design an adaptive algorithm that does not require this knowledge and is hence more practical. Experimental results on real data sets demonstrate that our algorithms are at least as good as existing batch boosting algorithms.}
}
Endnote
%0 Conference Paper
%T Online Boosting Algorithms for Multi-label Ranking
%A Young Hun Jung
%A Ambuj Tewari
%B Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2018
%E Amos Storkey
%E Fernando Perez-Cruz
%F pmlr-v84-jung18a
%I PMLR
%P 279--287
%U https://proceedings.mlr.press/v84/jung18a.html
%V 84
%X We consider the multi-label ranking approach to multi-label learning. Boosting is a natural method for multi-label ranking as it aggregates weak predictions through majority votes, which can be directly used as scores to produce a ranking of the labels. We design online boosting algorithms with provable loss bounds for multi-label ranking. We show that our first algorithm is optimal in terms of the number of learners required to attain a desired accuracy, but it requires knowledge of the edge of the weak learners. We also design an adaptive algorithm that does not require this knowledge and is hence more practical. Experimental results on real data sets demonstrate that our algorithms are at least as good as existing batch boosting algorithms.
APA
Jung, Y.H. & Tewari, A. (2018). Online Boosting Algorithms for Multi-label Ranking. Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 84:279-287. Available from https://proceedings.mlr.press/v84/jung18a.html.