Learning to rank with extremely randomized trees

Pierre Geurts, Gilles Louppe
Proceedings of the Learning to Rank Challenge, PMLR 14:49-61, 2011.

Abstract

In this paper, we report on our experiments in the Yahoo! Labs Learning to Rank Challenge, organized in the context of the 27th International Conference on Machine Learning (ICML 2010). We competed in both the learning-to-rank and the transfer learning tracks of the challenge with several tree-based ensemble methods, including Tree Bagging (Breiman, 1996), Random Forests (Breiman, 2001), and Extremely Randomized Trees (Geurts et al., 2006). Our methods ranked 10th in the first track and 4th in the second. Although not at the very top of the ranking, these results show that ensembles of randomized trees are quite competitive for the learning-to-rank problem. The paper also analyzes the computing times of our algorithms and presents some post-challenge experiments with transfer learning methods.
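
For readers who want something concrete, the sketch below illustrates the pointwise recipe the abstract alludes to: regress graded relevance labels with an ensemble of extremely randomized trees, then rank the documents of each query by the predicted score. This is not the authors' challenge code; scikit-learn's ExtraTreesRegressor stands in for their trees, and the toy data, feature dimensions, and hyperparameters are illustrative assumptions only.

# Pointwise learning-to-rank sketch with Extremely Randomized Trees.
# Not the authors' challenge implementation: scikit-learn's ExtraTreesRegressor
# is used as a stand-in, and the data and hyperparameters are made up.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.metrics import ndcg_score

rng = np.random.default_rng(0)

# Toy data: 50 queries x 20 documents, 30 features, graded relevance in {0, ..., 4}.
n_queries, n_docs, n_features = 50, 20, 30
X = rng.normal(size=(n_queries * n_docs, n_features))
y = rng.integers(0, 5, size=n_queries * n_docs)

# Extremely Randomized Trees used as a pointwise regressor on the relevance labels.
model = ExtraTreesRegressor(
    n_estimators=200,      # number of trees in the ensemble
    max_features=0.3,      # fraction of features considered at each split
    min_samples_split=10,  # limits tree size and variance
    n_jobs=-1,
    random_state=0,
)
model.fit(X, y)
scores = model.predict(X)

# Rank documents within each query by predicted score and evaluate NDCG@10.
ndcgs = [
    ndcg_score(y[q * n_docs:(q + 1) * n_docs].reshape(1, -1),
               scores[q * n_docs:(q + 1) * n_docs].reshape(1, -1), k=10)
    for q in range(n_queries)
]
print(f"mean NDCG@10 over toy queries: {np.mean(ndcgs):.3f}")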

Cite this Paper


BibTeX
@InProceedings{pmlr-v14-geurts11a,
  title     = {Learning to rank with extremely randomized trees},
  author    = {Geurts, Pierre and Louppe, Gilles},
  booktitle = {Proceedings of the Learning to Rank Challenge},
  pages     = {49--61},
  year      = {2011},
  editor    = {Chapelle, Olivier and Chang, Yi and Liu, Tie-Yan},
  volume    = {14},
  series    = {Proceedings of Machine Learning Research},
  address   = {Haifa, Israel},
  month     = {25 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v14/geurts11a/geurts11a.pdf},
  url       = {https://proceedings.mlr.press/v14/geurts11a.html}
}
Endnote
%0 Conference Paper
%T Learning to rank with extremely randomized trees
%A Pierre Geurts
%A Gilles Louppe
%B Proceedings of the Learning to Rank Challenge
%C Proceedings of Machine Learning Research
%D 2011
%E Olivier Chapelle
%E Yi Chang
%E Tie-Yan Liu
%F pmlr-v14-geurts11a
%I PMLR
%P 49--61
%U https://proceedings.mlr.press/v14/geurts11a.html
%V 14
RIS
TY - CPAPER
TI - Learning to rank with extremely randomized trees
AU - Pierre Geurts
AU - Gilles Louppe
BT - Proceedings of the Learning to Rank Challenge
DA - 2011/01/26
ED - Olivier Chapelle
ED - Yi Chang
ED - Tie-Yan Liu
ID - pmlr-v14-geurts11a
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 14
SP - 49
EP - 61
L1 - http://proceedings.mlr.press/v14/geurts11a/geurts11a.pdf
UR - https://proceedings.mlr.press/v14/geurts11a.html
ER -
APA
Geurts, P., & Louppe, G. (2011). Learning to rank with extremely randomized trees. Proceedings of the Learning to Rank Challenge, in Proceedings of Machine Learning Research 14:49-61. Available from https://proceedings.mlr.press/v14/geurts11a.html.