Ranking by calibrated AdaBoost

Róbert Busa-Fekete, Balázs Kégl, Tamás Éltető, György Szarvas
Proceedings of the Learning to Rank Challenge, PMLR 14:37-48, 2011.

Abstract

This paper describes the ideas and methodologies that we used in the Yahoo learning-to-rank challenge. Our technique is essentially pointwise with a listwise touch at the last combination step. The main ingredients of our approach are 1) preprocessing (querywise normalization), 2) multi-class AdaBoost.MH, 3) regression calibration, and 4) an exponentially weighted forecaster for model combination. In post-challenge analysis we found that preprocessing and training AdaBoost with a wide variety of hyperparameters improved individual models significantly, the final listwise ensemble step was crucial, whereas calibration helped only in creating diversity.
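Two of the pipeline's ingredients, querywise normalization and the exponentially weighted forecaster, can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function names, the per-query z-scoring, and the learning rate `eta` are assumptions chosen for the sketch; the paper's exact normalization and weighting schemes may differ.

```python
import numpy as np

def querywise_normalize(values, query_ids):
    """Standardize values within each query group (one plausible reading
    of 'querywise normalization'; the paper's exact scheme may differ)."""
    values = np.asarray(values, dtype=float)
    query_ids = np.asarray(query_ids)
    out = np.empty_like(values)
    for q in np.unique(query_ids):
        mask = query_ids == q
        mu, sigma = values[mask].mean(), values[mask].std()
        # Guard against constant features within a query.
        out[mask] = (values[mask] - mu) / sigma if sigma > 0 else 0.0
    return out

def exp_weighted_combination(model_scores, model_losses, eta=1.0):
    """Combine per-model score vectors with weights proportional to
    exp(-eta * loss): an exponentially weighted forecaster."""
    losses = np.asarray(model_losses, dtype=float)
    # Shift by the minimum loss for numerical stability; weights are
    # invariant to this shift after normalization.
    w = np.exp(-eta * (losses - losses.min()))
    w /= w.sum()
    combined = w @ np.asarray(model_scores, dtype=float)
    return combined, w

# Hypothetical usage: two models scoring three documents of one query;
# the lower-loss model receives the larger weight.
scores = [[0.1, 0.5, 0.9], [0.3, 0.4, 0.8]]
combined, weights = exp_weighted_combination(scores, model_losses=[0.2, 0.5])
```

The forecaster down-weights models with higher validation loss exponentially fast, which matches the abstract's observation that the final listwise ensemble step, rather than any single model, was crucial.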

Cite this Paper


BibTeX
@InProceedings{pmlr-v14-busa-fekete11a,
  title     = {Ranking by calibrated AdaBoost},
  author    = {Busa-Fekete, Róbert and Kégl, Balázs and Éltető, Tamás and Szarvas, György},
  booktitle = {Proceedings of the Learning to Rank Challenge},
  pages     = {37--48},
  year      = {2011},
  editor    = {Chapelle, Olivier and Chang, Yi and Liu, Tie-Yan},
  volume    = {14},
  series    = {Proceedings of Machine Learning Research},
  address   = {Haifa, Israel},
  month     = {25 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v14/busa-fekete11a/busa-fekete11a.pdf},
  url       = {https://proceedings.mlr.press/v14/busa-fekete11a.html},
  abstract  = {This paper describes the ideas and methodologies that we used in the Yahoo learning-to-rank challenge. Our technique is essentially pointwise with a listwise touch at the last combination step. The main ingredients of our approach are 1) preprocessing (querywise normalization), 2) multi-class AdaBoost.MH, 3) regression calibration, and 4) an exponentially weighted forecaster for model combination. In post-challenge analysis we found that preprocessing and training AdaBoost with a wide variety of hyperparameters improved individual models significantly, the final listwise ensemble step was crucial, whereas calibration helped only in creating diversity.}
}
Endnote
%0 Conference Paper
%T Ranking by calibrated AdaBoost
%A Róbert Busa-Fekete
%A Balázs Kégl
%A Tamás Éltető
%A György Szarvas
%B Proceedings of the Learning to Rank Challenge
%C Proceedings of Machine Learning Research
%D 2011
%E Olivier Chapelle
%E Yi Chang
%E Tie-Yan Liu
%F pmlr-v14-busa-fekete11a
%I PMLR
%P 37--48
%U https://proceedings.mlr.press/v14/busa-fekete11a.html
%V 14
%X This paper describes the ideas and methodologies that we used in the Yahoo learning-to-rank challenge. Our technique is essentially pointwise with a listwise touch at the last combination step. The main ingredients of our approach are 1) preprocessing (querywise normalization), 2) multi-class AdaBoost.MH, 3) regression calibration, and 4) an exponentially weighted forecaster for model combination. In post-challenge analysis we found that preprocessing and training AdaBoost with a wide variety of hyperparameters improved individual models significantly, the final listwise ensemble step was crucial, whereas calibration helped only in creating diversity.
RIS
TY - CPAPER
TI - Ranking by calibrated AdaBoost
AU - Róbert Busa-Fekete
AU - Balázs Kégl
AU - Tamás Éltető
AU - György Szarvas
BT - Proceedings of the Learning to Rank Challenge
DA - 2011/01/26
ED - Olivier Chapelle
ED - Yi Chang
ED - Tie-Yan Liu
ID - pmlr-v14-busa-fekete11a
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 14
SP - 37
EP - 48
L1 - http://proceedings.mlr.press/v14/busa-fekete11a/busa-fekete11a.pdf
UR - https://proceedings.mlr.press/v14/busa-fekete11a.html
AB - This paper describes the ideas and methodologies that we used in the Yahoo learning-to-rank challenge. Our technique is essentially pointwise with a listwise touch at the last combination step. The main ingredients of our approach are 1) preprocessing (querywise normalization), 2) multi-class AdaBoost.MH, 3) regression calibration, and 4) an exponentially weighted forecaster for model combination. In post-challenge analysis we found that preprocessing and training AdaBoost with a wide variety of hyperparameters improved individual models significantly, the final listwise ensemble step was crucial, whereas calibration helped only in creating diversity.
ER -
APA
Busa-Fekete, R., Kégl, B., Éltető, T., & Szarvas, G. (2011). Ranking by calibrated AdaBoost. Proceedings of the Learning to Rank Challenge, in Proceedings of Machine Learning Research 14:37-48. Available from https://proceedings.mlr.press/v14/busa-fekete11a.html.