Ranking by calibrated AdaBoost

R. Busa-Fekete, B. Kégl, T. Éltető, G. Szarvas.
Proceedings of the Learning to Rank Challenge, PMLR 14:37-48, 2011.

Abstract

This paper describes the ideas and methodologies that we used in the Yahoo Learning-to-Rank Challenge. Our technique is essentially pointwise, with a listwise touch at the final combination step. The main ingredients of our approach are (1) preprocessing (query-wise normalization), (2) multi-class AdaBoost.MH, (3) regression calibration, and (4) an exponentially weighted forecaster for model combination. In a post-challenge analysis we found that preprocessing and training AdaBoost with a wide variety of hyperparameters significantly improved individual models, and that the final listwise ensemble step was crucial, whereas calibration helped only in creating diversity.
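The final combination step named in the abstract, an exponentially weighted forecaster, weights each base model by an exponential function of its past loss, so better-performing models dominate the combined score. A minimal sketch of this idea, assuming per-model predictions for a single document, hypothetical cumulative validation losses, and an illustrative learning rate `eta` (none of these values are from the paper):

```python
import math

def ewa_combine(preds, losses, eta=0.5):
    """Exponentially weighted average forecaster: each model's
    weight decays exponentially in its cumulative past loss,
    and the combined prediction is the weighted average."""
    weights = [math.exp(-eta * loss) for loss in losses]
    total = sum(weights)
    weights = [w / total for w in weights]  # normalize to sum to 1
    return sum(w * p for w, p in zip(weights, preds))

# Hypothetical example: three rankers' scores for one document,
# with cumulative validation losses 0.2, 0.5, and 0.9.
score = ewa_combine([3.1, 2.4, 4.0], [0.2, 0.5, 0.9])
```

Because the weights are a convex combination, the combined score always lies between the smallest and largest individual predictions, and raising `eta` pushes the weights toward the single lowest-loss model.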
