Meta–Gradient Boosted Decision Tree Model for Weight and Target Learning


Yury Ustinovskiy, Valentina Fedorova, Gleb Gusev, Pavel Serdyukov;
Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:2692-2701, 2016.

Abstract

Labeled training data is an essential part of any supervised machine learning framework. In practice, there is a trade-off between the quality of a label and its cost. In this paper, we consider the problem of learning to rank on a large-scale dataset with low-quality relevance labels, aiming to maximize the quality of a trained ranker on a small validation dataset with high-quality ground truth relevance labels. Motivated by the classical Gauss-Markov theorem for the linear regression problem, we formulate the problems of (1) reweighting training instances and (2) remapping learning targets. We propose a meta–gradient decision tree learning framework for optimizing weight and target functions by applying gradient-based hyperparameter optimization. Experiments on a large-scale real-world dataset demonstrate that we can significantly improve state-of-the-art machine-learning algorithms by incorporating our framework.
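To make the reweighting idea concrete, here is a minimal sketch (not the paper's algorithm, which operates on boosted decision trees) of gradient-based hyperparameter optimization of per-instance training weights for a linear regression model. The training set has deliberately corrupted labels, standing in for the "low-quality relevance labels", while a small clean validation set plays the role of the high-quality ground truth. The per-instance weights are treated as hyperparameters and updated with the meta-gradient of the validation loss, differentiated through a single inner gradient step on the model parameters. All names and hyperparameter values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: linear target; half the training labels are corrupted
# (a stand-in for low-quality labels), the validation set is clean.
d, n_train, n_val = 5, 200, 40
w_true = rng.normal(size=d)
X_tr = rng.normal(size=(n_train, d))
y_tr = X_tr @ w_true
y_tr[: n_train // 2] += rng.normal(scale=3.0, size=n_train // 2)  # noisy half
X_va = rng.normal(size=(n_val, d))
y_va = X_va @ w_true


def train_grad(w, v):
    # Gradient of the weighted training loss (1/n) * sum_i v_i * (x_i.w - y_i)^2
    r = X_tr @ w - y_tr
    return 2.0 / n_train * X_tr.T @ (v * r)


def val_loss(w):
    r = X_va @ w - y_va
    return float(r @ r) / n_val


lr, meta_lr = 0.05, 20.0
v = np.ones(n_train)  # per-instance weights: the hyperparameters we learn
w = np.zeros(d)       # model parameters

for step in range(300):
    # Inner step: one weighted gradient update of the model parameters.
    w_new = w - lr * train_grad(w, v)
    # Meta-gradient of the validation loss w.r.t. each weight v_i, obtained
    # by differentiating through the single inner update:
    #   dL_val/dv_i = -lr * (2/n) * (x_i.w - y_i) * <x_i, dL_val/dw_new>
    g_val = 2.0 / n_val * X_va.T @ (X_va @ w_new - y_va)
    r_tr = X_tr @ w - y_tr
    meta_g = -lr * 2.0 / n_train * r_tr * (X_tr @ g_val)
    # Outer step: update the weights; keep them non-negative and
    # renormalized to mean 1 (only relative weights matter).
    v = np.clip(v - meta_lr * meta_g, 0.0, None)
    v *= n_train / v.sum()
    w = w_new

# After training, the corrupted instances should carry lower weights on
# average than the clean ones, and the validation loss should be small.
```

The same outer-loop structure carries over when the inner learner is a gradient boosted decision tree ensemble, which is the setting the paper addresses; there the meta-gradient is propagated through the boosting iterations rather than through a single parameter update.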
