A Convergence Rate Analysis for LogitBoost, MART and Their Variant
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(2):1251-1259, 2014.
Abstract
LogitBoost, MART and their variant can be viewed as additive tree regression using logistic loss and boosting-style optimization. We analyze their convergence rates based on a new weak learnability formulation. We show that the rate is O(1/T) when using gradient descent only, while a linear rate is achieved when using Newton descent. Moreover, introducing Newton descent when growing the trees, as LogitBoost does, leads to a faster linear rate. Empirical results on UCI datasets support our analysis.
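
To make the distinction in the abstract concrete, below is a minimal sketch (not the authors' code) contrasting a gradient-descent-only boosting update with a Newton-style update for additive tree regression under the logistic loss, assuming binary labels y in {-1, +1} and scikit-learn regression trees as weak learners; the function and parameter names (fit_boosted_trees, use_newton, shrinkage) are illustrative assumptions, not taken from the paper.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_boosted_trees(X, y, n_rounds=100, shrinkage=0.1,
                      max_depth=3, use_newton=True):
    """Fit an additive tree model F(x) for the logistic loss log(1 + exp(-y F(x))).

    Illustrative sketch only: y is assumed to take values in {-1, +1}.
    """
    F = np.zeros(len(y))   # current additive model scores
    trees = []
    for _ in range(n_rounds):
        p = 1.0 / (1.0 + np.exp(-y * F))   # sigma(y * F)
        g = -y * (1.0 - p)                 # per-example gradient of the loss
        h = p * (1.0 - p)                  # per-example second derivative (curvature)
        tree = DecisionTreeRegressor(max_depth=max_depth)
        if use_newton:
            # Newton-style step: fit a tree to the Newton response -g/h,
            # weighting examples by their curvature h.
            response = -g / np.maximum(h, 1e-12)
            tree.fit(X, response, sample_weight=h)
        else:
            # Gradient-descent-only step: fit a tree to the negative gradient.
            tree.fit(X, -g)
        F += shrinkage * tree.predict(X)
        trees.append(tree)
    return trees, F

With use_newton=False the update uses first-order information only, the regime for which the abstract states an O(1/T) rate; with use_newton=True the tree is grown on second-order (Newton) responses, the regime associated with the linear rate. Real LogitBoost and MART implementations differ in where exactly the Newton step enters (tree growth versus leaf values), which this sketch does not distinguish.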