A Convergence Rate Analysis for LogitBoost, MART and Their Variant

Peng Sun, Tong Zhang, Jie Zhou
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(2):1251-1259, 2014.

Abstract

LogitBoost, MART and their variant can be viewed as additive tree regression using the logistic loss and boosting-style optimization. We analyze their convergence rates based on a new weak learnability formulation. We show that the rate is O(1/T) when using gradient descent only, while a linear rate is achieved when using Newton descent. Moreover, introducing Newton descent when growing the trees, as LogitBoost does, leads to a faster linear rate. Empirical results on UCI datasets support our analysis.
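
The abstract contrasts a plain gradient step with a Newton step on the logistic loss. Below is a minimal per-example sketch of that contrast, not the paper's algorithm: in MART and LogitBoost the corresponding quantities are fitted by regression trees rather than applied pointwise, and the step size eta is an arbitrary assumed value.

# Sketch (assumed illustration, not the authors' code): gradient vs. Newton step
# on the logistic loss l(F) = log(1 + exp(-y*F)) with labels y in {-1, +1}.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_and_hessian(y, F):
    """Per-example first and second derivatives of the logistic loss w.r.t. F."""
    g = -y * sigmoid(-y * F)              # gradient (negative pseudo-residual)
    h = sigmoid(y * F) * sigmoid(-y * F)  # hessian, always in (0, 1/4]
    return g, h

# Toy data: current additive-model scores F and labels y.
rng = np.random.default_rng(0)
y = rng.choice([-1.0, 1.0], size=8)
F = rng.normal(size=8)

g, h = gradient_and_hessian(y, F)

eta = 0.1                         # shrinkage / step size (assumed value)
gradient_step = F - eta * g       # MART-style move along the negative gradient
newton_step = F - eta * g / h     # Newton-style move, as in LogitBoost's leaf fitting

loss = lambda scores: np.log1p(np.exp(-y * scores)).mean()
print(f"before: {loss(F):.4f}  gradient: {loss(gradient_step):.4f}  newton: {loss(newton_step):.4f}")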

Cite this Paper


BibTeX
@InProceedings{pmlr-v32-sunc14,
  title     = {A Convergence Rate Analysis for {LogitBoost}, {MART} and Their Variant},
  author    = {Sun, Peng and Zhang, Tong and Zhou, Jie},
  booktitle = {Proceedings of the 31st International Conference on Machine Learning},
  pages     = {1251--1259},
  year      = {2014},
  editor    = {Xing, Eric P. and Jebara, Tony},
  volume    = {32},
  number    = {2},
  series    = {Proceedings of Machine Learning Research},
  address   = {Beijing, China},
  month     = {22--24 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v32/sunc14.pdf},
  url       = {https://proceedings.mlr.press/v32/sunc14.html}
}
APA
Sun, P., Zhang, T. & Zhou, J. (2014). A Convergence Rate Analysis for LogitBoost, MART and Their Variant. Proceedings of the 31st International Conference on Machine Learning, in Proceedings of Machine Learning Research 32(2):1251-1259. Available from https://proceedings.mlr.press/v32/sunc14.html.
