Deep Boosting

Corinna Cortes, Mehryar Mohri, Umar Syed
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(2):1179-1187, 2014.

Abstract

We present a new ensemble learning algorithm, DeepBoost, which can use as base classifiers a hypothesis set containing deep decision trees, or members of other rich or complex families, and succeed in achieving high accuracy without overfitting the data. The key to the success of the algorithm is a ‘capacity-conscious’ criterion for the selection of the hypotheses. We give new data-dependent learning bounds for convex ensembles expressed in terms of the Rademacher complexities of the sub-families composing the base classifier set, and the mixture weight assigned to each sub-family. Our algorithm directly benefits from these guarantees since it seeks to minimize the corresponding learning bound. We give a full description of our algorithm, including the details of its derivation, and report the results of several experiments showing that its performance compares favorably to that of AdaBoost and Logistic Regression and their L_1-regularized variants.
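
For orientation, the data-dependent guarantee behind the algorithm has, schematically and up to lower-order terms, the following form (see Theorem 1 of the paper for the exact statement and constants): for an ensemble f = \sum_{t=1}^{T} \alpha_t h_t with each base classifier h_t drawn from a sub-family H_{k_t},

    R(f) \le \widehat{R}_{S,\rho}(f) + \frac{4}{\rho} \sum_{t=1}^{T} \alpha_t \, \mathfrak{R}_m\big(H_{k_t}\big) + \tilde{O}\!\left(\sqrt{\frac{\log p}{\rho^2 m}}\right),

where R(f) is the generalization error, \widehat{R}_{S,\rho}(f) the empirical margin loss at margin \rho, \mathfrak{R}_m(H_k) the Rademacher complexity of sub-family H_k, p the number of sub-families, and m the sample size. Complex sub-families such as deep trees are not excluded; they are simply charged a larger complexity price, weighted by the mixture weight they receive.

The Python sketch below illustrates the resulting ‘capacity-conscious’ selection rule under the simplifying assumption that each candidate's sub-family complexity is known. The function name, the penalty weight lam, and the greedy edge-based scoring are illustrative stand-ins for the paper's coordinate-descent update on a regularized convex objective, not the authors' exact procedure.

    import numpy as np

    def select_base_classifier(candidates, complexities, X, y, weights, lam):
        """Pick the candidate whose weighted training edge, discounted by a
        penalty proportional to the Rademacher complexity of its sub-family,
        is largest. Illustrative sketch, not the paper's exact update."""
        best, best_score = None, -np.inf
        for h, r in zip(candidates, complexities):
            preds = np.array([h(x) for x in X])      # predictions in {-1, +1}
            edge = abs(np.dot(weights, y * preds))   # weighted correlation with labels
            score = edge - lam * r                   # complexity-penalized criterion
            if score > best_score:
                best, best_score = h, score
        return best

Setting lam = 0 recovers a plain AdaBoost-style choice: with no complexity charge, the deepest trees almost always win the edge comparison, which is exactly the overfitting behavior the penalty is designed to prevent.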

Cite this Paper

BibTeX
@InProceedings{pmlr-v32-cortesb14,
  title     = {Deep Boosting},
  author    = {Cortes, Corinna and Mohri, Mehryar and Syed, Umar},
  booktitle = {Proceedings of the 31st International Conference on Machine Learning},
  pages     = {1179--1187},
  year      = {2014},
  editor    = {Xing, Eric P. and Jebara, Tony},
  volume    = {32},
  number    = {2},
  series    = {Proceedings of Machine Learning Research},
  address   = {Beijing, China},
  month     = {22--24 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v32/cortesb14.pdf},
  url       = {https://proceedings.mlr.press/v32/cortesb14.html}
}
APA
Cortes, C., Mohri, M. & Syed, U. (2014). Deep Boosting. Proceedings of the 31st International Conference on Machine Learning, in Proceedings of Machine Learning Research 32(2):1179-1187. Available from https://proceedings.mlr.press/v32/cortesb14.html.
