Deep Dynamic Boosted Forest

Haixin Wang, Xingzhang Ren, Jinan Sun, Wei Ye, Long Chen, Muzhi Yu, Shikun Zhang
Proceedings of The 12th Asian Conference on Machine Learning, PMLR 129:257-272, 2020.

Abstract

Random forest is widely used as an ensemble learning method. In many practical applications, however, learning from imbalanced data remains a significant challenge. To alleviate this limitation, we propose the deep dynamic boosted forest (DDBF), a novel ensemble algorithm that incorporates the notion of hard example mining into random forest. Specifically, we measure the quality of each leaf node of every decision tree in the random forest to identify hard examples. By iteratively training and then removing easy examples from the training data, we evolve the random forest to focus dynamically on hard examples, so as to balance the proportion of samples and learn decision boundaries better. Data can then be cascaded through the random forests learned in each iteration to generate more accurate predictions. DDBF outperforms random forest on 5 UCI datasets, MNIST, and SATIMAGE, and achieves state-of-the-art results compared to other deep models. Moreover, we show that DDBF is also a new way of sampling that can be very useful and efficient when learning from imbalanced data.
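As a rough illustration of the idea in the abstract, the sketch below iteratively trains a forest, removes "easy" training examples, and cascades the resulting forests at prediction time. This is not the authors' implementation: it uses scikit-learn's `RandomForestClassifier`, and the paper's per-leaf quality measure is simplified to per-sample prediction confidence. The names `ddbf_sketch`, `ddbf_predict`, and `easy_threshold` are illustrative, not from the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

def ddbf_sketch(X, y, n_layers=3, easy_threshold=0.9, seed=0):
    """Sketch of the DDBF loop: train a forest, mark samples it already
    classifies correctly with high confidence as 'easy' (a simplification
    of the paper's leaf-node quality measure), drop them, and train the
    next forest on the remaining hard examples."""
    forests = []
    Xc, yc = X, y
    for layer in range(n_layers):
        rf = RandomForestClassifier(n_estimators=50, random_state=seed + layer)
        rf.fit(Xc, yc)
        forests.append(rf)
        proba = rf.predict_proba(Xc)
        pred = rf.classes_[np.argmax(proba, axis=1)]
        easy = (pred == yc) & (proba.max(axis=1) >= easy_threshold)
        hard = ~easy
        # Stop if too few hard examples remain, or a class would vanish
        # from the next layer's training set.
        if hard.sum() < 20 or len(np.unique(yc[hard])) < len(rf.classes_):
            break
        Xc, yc = Xc[hard], yc[hard]
    return forests

def ddbf_predict(forests, X):
    """Cascade by averaging the class-probability outputs of all layers."""
    proba = np.mean([f.predict_proba(X) for f in forests], axis=0)
    return forests[0].classes_[np.argmax(proba, axis=1)]

# Imbalanced toy data (90/10 class split), mirroring the paper's setting.
X, y = make_classification(n_samples=500, weights=[0.9, 0.1], random_state=0)
forests = ddbf_sketch(X, y)
pred = ddbf_predict(forests, X)
```

Because easy examples are disproportionately from the majority class, each successive layer trains on a more balanced, boundary-focused subset, which is the re-sampling effect the abstract highlights.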

Cite this Paper


BibTeX
@InProceedings{pmlr-v129-wang20a,
  title     = {Deep Dynamic Boosted Forest},
  author    = {Wang, Haixin and Ren, Xingzhang and Sun, Jinan and Ye, Wei and Chen, Long and Yu, Muzhi and Zhang, Shikun},
  booktitle = {Proceedings of The 12th Asian Conference on Machine Learning},
  pages     = {257--272},
  year      = {2020},
  editor    = {Pan, Sinno Jialin and Sugiyama, Masashi},
  volume    = {129},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--20 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v129/wang20a/wang20a.pdf},
  url       = {https://proceedings.mlr.press/v129/wang20a.html},
  abstract  = {Random forest is widely exploited as an ensemble learning method. In many practical applications, however, there is still a significant challenge to learn from imbalanced data. To alleviate this limitation, we propose a deep dynamic boosted forest (DDBF), a novel ensemble algorithm that incorporates the notion of hard example mining into random forest. Specifically, we propose to measure the quality of each leaf node of every decision tree in the random forest to determine hard examples. By iteratively training and then removing easy examples from training data, we evolve the random forest to focus on hard examples dynamically so as to balance the proportion of samples and learn decision boundaries better. Data can be cascaded through these random forests learned in each iteration in sequence to generate more accurate predictions. Our DDBF outperforms random forest on 5 UCI datasets, MNIST and SATIMAGE, and achieved state-of-the-art results compared to other deep models. Moreover, we show that DDBF is also a new way of sampling and can be very useful and efficient when learning from imbalanced data.}
}
EndNote
%0 Conference Paper
%T Deep Dynamic Boosted Forest
%A Haixin Wang
%A Xingzhang Ren
%A Jinan Sun
%A Wei Ye
%A Long Chen
%A Muzhi Yu
%A Shikun Zhang
%B Proceedings of The 12th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Sinno Jialin Pan
%E Masashi Sugiyama
%F pmlr-v129-wang20a
%I PMLR
%P 257--272
%U https://proceedings.mlr.press/v129/wang20a.html
%V 129
%X Random forest is widely exploited as an ensemble learning method. In many practical applications, however, there is still a significant challenge to learn from imbalanced data. To alleviate this limitation, we propose a deep dynamic boosted forest (DDBF), a novel ensemble algorithm that incorporates the notion of hard example mining into random forest. Specifically, we propose to measure the quality of each leaf node of every decision tree in the random forest to determine hard examples. By iteratively training and then removing easy examples from training data, we evolve the random forest to focus on hard examples dynamically so as to balance the proportion of samples and learn decision boundaries better. Data can be cascaded through these random forests learned in each iteration in sequence to generate more accurate predictions. Our DDBF outperforms random forest on 5 UCI datasets, MNIST and SATIMAGE, and achieved state-of-the-art results compared to other deep models. Moreover, we show that DDBF is also a new way of sampling and can be very useful and efficient when learning from imbalanced data.
APA
Wang, H., Ren, X., Sun, J., Ye, W., Chen, L., Yu, M., & Zhang, S. (2020). Deep Dynamic Boosted Forest. Proceedings of The 12th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 129:257-272. Available from https://proceedings.mlr.press/v129/wang20a.html.

Related Material