Non-Linear Gradient Boosting for Class-Imbalance Learning


Jordan Frery, Amaury Habrard, Marc Sebban, Liyun He-Guelton.
Proceedings of the Second International Workshop on Learning with Imbalanced Domains: Theory and Applications, PMLR 94:38-51, 2018.

Abstract

Gradient boosting relies on linearly combining diverse, weak hypotheses to build a strong classifier. In the class-imbalance setting, boosting algorithms often require many hypotheses, which tend to be more complex and may increase the risk of overfitting. In this paper, we propose to address this issue by adapting the gradient boosting framework to a non-linear setting. To learn the idiosyncrasies of the target concept and prevent the algorithm from being biased toward the majority class, we suggest jointly learning different combinations of the same set of very weak classifiers and expanding the expressiveness of the final model by leveraging their non-linear complementarity. We perform an extensive experimental study using decision trees and show that, while requiring far fewer weak learners of lower complexity (fewer splits per tree), our model outperforms standard linear gradient boosting.
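To make the baseline concrete, the following is a minimal sketch of the standard *linear* gradient boosting the abstract refers to: an additive model built by fitting each new weak learner (here, a decision stump) to the residuals of the current ensemble. The toy dataset, stump learner, and learning rate are illustrative assumptions and not the authors' implementation; the paper's non-linear combination of weak classifiers is not reproduced here.

```python
def fit_stump(X, r):
    """Find the single-feature threshold split minimizing squared error on residuals r."""
    best = None
    for j in range(len(X[0])):
        for t in sorted({x[j] for x in X}):
            left = [r[i] for i, x in enumerate(X) if x[j] <= t]
            right = [r[i] for i, x in enumerate(X) if x[j] > t]
            if not left or not right:
                continue
            lv, rv = sum(left) / len(left), sum(right) / len(right)
            err = sum((ri - lv) ** 2 for ri in left) + sum((ri - rv) ** 2 for ri in right)
            if best is None or err < best[0]:
                best = (err, j, t, lv, rv)
    _, j, t, lv, rv = best
    return lambda x: lv if x[j] <= t else rv

def gradient_boost(X, y, n_rounds=20, lr=0.5):
    """Linear combination F(x) = sum_m lr * h_m(x); each h_m is fit to residuals."""
    stumps = []
    F = [0.0] * len(y)
    for _ in range(n_rounds):
        # Negative gradient of squared loss at the current predictions.
        residuals = [yi - fi for yi, fi in zip(y, F)]
        h = fit_stump(X, residuals)
        stumps.append(h)
        F = [fi + lr * h(x) for fi, x in zip(F, X)]
    return lambda x: sum(lr * h(x) for h in stumps)

# Toy imbalanced binary problem (6 negatives, 2 positives); predict via threshold 0.5.
X = [[0.0], [0.2], [0.4], [0.6], [0.8], [1.0], [1.2], [1.4]]
y = [0, 0, 0, 0, 0, 0, 1, 1]
model = gradient_boost(X, y)
preds = [1 if model(x) > 0.5 else 0 for x in X]
```

With many rounds on a skewed dataset, each stump here is a separate model added to one linear sum; the paper's proposal instead combines the same pool of very weak classifiers non-linearly to gain expressiveness without growing this sum.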
