The Curious Case of Stacking Boosted Relational Dependency Networks

Siwen Yan, Devendra Singh Dhami, Sriraam Natarajan
Proceedings on "I Can't Believe It's Not Better!" at NeurIPS Workshops, PMLR 137:33-42, 2020.

Abstract

Reducing bias during both learning and inference is an important requirement for achieving generalizable, better-performing models. The method of stacking took a first step towards creating such models by reducing inference bias, but the question of combining stacking with a model that reduces learning bias remains largely unanswered. In statistical relational learning, ensemble models of relational trees such as boosted relational dependency networks (RDN-Boost) have been shown to reduce learning bias. We combine RDN-Boost with stacking, aiming to reduce both learning and inference bias and thereby achieve better overall performance. However, our evaluation on three relational data sets shows no significant performance improvement over the baseline models.
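
As a rough illustration of the idea described in the abstract, below is a minimal, hypothetical propositional analogue: stacked generalization (addressing inference bias) over gradient-boosted base learners (addressing learning bias), sketched with scikit-learn on stand-in tabular data. The dataset, estimator choices, and hyperparameters are all illustrative assumptions; this is not the authors' relational RDN-Boost pipeline, which learns boosted relational trees rather than models over feature vectors.

# Minimal sketch of stacking over boosted base learners (propositional
# analogue only; NOT the paper's relational RDN-Boost method).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Stand-in tabular data; the paper instead uses relational data sets.
X, y = make_classification(n_samples=500, random_state=0)

# Base level: boosted tree ensembles, loosely analogous to the boosted
# relational trees of RDN-Boost, which reduce learning bias.
base_learners = [
    ("gb1", GradientBoostingClassifier(n_estimators=50, random_state=0)),
    ("gb2", GradientBoostingClassifier(n_estimators=100, random_state=1)),
]

# Meta level: a learner trained on out-of-fold base predictions
# (stacking), intended to reduce inference bias.
stacked = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(),
    cv=5,  # out-of-fold predictions avoid leaking training labels upward
)

print(cross_val_score(stacked, X, y, cv=5).mean())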

Cite this Paper

BibTeX
@InProceedings{pmlr-v137-yan20a,
  title     = {The Curious Case of Stacking Boosted Relational Dependency Networks},
  author    = {Yan, Siwen and Dhami, Devendra Singh and Natarajan, Sriraam},
  booktitle = {Proceedings on "I Can't Believe It's Not Better!" at NeurIPS Workshops},
  pages     = {33--42},
  year      = {2020},
  editor    = {Zosa Forde, Jessica and Ruiz, Francisco and Pradier, Melanie F. and Schein, Aaron},
  volume    = {137},
  series    = {Proceedings of Machine Learning Research},
  month     = {12 Dec},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v137/yan20a/yan20a.pdf},
  url       = {https://proceedings.mlr.press/v137/yan20a.html}
}
Endnote
%0 Conference Paper
%T The Curious Case of Stacking Boosted Relational Dependency Networks
%A Siwen Yan
%A Devendra Singh Dhami
%A Sriraam Natarajan
%B Proceedings on "I Can't Believe It's Not Better!" at NeurIPS Workshops
%C Proceedings of Machine Learning Research
%D 2020
%E Jessica Zosa Forde
%E Francisco Ruiz
%E Melanie F. Pradier
%E Aaron Schein
%F pmlr-v137-yan20a
%I PMLR
%P 33--42
%U https://proceedings.mlr.press/v137/yan20a.html
%V 137
APA
Yan, S., Dhami, D.S. & Natarajan, S. (2020). The Curious Case of Stacking Boosted Relational Dependency Networks. Proceedings on "I Can't Believe It's Not Better!" at NeurIPS Workshops, in Proceedings of Machine Learning Research 137:33-42. Available from https://proceedings.mlr.press/v137/yan20a.html.
