Learning Multiple Relational Rule-based Models

Kamal M. Ali, Clifford Brunk, Michael J. Pazzani
Pre-proceedings of the Fifth International Workshop on Artificial Intelligence and Statistics, PMLR R0:8-14, 1995.

Abstract

We present a method for learning multiple relational models for each class in the data. Bayesian probability theory offers an optimal strategy for combining classifications of the individual concept descriptions. Here we use a tractable approximation to that theory. Previous work in learning multiple models has been in the attribute-value realm. We show that stochastically learning multiple relational (first-order) models consisting of a ruleset for each class also yields gains in accuracy when compared to the accuracy of a single deterministically learned relational model. In addition we show that learning multiple models is most helpful when the hypothesis space is "flat" with respect to the gain metric used in learning.
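The combination strategy the abstract alludes to can be illustrated with a small sketch. This is not the paper's exact approximation, just the general idea of a Bayesian-style weighted vote: each learned model casts a class prediction, weighted by an estimate of its posterior probability (here crudely approximated by its training accuracy).

```python
# Illustrative sketch (not the paper's exact method): combining the class
# predictions of several independently learned rule-based models with a
# Bayesian-style weighted vote. Each model's weight approximates its
# posterior probability given the data, estimated here from its accuracy.

def combine_predictions(predictions, accuracies):
    """predictions: list of class labels, one per model.
    accuracies: matching list of each model's estimated accuracy in (0, 1).
    Returns the class label with the highest total weight."""
    weights = {}
    for label, acc in zip(predictions, accuracies):
        weights[label] = weights.get(label, 0.0) + acc
    return max(weights, key=weights.get)

# Three models vote; the two agreeing models outweigh the single dissenter.
print(combine_predictions(["pos", "pos", "neg"], [0.7, 0.6, 0.9]))  # -> pos
```

Learning the multiple models stochastically (rather than one deterministic ruleset per class) is what produces the diversity that makes such a combination pay off.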

Cite this Paper


BibTeX
@InProceedings{pmlr-vR0-ali95a,
  title     = {Learning Multiple Relational Rule-based Models},
  author    = {Ali, Kamal M. and Brunk, Clifford and Pazzani, Michael J.},
  booktitle = {Pre-proceedings of the Fifth International Workshop on Artificial Intelligence and Statistics},
  pages     = {8--14},
  year      = {1995},
  editor    = {Fisher, Doug and Lenz, Hans-Joachim},
  volume    = {R0},
  series    = {Proceedings of Machine Learning Research},
  month     = {04--07 Jan},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/r0/ali95a/ali95a.pdf},
  url       = {https://proceedings.mlr.press/r0/ali95a.html},
  abstract  = {We present a method for learning multiple relational models for each class in the data. Bayesian probability theory offers an optimal strategy for combining classifications of the individual concept descriptions. Here we use a tractable approximation to that theory. Previous work in learning multiple models has been in the attribute-value realm. We show that stochastically learning multiple relational (first-order) models consisting of a ruleset for each class also yields gains in accuracy when compared to the accuracy of a single deterministically learned relational model. In addition we show that learning multiple models is most helpful when the hypothesis space is "flat" with respect to the gain metric used in learning.},
  note      = {Reissued by PMLR on 01 May 2022.}
}
Endnote
%0 Conference Paper
%T Learning Multiple Relational Rule-based Models
%A Kamal M. Ali
%A Clifford Brunk
%A Michael J. Pazzani
%B Pre-proceedings of the Fifth International Workshop on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 1995
%E Doug Fisher
%E Hans-Joachim Lenz
%F pmlr-vR0-ali95a
%I PMLR
%P 8--14
%U https://proceedings.mlr.press/r0/ali95a.html
%V R0
%X We present a method for learning multiple relational models for each class in the data. Bayesian probability theory offers an optimal strategy for combining classifications of the individual concept descriptions. Here we use a tractable approximation to that theory. Previous work in learning multiple models has been in the attribute-value realm. We show that stochastically learning multiple relational (first-order) models consisting of a ruleset for each class also yields gains in accuracy when compared to the accuracy of a single deterministically learned relational model. In addition we show that learning multiple models is most helpful when the hypothesis space is "flat" with respect to the gain metric used in learning.
%Z Reissued by PMLR on 01 May 2022.
APA
Ali, K.M., Brunk, C. & Pazzani, M.J. (1995). Learning Multiple Relational Rule-based Models. Pre-proceedings of the Fifth International Workshop on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research R0:8-14. Available from https://proceedings.mlr.press/r0/ali95a.html. Reissued by PMLR on 01 May 2022.