Improving Predictive Specificity of Description Logic Learners by Fortification

An Tran, Jens Dietrich, Hans Guesgen, Stephen Marsland
Proceedings of the 5th Asian Conference on Machine Learning, PMLR 29:419-434, 2013.

Abstract

The predictive accuracy of a learning algorithm can be split into specificity and sensitivity, amongst other decompositions. Sensitivity, also known as completeness, is the ratio of true positives to the total number of positive examples, while specificity is the ratio of true negatives to the total number of negative examples. In top-down learning methods of inductive logic programming, there is generally a bias towards sensitivity, since the learning starts from the most general rule (everything is positive) and specialises by excluding some of the negative examples. While this is often useful, it is not always the best choice: for example, in novelty detection, where the negative examples are rare and often varied, they may well be ignored by the learning. In this paper we introduce a method that attempts to remove the bias towards sensitivity by fortifying the model: computing descriptions of the negative data and including them in the model even if the normal learning algorithm considers them redundant. We demonstrate the method on a set of standard datasets for description logic learning and show that the predictive accuracy increases.
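The sensitivity/specificity decomposition used in the abstract can be stated directly in confusion-matrix terms. The Python sketch below is an illustration added here, not code from the paper; the count names tp, fn, tn, fp and the example numbers are hypothetical.

# A minimal sketch (not from the paper): sensitivity, specificity and accuracy
# computed from confusion-matrix counts. Variable names are illustrative.

def sensitivity(tp: int, fn: int) -> float:
    """True positives over all positive examples (a.k.a. completeness/recall)."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negatives over all negative examples."""
    return tn / (tn + fp)

def accuracy(tp: int, fn: int, tn: int, fp: int) -> float:
    """Overall predictive accuracy over all examples."""
    return (tp + tn) / (tp + fn + tn + fp)

if __name__ == "__main__":
    # Hypothetical counts with rare negatives: a model that labels everything
    # positive still looks accurate, although its specificity is zero.
    tp, fn, tn, fp = 95, 0, 0, 5
    print(sensitivity(tp, fn))       # 1.0
    print(specificity(tn, fp))       # 0.0
    print(accuracy(tp, fn, tn, fp))  # 0.95

The example mirrors the failure mode the paper targets: when negatives are rare, a sensitivity-biased learner can report high accuracy with zero specificity, which is the imbalance that fortification is intended to counteract.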

Cite this Paper


BibTeX
@InProceedings{pmlr-v29-Tran13,
  title     = {Improving Predictive Specificity of Description Logic Learners by Fortification},
  author    = {Tran, An and Dietrich, Jens and Guesgen, Hans and Marsland, Stephen},
  booktitle = {Proceedings of the 5th Asian Conference on Machine Learning},
  pages     = {419--434},
  year      = {2013},
  editor    = {Ong, Cheng Soon and Ho, Tu Bao},
  volume    = {29},
  series    = {Proceedings of Machine Learning Research},
  address   = {Australian National University, Canberra, Australia},
  month     = {13--15 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v29/Tran13.pdf},
  url       = {https://proceedings.mlr.press/v29/Tran13.html},
  abstract  = {The predictive accuracy of a learning algorithm can be split into specificity and sensitivity, amongst other decompositions. Sensitivity, also known as completeness, is the ratio of true positives to the total number of positive examples, while specificity is the ratio of true negative to the total negative examples. In top-down learning methods of inductive logic programming, there is generally a bias towards sensitivity, since the learning starts from the most general rule (everything is positive) and specialises by excluding some of the negative examples. While this is often useful, it is not always the best choice: for example, in novelty detection, where the negative examples are rare and often varied, they may well be ignored by the learning. In this paper we introduce a method that attempts to remove the bias towards sensitivity by fortifying the model by computing and then including in the model some descriptions of the negative data even if they are considered redundant by the normal learning algorithm. We demonstrate the method on a set of standard datasets for description logic learning and show that the predictive accuracy increases.}
}
Endnote
%0 Conference Paper
%T Improving Predictive Specificity of Description Logic Learners by Fortification
%A An Tran
%A Jens Dietrich
%A Hans Guesgen
%A Stephen Marsland
%B Proceedings of the 5th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Cheng Soon Ong
%E Tu Bao Ho
%F pmlr-v29-Tran13
%I PMLR
%P 419--434
%U https://proceedings.mlr.press/v29/Tran13.html
%V 29
%X The predictive accuracy of a learning algorithm can be split into specificity and sensitivity, amongst other decompositions. Sensitivity, also known as completeness, is the ratio of true positives to the total number of positive examples, while specificity is the ratio of true negative to the total negative examples. In top-down learning methods of inductive logic programming, there is generally a bias towards sensitivity, since the learning starts from the most general rule (everything is positive) and specialises by excluding some of the negative examples. While this is often useful, it is not always the best choice: for example, in novelty detection, where the negative examples are rare and often varied, they may well be ignored by the learning. In this paper we introduce a method that attempts to remove the bias towards sensitivity by fortifying the model by computing and then including in the model some descriptions of the negative data even if they are considered redundant by the normal learning algorithm. We demonstrate the method on a set of standard datasets for description logic learning and show that the predictive accuracy increases.
RIS
TY - CPAPER
TI - Improving Predictive Specificity of Description Logic Learners by Fortification
AU - An Tran
AU - Jens Dietrich
AU - Hans Guesgen
AU - Stephen Marsland
BT - Proceedings of the 5th Asian Conference on Machine Learning
DA - 2013/10/21
ED - Cheng Soon Ong
ED - Tu Bao Ho
ID - pmlr-v29-Tran13
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 29
SP - 419
EP - 434
L1 - http://proceedings.mlr.press/v29/Tran13.pdf
UR - https://proceedings.mlr.press/v29/Tran13.html
AB - The predictive accuracy of a learning algorithm can be split into specificity and sensitivity, amongst other decompositions. Sensitivity, also known as completeness, is the ratio of true positives to the total number of positive examples, while specificity is the ratio of true negative to the total negative examples. In top-down learning methods of inductive logic programming, there is generally a bias towards sensitivity, since the learning starts from the most general rule (everything is positive) and specialises by excluding some of the negative examples. While this is often useful, it is not always the best choice: for example, in novelty detection, where the negative examples are rare and often varied, they may well be ignored by the learning. In this paper we introduce a method that attempts to remove the bias towards sensitivity by fortifying the model by computing and then including in the model some descriptions of the negative data even if they are considered redundant by the normal learning algorithm. We demonstrate the method on a set of standard datasets for description logic learning and show that the predictive accuracy increases.
ER -
APA
Tran, A., Dietrich, J., Guesgen, H. & Marsland, S. (2013). Improving Predictive Specificity of Description Logic Learners by Fortification. Proceedings of the 5th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 29:419-434. Available from https://proceedings.mlr.press/v29/Tran13.html.