Learning with Domain Knowledge to Develop Justifiable Convolutional Networks

Rimmon Bhosale, Mrinal Das
Proceedings of The 14th Asian Conference on Machine Learning, PMLR 189:64-79, 2023.

Abstract

The inherent structure of convolutional neural networks (CNNs) allows them to extract features that are highly correlated with the classes during image classification. However, the extracted features may be merely coincidental and not justifiable from a human perspective. For example, from a set of images of cows on grassland, a CNN can erroneously extract grass as the defining feature of the class cow. This kind of learning has two main limitations: first, in many false-negative cases the correct features are never used, and second, in false-positive cases the system lacks accountability. There is no built-in way to instruct a CNN to learn features that are justifiable from a human perspective and thereby resolve these issues. In this paper, we argue that if domain knowledge is provided to guide the learning process of a CNN, justifiable features can be learned reliably. We propose a systematic yet simple mechanism that incorporates domain knowledge into CNN training so that justifiable features are extracted. The flip side is that it requires additional input; however, we show that even with minimal additional input our method can effectively propagate the knowledge within a class during training. We demonstrate that justifiable features not only improve accuracy but also require less data and training time. Moreover, we show that the proposed method is more robust to perturbations of the input images.
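
The abstract does not detail the mechanism itself, but the general idea of steering a CNN away from coincidental cues (such as grass for the class cow) using a small amount of extra human input can be illustrated with a heavily simplified sketch. The code below is an assumption-laden illustration, not the authors' method: it supposes that annotators mark the relevant region of some training images with binary masks and penalizes input-gradient saliency that falls outside those masks, in the spirit of "right for the right reasons" style losses. All names (justifiable_loss, masks, lam) and shapes are hypothetical.

# Illustrative sketch only -- not the method from the paper. It shows one
# generic way to let human-provided relevance masks (domain knowledge) guide
# which image regions a CNN relies on, via a penalty on input-gradient
# saliency outside the annotated region.
import torch
import torch.nn.functional as F
import torchvision

def justifiable_loss(model, images, labels, masks, lam=1.0):
    """Cross-entropy plus a penalty on saliency outside the human-annotated
    relevant region (mask == 1 means 'relevant', mask == 0 means 'irrelevant')."""
    images = images.clone().requires_grad_(True)
    logits = model(images)
    ce = F.cross_entropy(logits, labels)

    # Saliency: gradient of the true-class log-probabilities w.r.t. the input.
    log_probs = F.log_softmax(logits, dim=1)
    selected = log_probs.gather(1, labels.unsqueeze(1)).sum()
    grads, = torch.autograd.grad(selected, images, create_graph=True)

    # Penalize gradient magnitude in regions the annotator marked irrelevant.
    irrelevant = 1.0 - masks                      # (N, 1, H, W), broadcasts over channels
    penalty = (grads.pow(2) * irrelevant).mean()
    return ce + lam * penalty

# Usage sketch (model, shapes, and mask source are placeholders):
model = torchvision.models.resnet18(num_classes=10)
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, 10, (4,))
masks = torch.ones(4, 1, 224, 224)                # placeholder: every pixel marked relevant
loss = justifiable_loss(model, images, labels, masks)
loss.backward()

In practice only a small subset of images would carry masks, consistent with the abstract's point that minimal additional input suffices and that knowledge can be propagated within a class during training.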

Cite this Paper


BibTeX
@InProceedings{pmlr-v189-bhosale23a,
  title     = {Learning with Domain Knowledge to Develop Justifiable Convolutional Networks},
  author    = {Bhosale, Rimmon and Das, Mrinal},
  booktitle = {Proceedings of The 14th Asian Conference on Machine Learning},
  pages     = {64--79},
  year      = {2023},
  editor    = {Khan, Emtiyaz and Gonen, Mehmet},
  volume    = {189},
  series    = {Proceedings of Machine Learning Research},
  month     = {12--14 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v189/bhosale23a/bhosale23a.pdf},
  url       = {https://proceedings.mlr.press/v189/bhosale23a.html}
}
APA
Bhosale, R. & Das, M. (2023). Learning with Domain Knowledge to Develop Justifiable Convolutional Networks. Proceedings of The 14th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 189:64-79. Available from https://proceedings.mlr.press/v189/bhosale23a.html.