Hybrid Concept-based Models: Using Concepts to Improve Neural Networks’ Accuracy
Proceedings of the 7th Northern Lights Deep Learning Conference (NLDL), PMLR 307:307-318, 2026.
Abstract
Most datasets used for supervised machine learning provide a single label per data point. However, when more information than the class label is available, can models be trained more efficiently? We introduce two novel model architectures, which we call \emph{hybrid concept-based models}, that train on both class labels and additional annotations in the dataset referred to as \emph{concepts}. To assess their performance thoroughly, we introduce \emph{ConceptShapes}, an open and flexible class of datasets with concept labels. We show that the hybrid concept-based models can outperform standard computer vision models and previously proposed concept-based models in terms of accuracy. We also introduce an algorithm for performing \emph{adversarial concept attacks}, where an image is perturbed in a way that leaves a concept-based model's concept predictions unchanged but alters its class prediction. The existence of such adversarial examples raises questions about the interpretable qualities promised by concept-based models.
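The paper's adversarial concept attack operates on images and trained neural networks, and its actual algorithm is not reproduced here. As a minimal sketch of the underlying idea only, the toy example below uses a linear concept head `W_c` and a linear class head `W_y` (both hypothetical stand-ins, not the paper's models): perturbing the input along a direction in the null space of `W_c` leaves every concept logit exactly unchanged while still allowing the class prediction to flip.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, m = 20, 3, 4          # input dim, #concepts, #classes (toy sizes, assumed)
W_c = rng.normal(size=(k, d))  # concept head (stand-in for a trained model)
W_y = rng.normal(size=(m, d))  # class head (stand-in)
x = rng.normal(size=d)         # "image" as a flat vector

def concept_logits(v):
    return W_c @ v

def class_pred(v):
    return int(np.argmax(W_y @ v))

orig = class_pred(x)
target = (orig + 1) % m

# Ascent direction for the (target - original) class-logit gap ...
g = W_y[target] - W_y[orig]
# ... projected onto null(W_c), so W_c @ delta == 0 and the
# concept predictions cannot change, no matter the step size.
P = np.eye(d) - np.linalg.pinv(W_c) @ W_c
delta = P @ g

# Grow the step until the class prediction flips; the projected
# gap g @ delta = ||P g||^2 > 0 guarantees this terminates.
eps = 1e-3
while class_pred(x + eps * delta) == orig:
    eps *= 2.0
x_adv = x + eps * delta
```

For deep networks there is no exact null-space projection, so an attack like the paper's would instead perturb iteratively while penalizing any drift in the concept outputs; the linear case above just makes the "same concepts, different class" phenomenon easy to verify.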