Stacked-MLkNN: A stacking based improvement to Multi-Label k-Nearest Neighbours

Arjun Pakrashi, Brian Mac Namee
Proceedings of the First International Workshop on Learning with Imbalanced Domains: Theory and Applications, PMLR 74:51-63, 2017.

Abstract

Multi-label classification deals with problems where each datapoint can be assigned to more than one class, or label, at the same time. The simplest approach to such problems is to train an independent binary classification model for each label and use these models to independently predict the set of relevant labels for a datapoint. MLkNN is an instance-based lazy learning algorithm for multi-label classification that takes this approach. MLkNN and similar algorithms, however, do not exploit associations that may exist between the labels. These methods also suffer from imbalance in the frequencies of labels in a training dataset. This work attempts to improve the predictions of MLkNN with a two-layer stack-like method, Stacked-MLkNN, which exploits these label associations. Experiments show that Stacked-MLkNN produces better predictions than MLkNN and several other state-of-the-art instance-based learning algorithms.
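The two-layer idea described above can be sketched in plain NumPy. This is an illustrative simplification, not the paper's exact method: the first layer makes an independent kNN prediction per label (binary relevance), and the second layer re-runs kNN on the original features augmented with all first-layer label predictions, so each label's model can see the others. The function names and the use of simple majority-vote kNN (rather than MLkNN's MAP estimation with label priors) are assumptions for the sketch.

```python
import numpy as np

def knn_predict(X_train, y, X_query, k=3):
    """Majority-vote binary kNN (Euclidean distance) for one label."""
    preds = []
    for q in X_query:
        d = np.linalg.norm(X_train - q, axis=1)
        idx = np.argsort(d)[:k]
        preds.append(int(y[idx].mean() >= 0.5))
    return np.array(preds)

def stacked_knn_fit_predict(X_train, Y_train, X_test, k=3):
    """Two-layer stacked multi-label kNN sketch.

    Layer 1: one independent binary kNN per label (binary relevance).
    Layer 2: kNN on features augmented with all layer-1 label outputs,
    letting each label's prediction exploit label associations.
    (A careful implementation would use held-out layer-1 predictions
    on the training set to avoid leakage; omitted here for brevity.)
    """
    n_labels = Y_train.shape[1]
    layer1_train = np.column_stack(
        [knn_predict(X_train, Y_train[:, j], X_train, k) for j in range(n_labels)])
    layer1_test = np.column_stack(
        [knn_predict(X_train, Y_train[:, j], X_test, k) for j in range(n_labels)])
    # Augment the feature space with the first-layer label predictions.
    X2_train = np.hstack([X_train, layer1_train])
    X2_test = np.hstack([X_test, layer1_test])
    return np.column_stack(
        [knn_predict(X2_train, Y_train[:, j], X2_test, k) for j in range(n_labels)])

# Toy usage: two well-separated clusters, each with its own label set.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1],
              [5, 5], [5, 6], [6, 5], [6, 6]], dtype=float)
Y = np.array([[1, 0]] * 4 + [[0, 1]] * 4)
P = stacked_knn_fit_predict(X, Y, np.array([[0.2, 0.2], [5.5, 5.5]]), k=3)
print(P)  # one row of 0/1 label predictions per query point
```

The design point the sketch illustrates is that the second layer is just another instance-based learner; the only change is the augmented input space, which is what lets it pick up co-occurrence patterns between labels.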
