Stacked-MLkNN: A stacking based improvement to Multi-Label k-Nearest Neighbours

Arjun Pakrashi, Brian Mac Namee
Proceedings of the First International Workshop on Learning with Imbalanced Domains: Theory and Applications, PMLR 74:51-63, 2017.

Abstract

Multi-label classification deals with problems where each datapoint can be assigned to more than one class, or label, at the same time. The simplest approach for such problems is to train independent binary classification models for each label and use these models to independently predict a set of relevant labels for a datapoint. MLkNN is an instance-based lazy learning algorithm for multi-label classification that takes this approach. MLkNN, and similar algorithms, however, do not exploit associations which may exist between the set of potential labels. These methods also suffer from imbalance in the frequency of labels in a training dataset. This work attempts to improve the predictions of MLkNN by implementing a two-layer stack-like method, Stacked-MLkNN, which exploits the label associations. Experiments show that Stacked-MLkNN produces better predictions than MLkNN and several other state-of-the-art instance-based learning algorithms.
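
The following is a minimal, illustrative sketch of the two-layer stacking idea described in the abstract. It is not the paper's exact algorithm: for simplicity it uses ordinary per-label k-nearest-neighbour classifiers (binary relevance with scikit-learn's KNeighborsClassifier) as a stand-in for MLkNN at both layers, and it runs on a synthetic dataset. The first layer predicts each label independently; the second layer is trained on the original features augmented with the first layer's label scores, so that each label's model can exploit associations with the other labels.

    import numpy as np
    from sklearn.datasets import make_multilabel_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier


    def fit_knn_per_label(X, Y, k=10):
        # Train one binary kNN model per label (a simple stand-in for MLkNN).
        return [KNeighborsClassifier(n_neighbors=k).fit(X, Y[:, j])
                for j in range(Y.shape[1])]


    def predict_scores(models, X):
        # Return an (n_samples, n_labels) matrix of estimated P(label = 1).
        cols = []
        for m in models:
            proba = m.predict_proba(X)
            if proba.shape[1] == 2:
                cols.append(proba[:, 1])
            else:
                # The label was constant in the training split.
                cols.append(np.full(X.shape[0], float(m.classes_[0])))
        return np.column_stack(cols)


    # Synthetic multi-label data purely for illustration.
    X, Y = make_multilabel_classification(n_samples=400, n_features=20,
                                          n_classes=5, random_state=0)
    X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25,
                                              random_state=0)

    # Layer 1: independent per-label models; no label associations are used.
    layer1 = fit_knn_per_label(X_tr, Y_tr)
    S_tr = predict_scores(layer1, X_tr)
    S_te = predict_scores(layer1, X_te)

    # Layer 2: augment the inputs with the first-layer label scores so each
    # label's model can exploit associations between labels.
    layer2 = fit_knn_per_label(np.hstack([X_tr, S_tr]), Y_tr)
    Y_hat = predict_scores(layer2, np.hstack([X_te, S_te])) >= 0.5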

Cite this Paper


BibTeX
@InProceedings{pmlr-v74-pakrashi17a,
  title     = {Stacked-MLkNN: A stacking based improvement to Multi-Label k-Nearest Neighbours},
  author    = {Pakrashi, Arjun and Mac Namee, Brian},
  booktitle = {Proceedings of the First International Workshop on Learning with Imbalanced Domains: Theory and Applications},
  pages     = {51--63},
  year      = {2017},
  editor    = {Torgo, Luís and Branco, Paula and Moniz, Nuno},
  volume    = {74},
  series    = {Proceedings of Machine Learning Research},
  month     = {22 Sep},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v74/pakrashi17a/pakrashi17a.pdf},
  url       = {https://proceedings.mlr.press/v74/pakrashi17a.html},
  abstract  = {Multi-label classification deals with problems where each datapoint can be assigned to more than one class, or label, at the same time. The simplest approach for such problems is to train independent binary classification models for each label and use these models to independently predict a set of relevant labels for a datapoint. MLkNN is an instance-based lazy learning algorithm for multi-label classification that takes this approach. MLkNN, and similar algorithms, however, do not exploit associations which may exist between the set of potential labels. These methods also suffer from imbalance in the frequency of labels in a training dataset. This work attempts to improve the predictions of MLkNN by implementing a two-layer stack-like method, Stacked-MLkNN, which exploits the label associations. Experiments show that Stacked-MLkNN produces better predictions than MLkNN and several other state-of-the-art instance-based learning algorithms.}
}
Endnote
%0 Conference Paper
%T Stacked-MLkNN: A stacking based improvement to Multi-Label k-Nearest Neighbours
%A Arjun Pakrashi
%A Brian Mac Namee
%B Proceedings of the First International Workshop on Learning with Imbalanced Domains: Theory and Applications
%C Proceedings of Machine Learning Research
%D 2017
%E Luís Torgo
%E Paula Branco
%E Nuno Moniz
%F pmlr-v74-pakrashi17a
%I PMLR
%P 51--63
%U https://proceedings.mlr.press/v74/pakrashi17a.html
%V 74
%X Multi-label classification deals with problems where each datapoint can be assigned to more than one class, or label, at the same time. The simplest approach for such problems is to train independent binary classification models for each label and use these models to independently predict a set of relevant labels for a datapoint. MLkNN is an instance-based lazy learning algorithm for multi-label classification that takes this approach. MLkNN, and similar algorithms, however, do not exploit associations which may exist between the set of potential labels. These methods also suffer from imbalance in the frequency of labels in a training dataset. This work attempts to improve the predictions of MLkNN by implementing a two-layer stack-like method, Stacked-MLkNN, which exploits the label associations. Experiments show that Stacked-MLkNN produces better predictions than MLkNN and several other state-of-the-art instance-based learning algorithms.
APA
Pakrashi, A. & Mac Namee, B. (2017). Stacked-MLkNN: A stacking based improvement to Multi-Label k-Nearest Neighbours. Proceedings of the First International Workshop on Learning with Imbalanced Domains: Theory and Applications, in Proceedings of Machine Learning Research 74:51-63. Available from https://proceedings.mlr.press/v74/pakrashi17a.html.
