A Study on Affect Model Validity: Nominal vs Ordinal Labels

David Melhart, Konstantinos Sfikas, Giorgos Giannakakis, Georgios Yannakakis, Antonios Liapis
Proceedings of IJCAI 2018 2nd Workshop on Artificial Intelligence in Affective Computing, PMLR 86:27-34, 2020.

Abstract

The question of representing emotion computationally remains largely unanswered: popular approaches require annotators to assign a magnitude (or a class) of some emotional dimension, while an alternative is to focus on the relationship between two or more options. Recent evidence in affective computing suggests that following a methodology of ordinal annotations and processing leads to better reliability and validity of the model. This paper compares the generality of classification methods versus preference learning methods in predicting the levels of arousal in two widely used affective datasets. Findings of this initial study further validate the hypothesis that approaching affect labels as ordinal data and building models via preference learning yields models of better validity.
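The core contrast in the abstract, assigning an absolute magnitude per item versus recording only the relative order between items, can be sketched as a small data-preparation step. The function below is purely illustrative (its name, the example ratings, and the tie `margin` are assumptions, not the authors' pipeline): it converts per-item arousal ratings into the pairwise preferences that a preference learning method would train on.

```python
# Illustrative sketch only: turning magnitude-style arousal annotations into
# ordinal (pairwise preference) training data, as the abstract contrasts.
# Names, values, and the tie margin are assumptions, not the paper's method.

from itertools import combinations

def to_preferences(ratings, margin=0.1):
    """Convert per-item arousal ratings into ordered pairs (i, j), meaning
    'item i shows higher arousal than item j'. Pairs whose ratings differ
    by less than `margin` are treated as ties and skipped."""
    pairs = []
    for (i, ri), (j, rj) in combinations(enumerate(ratings), 2):
        if ri - rj > margin:
            pairs.append((i, j))
        elif rj - ri > margin:
            pairs.append((j, i))
    return pairs

ratings = [0.9, 0.2, 0.55, 0.5]
print(to_preferences(ratings))  # → [(0, 1), (0, 2), (0, 3), (2, 1), (3, 1)]
```

Note that the last two ratings (0.55 vs 0.5) produce no pair: their difference falls below the margin, so the ordinal representation records no preference between them, whereas a nominal/interval label would force a (possibly unreliable) distinction.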

Cite this Paper


BibTeX
@InProceedings{pmlr-v86-melhart20a,
  title     = {A Study on Affect Model Validity: Nominal vs Ordinal Labels},
  author    = {Melhart, David and Sfikas, Konstantinos and Giannakakis, Giorgos and Yannakakis, Georgios and Liapis, Antonios},
  booktitle = {Proceedings of IJCAI 2018 2nd Workshop on Artificial Intelligence in Affective Computing},
  pages     = {27--34},
  year      = {2020},
  editor    = {Hsu, William and Yates, Heath},
  volume    = {86},
  series    = {Proceedings of Machine Learning Research},
  month     = {15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v86/melhart20a/melhart20a.pdf},
  url       = {http://proceedings.mlr.press/v86/melhart20a.html},
  abstract  = {The question of representing emotion computationally remains largely unanswered: popular approaches require annotators to assign a magnitude (or a class) of some emotional dimension, while an alternative is to focus on the relationship between two or more options. Recent evidence in affective computing suggests that following a methodology of ordinal annotations and processing leads to better reliability and validity of the model. This paper compares the generality of classification methods versus preference learning methods in predicting the levels of arousal in two widely used affective datasets. Findings of this initial study further validate the hypothesis that approaching affect labels as ordinal data and building models via preference learning yields models of better validity.}
}
Endnote
%0 Conference Paper
%T A Study on Affect Model Validity: Nominal vs Ordinal Labels
%A David Melhart
%A Konstantinos Sfikas
%A Giorgos Giannakakis
%A Georgios Yannakakis
%A Antonios Liapis
%B Proceedings of IJCAI 2018 2nd Workshop on Artificial Intelligence in Affective Computing
%C Proceedings of Machine Learning Research
%D 2020
%E William Hsu
%E Heath Yates
%F pmlr-v86-melhart20a
%I PMLR
%P 27--34
%U http://proceedings.mlr.press/v86/melhart20a.html
%V 86
%X The question of representing emotion computationally remains largely unanswered: popular approaches require annotators to assign a magnitude (or a class) of some emotional dimension, while an alternative is to focus on the relationship between two or more options. Recent evidence in affective computing suggests that following a methodology of ordinal annotations and processing leads to better reliability and validity of the model. This paper compares the generality of classification methods versus preference learning methods in predicting the levels of arousal in two widely used affective datasets. Findings of this initial study further validate the hypothesis that approaching affect labels as ordinal data and building models via preference learning yields models of better validity.
APA
Melhart, D., Sfikas, K., Giannakakis, G., Yannakakis, G. & Liapis, A. (2020). A Study on Affect Model Validity: Nominal vs Ordinal Labels. Proceedings of IJCAI 2018 2nd Workshop on Artificial Intelligence in Affective Computing, in Proceedings of Machine Learning Research 86:27-34. Available from http://proceedings.mlr.press/v86/melhart20a.html.