False Discovery Rate Control and Statistical Quality Assessment of Annotators in Crowdsourced Ranking

QianQian Xu, Jiechao Xiong, Xiaochun Cao, Yuan Yao
Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:1282-1291, 2016.

Abstract

With the rapid growth of crowdsourcing platforms, it has become easy and relatively inexpensive to collect a dataset labeled by multiple annotators in a short time. However, due to the lack of control over annotator quality, some abnormal annotators may be affected by position bias, which can degrade the quality of the final consensus labels. In this paper we introduce a statistical framework to model and detect annotators' position bias while controlling the false discovery rate (FDR) without prior knowledge of the number of biased annotators: the expected fraction of false discoveries among all discoveries is kept low, so that most of the discoveries are indeed true and replicable. The key technical development relies on new knockoff filters adapted to our problem and on new algorithms based on Inverse Scale Space dynamics, whose discretization is potentially suitable for large-scale crowdsourcing data analysis. Our studies are supported by experiments on both simulated examples and real-world data. The proposed framework provides a useful tool for quantitatively studying annotators' abnormal behavior in crowdsourcing.
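To make the FDR-control idea concrete: knockoff methods assign each candidate (here, each annotator) a signed statistic W_j, where large positive values suggest genuine position bias and sign-symmetric noise governs the nulls, then pick a data-dependent threshold. The sketch below shows only the generic knockoff+ thresholding rule in the style of Barber and Candès, not the paper's adapted filters or its Inverse Scale Space algorithm; the W values are made-up illustrative numbers.

```python
def knockoff_plus_threshold(W, q):
    """Return the knockoff+ selection threshold for target FDR level q.

    Picks the smallest t among |W_j| such that the estimated false
    discovery proportion (1 + #{j: W_j <= -t}) / max(1, #{j: W_j >= t})
    is at most q; returns infinity (select nothing) if no t qualifies.
    """
    for t in sorted(abs(w) for w in W if w != 0):
        neg = sum(1 for w in W if w <= -t)   # proxy count of false leads
        pos = sum(1 for w in W if w >= t)    # number of selections at level t
        if (1 + neg) / max(1, pos) <= q:
            return t
    return float("inf")

# Hypothetical statistics for 10 annotators; positives suggest position bias.
W = [5.0, 4.2, 3.7, -0.5, 0.8, -1.1, 2.9, -0.2, 0.1, 3.3]
t = knockoff_plus_threshold(W, q=0.3)
selected = [j for j, w in enumerate(W) if w >= t]  # flagged annotators
```

With these numbers the rule settles on t = 2.9 and flags annotators 0, 1, 2, 6, and 9; raising q admits more discoveries at the cost of a weaker guarantee.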

Cite this Paper


BibTeX
@InProceedings{pmlr-v48-xua16,
  title     = {False Discovery Rate Control and Statistical Quality Assessment of Annotators in Crowdsourced Ranking},
  author    = {Xu, QianQian and Xiong, Jiechao and Cao, Xiaochun and Yao, Yuan},
  booktitle = {Proceedings of The 33rd International Conference on Machine Learning},
  pages     = {1282--1291},
  year      = {2016},
  editor    = {Balcan, Maria Florina and Weinberger, Kilian Q.},
  volume    = {48},
  series    = {Proceedings of Machine Learning Research},
  address   = {New York, New York, USA},
  month     = {20--22 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v48/xua16.pdf},
  url       = {https://proceedings.mlr.press/v48/xua16.html},
  abstract  = {With the rapid growth of crowdsourcing platforms it has become easy and relatively inexpensive to collect a dataset labeled by multiple annotators in a short time. However due to the lack of control over the quality of the annotators, some abnormal annotators may be affected by position bias which can potentially degrade the quality of the final consensus labels. In this paper we introduce a statistical framework to model and detect annotator’s position bias in order to control the false discovery rate (FDR) without a prior knowledge on the amount of biased annotators–the expected fraction of false discoveries among all discoveries being not too high, in order to assure that most of the discoveries are indeed true and replicable. The key technical development relies on some new knockoff filters adapted to our problem and new algorithms based on the Inverse Scale Space dynamics whose discretization is potentially suitable for large scale crowdsourcing data analysis. Our studies are supported by experiments with both simulated examples and real-world data. The proposed framework provides us a useful tool for quantitatively studying annotator’s abnormal behavior in crowdsourcing.}
}
Endnote
%0 Conference Paper
%T False Discovery Rate Control and Statistical Quality Assessment of Annotators in Crowdsourced Ranking
%A QianQian Xu
%A Jiechao Xiong
%A Xiaochun Cao
%A Yuan Yao
%B Proceedings of The 33rd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2016
%E Maria Florina Balcan
%E Kilian Q. Weinberger
%F pmlr-v48-xua16
%I PMLR
%P 1282--1291
%U https://proceedings.mlr.press/v48/xua16.html
%V 48
%X With the rapid growth of crowdsourcing platforms it has become easy and relatively inexpensive to collect a dataset labeled by multiple annotators in a short time. However due to the lack of control over the quality of the annotators, some abnormal annotators may be affected by position bias which can potentially degrade the quality of the final consensus labels. In this paper we introduce a statistical framework to model and detect annotator’s position bias in order to control the false discovery rate (FDR) without a prior knowledge on the amount of biased annotators–the expected fraction of false discoveries among all discoveries being not too high, in order to assure that most of the discoveries are indeed true and replicable. The key technical development relies on some new knockoff filters adapted to our problem and new algorithms based on the Inverse Scale Space dynamics whose discretization is potentially suitable for large scale crowdsourcing data analysis. Our studies are supported by experiments with both simulated examples and real-world data. The proposed framework provides us a useful tool for quantitatively studying annotator’s abnormal behavior in crowdsourcing.
RIS
TY - CPAPER
TI - False Discovery Rate Control and Statistical Quality Assessment of Annotators in Crowdsourced Ranking
AU - QianQian Xu
AU - Jiechao Xiong
AU - Xiaochun Cao
AU - Yuan Yao
BT - Proceedings of The 33rd International Conference on Machine Learning
DA - 2016/06/11
ED - Maria Florina Balcan
ED - Kilian Q. Weinberger
ID - pmlr-v48-xua16
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 48
SP - 1282
EP - 1291
L1 - http://proceedings.mlr.press/v48/xua16.pdf
UR - https://proceedings.mlr.press/v48/xua16.html
AB - With the rapid growth of crowdsourcing platforms it has become easy and relatively inexpensive to collect a dataset labeled by multiple annotators in a short time. However due to the lack of control over the quality of the annotators, some abnormal annotators may be affected by position bias which can potentially degrade the quality of the final consensus labels. In this paper we introduce a statistical framework to model and detect annotator’s position bias in order to control the false discovery rate (FDR) without a prior knowledge on the amount of biased annotators–the expected fraction of false discoveries among all discoveries being not too high, in order to assure that most of the discoveries are indeed true and replicable. The key technical development relies on some new knockoff filters adapted to our problem and new algorithms based on the Inverse Scale Space dynamics whose discretization is potentially suitable for large scale crowdsourcing data analysis. Our studies are supported by experiments with both simulated examples and real-world data. The proposed framework provides us a useful tool for quantitatively studying annotator’s abnormal behavior in crowdsourcing.
ER -
APA
Xu, Q., Xiong, J., Cao, X. & Yao, Y. (2016). False Discovery Rate Control and Statistical Quality Assessment of Annotators in Crowdsourced Ranking. Proceedings of The 33rd International Conference on Machine Learning, in Proceedings of Machine Learning Research 48:1282-1291. Available from https://proceedings.mlr.press/v48/xua16.html.