Learning from Corrupted Binary Labels via Class-Probability Estimation
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:125-134, 2015.
Abstract
Many supervised learning problems involve learning from samples whose labels are corrupted in some way. For example, each sample may have some constant probability of being incorrectly labelled (learning with label noise), or one may have a pool of unlabelled samples in lieu of negative samples (learning from positive and unlabelled data). This paper uses class-probability estimation to study these and other corruption processes belonging to the mutually contaminated distributions framework (Scott et al., 2013), with three conclusions. First, one can optimise balanced error and AUC without knowledge of the corruption process parameters. Second, given estimates of the corruption parameters, one can minimise a range of classification risks. Third, one can estimate the corruption parameters using only corrupted data. Experiments confirm the efficacy of class-probability estimation in learning from corrupted labels.
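The first conclusion can be illustrated concretely: a class-probability estimator trained on the corrupted labels, thresholded at the corrupted base rate, approximately minimises balanced error without any knowledge of the noise rate. Below is a minimal sketch of this idea, assuming symmetric label noise as the corruption process and scikit-learn's logistic regression as the probability estimator; the synthetic dataset, noise rate, and variable names are illustrative, not the paper's own code.

```python
# Illustrative sketch: balanced-error minimisation from noisy labels by
# thresholding corrupted class-probability estimates at the corrupted base rate.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Clean data, then corrupt: each label is flipped with probability rho.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
rho = 0.3  # noise rate; the learner below never sees this value
flip = rng.random(len(y)) < rho
y_corr = np.where(flip, 1 - y, y)

# Class-probability estimation using only the corrupted labels.
clf = LogisticRegression(max_iter=1000).fit(X, y_corr)
eta_corr = clf.predict_proba(X)[:, 1]

# Threshold at the corrupted base rate: a corruption-parameter-free threshold
# for balanced error, in the spirit of the paper's first conclusion.
pi_corr = y_corr.mean()
y_pred = (eta_corr > pi_corr).astype(int)

# Balanced error measured against the clean labels (evaluation only).
tpr = (y_pred[y == 1] == 1).mean()
tnr = (y_pred[y == 0] == 0).mean()
print("Balanced error:", 1 - 0.5 * (tpr + tnr))
```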