Mitigating Underfitting in Learning to Defer with Consistent Losses
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:4816-4824, 2024.
Abstract
Learning to defer (L2D) allows a classifier to defer its prediction to an expert for safer decisions, balancing the system's accuracy against the extra cost incurred by consulting the expert. Various loss functions have been proposed for L2D, but they have been shown to cause trained classifiers to underfit when a consulting cost is present, resulting in degraded performance. In this paper, we propose a novel loss formulation that mitigates this underfitting issue while retaining statistical consistency. We first show that our formulation avoids a common characteristic shared by most existing losses, which has been identified as a cause of underfitting, and then show that it can be combined with representative L2D losses to enhance their performance while yielding consistent losses. We further study the regret transfer bounds of the proposed losses and experimentally validate their improvements over existing methods.
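For context, a minimal sketch of the cost-sensitive L2D target loss commonly used in prior work, relative to which consistency is typically stated; the notation here is illustrative and not taken from this paper. With classifier h, rejector r (where r(x) = 1 means "defer"), expert prediction m, label y, and consulting cost c >= 0:

\[
L_{0\text{-}1}(h, r; x, y, m) \;=\; \mathbb{1}[h(x) \neq y]\,\mathbb{1}[r(x) = 0] \;+\; \bigl(\mathbb{1}[m \neq y] + c\bigr)\,\mathbb{1}[r(x) = 1].
\]

A surrogate loss is consistent if minimizing it recovers a minimizer of this 0-1 objective; the underfitting discussed in the abstract concerns how existing surrogates train h when c > 0.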