Mitigating Underfitting in Learning to Defer with Consistent Losses

Shuqi Liu, Yuzhou Cao, Qiaozhen Zhang, Lei Feng, Bo An
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:4816-4824, 2024.

Abstract

Learning to defer (L2D) allows a classifier to defer its prediction to an expert for safer decisions, balancing the system's accuracy against the extra cost incurred by consulting the expert. Various loss functions have been proposed for L2D, but they have been shown to cause underfitting of the trained classifier when extra consulting costs exist, resulting in degraded performance. In this paper, we propose a novel loss formulation that mitigates the underfitting issue while retaining statistical consistency. We first show that our formulation avoids a common characteristic shared by most existing losses, which has been identified as a cause of underfitting, and then show that it can be combined with representative L2D losses to enhance their performance while yielding consistent losses. We further study the regret transfer bounds of the proposed losses and experimentally validate their improvements over existing methods.
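As background for the "representative losses for L2D" the abstract refers to, a widely used consistent surrogate (in the style of Mozannar and Sontag's softmax cross-entropy surrogate) trains a model with K class scores plus one extra "defer" score. The following is a minimal sketch, not the paper's proposed formulation; the function name and tensor shapes are illustrative.

```python
import numpy as np

def l2d_softmax_surrogate(logits, y, m):
    """Softmax cross-entropy surrogate for learning to defer.

    logits: (B, K+1) array; the last column scores the "defer" option.
    y:      (B,) true labels in {0, ..., K-1}.
    m:      (B,) expert predictions in {0, ..., K-1}.
    """
    # Numerically stable log-softmax over the K+1 outputs.
    z = logits - logits.max(axis=1, keepdims=True)
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    rows = np.arange(len(y))
    # Classification term, plus a deferral term that fires only
    # on examples where the expert would have been correct.
    per_example = -log_p[rows, y] - (m == y) * log_p[:, -1]
    return per_example.mean()
```

The deferral term rewards assigning probability mass to the defer option exactly when the expert is right, which is what makes the surrogate consistent for the deferral decision; the underfitting issue studied in the paper arises from how such terms interact with nonzero consulting costs.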

Cite this Paper

BibTeX
@InProceedings{pmlr-v238-liu24h,
  title     = {Mitigating Underfitting in Learning to Defer with Consistent Losses},
  author    = {Liu, Shuqi and Cao, Yuzhou and Zhang, Qiaozhen and Feng, Lei and An, Bo},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {4816--4824},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/liu24h/liu24h.pdf},
  url       = {https://proceedings.mlr.press/v238/liu24h.html},
  abstract  = {Learning to defer (L2D) allows the classifier to defer its prediction to an expert for safer predictions, by balancing the system’s accuracy and extra costs incurred by consulting the expert. Various loss functions have been proposed for L2D, but they were shown to cause the underfitting of trained classifiers when extra consulting costs exist, resulting in degraded performance. In this paper, we propose a novel loss formulation that can mitigate the underfitting issue while remaining the statistical consistency. We first show that our formulation can avoid a common characteristic shared by most existing losses, which has been shown to be a cause of underfitting, and show that it can be combined with the representative losses for L2D to enhance their performance and yield consistent losses. We further study the regret transfer bounds of the proposed losses and experimentally validate its improvements over existing methods.}
}
Endnote
%0 Conference Paper
%T Mitigating Underfitting in Learning to Defer with Consistent Losses
%A Shuqi Liu
%A Yuzhou Cao
%A Qiaozhen Zhang
%A Lei Feng
%A Bo An
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-liu24h
%I PMLR
%P 4816--4824
%U https://proceedings.mlr.press/v238/liu24h.html
%V 238
%X Learning to defer (L2D) allows the classifier to defer its prediction to an expert for safer predictions, by balancing the system’s accuracy and extra costs incurred by consulting the expert. Various loss functions have been proposed for L2D, but they were shown to cause the underfitting of trained classifiers when extra consulting costs exist, resulting in degraded performance. In this paper, we propose a novel loss formulation that can mitigate the underfitting issue while remaining the statistical consistency. We first show that our formulation can avoid a common characteristic shared by most existing losses, which has been shown to be a cause of underfitting, and show that it can be combined with the representative losses for L2D to enhance their performance and yield consistent losses. We further study the regret transfer bounds of the proposed losses and experimentally validate its improvements over existing methods.
APA
Liu, S., Cao, Y., Zhang, Q., Feng, L. & An, B. (2024). Mitigating Underfitting in Learning to Defer with Consistent Losses. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:4816-4824. Available from https://proceedings.mlr.press/v238/liu24h.html.