Spectral risk-based learning using unbounded losses
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:1871-1886, 2022.
Abstract
In this work, we consider learning problems under a wide class of spectral risk (or "L-risk") functions, in which a Lipschitz-continuous spectral density flexibly assigns weight to extreme loss values. We obtain excess risk guarantees for a derivative-free learning procedure under unbounded, heavy-tailed loss distributions, and propose a computationally efficient implementation that empirically outperforms traditional risk minimizers in balancing spectral risk against misclassification error.
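To make the objective concrete, the spectral risk of a loss distribution is a quantile-weighted average: sort the losses and weight the i-th order statistic by a spectral density evaluated at the corresponding quantile level. The sketch below is a minimal illustration of this definition, not the paper's learning procedure; the CVaR-style density used as an example is a standard choice assumed here for illustration.

```python
import numpy as np

def empirical_spectral_risk(losses, sigma):
    """Empirical L-risk: average sorted losses weighted by a spectral density.

    sigma maps a quantile level u in (0, 1) to a non-negative weight; for a
    proper spectral risk it is non-decreasing and integrates to 1, so larger
    (more extreme) losses receive at least as much weight as smaller ones.
    """
    losses = np.sort(np.asarray(losses, dtype=float))
    n = losses.size
    # Evaluate the density at midpoint quantile levels (i - 0.5) / n.
    u = (np.arange(1, n + 1) - 0.5) / n
    return float(np.mean(sigma(u) * losses))

def cvar_density(beta):
    """Spectral density of CVaR at level beta: uniform weight 1/(1 - beta)
    on the worst (1 - beta) fraction of losses, zero elsewhere.
    (A standard example, chosen here for illustration.)"""
    return lambda u: np.where(u >= beta, 1.0 / (1.0 - beta), 0.0)

# Heavy-tailed (Pareto) losses, echoing the unbounded-loss setting.
rng = np.random.default_rng(0)
losses = rng.pareto(3.0, size=10_000)
mean_loss = losses.mean()
cvar_90 = empirical_spectral_risk(losses, cvar_density(0.9))
# cvar_90 averages only the worst 10% of losses, so it exceeds the mean.
```

With the constant density sigma(u) = 1, the spectral risk reduces to the ordinary expected loss, which is the sense in which the L-risk family generalizes standard risk minimization.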