Flexible risk design using bi-directional dispersion

Matthew J. Holland
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:1586-1623, 2023.

Abstract

Many novel notions of “risk” (e.g., CVaR, tilted risk, DRO risk) have been proposed and studied, but these risks are all at least as sensitive as the mean to loss tails on the upside, and tend to ignore deviations on the downside. We study a complementary new risk class that penalizes loss deviations in a bi-directional manner, while having more flexibility in terms of tail sensitivity than is offered by mean-variance. This class lets us derive high-probability learning guarantees without explicit gradient clipping, and empirical tests using both simulated and real data illustrate a high degree of control over key properties of the test loss distribution of gradient-based learners.

Cite this Paper
BibTeX
@InProceedings{pmlr-v206-holland23a,
  title     = {Flexible risk design using bi-directional dispersion},
  author    = {Holland, Matthew J.},
  booktitle = {Proceedings of The 26th International Conference on Artificial Intelligence and Statistics},
  pages     = {1586--1623},
  year      = {2023},
  editor    = {Ruiz, Francisco and Dy, Jennifer and van de Meent, Jan-Willem},
  volume    = {206},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--27 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v206/holland23a/holland23a.pdf},
  url       = {https://proceedings.mlr.press/v206/holland23a.html},
  abstract  = {Many novel notions of “risk” (e.g., CVaR, tilted risk, DRO risk) have been proposed and studied, but these risks are all at least as sensitive as the mean to loss tails on the upside, and tend to ignore deviations on the downside. We study a complementary new risk class that penalizes loss deviations in a bi-directional manner, while having more flexibility in terms of tail sensitivity than is offered by mean-variance. This class lets us derive high-probability learning guarantees without explicit gradient clipping, and empirical tests using both simulated and real data illustrate a high degree of control over key properties of the test loss distribution of gradient-based learners.}
}
Endnote
%0 Conference Paper
%T Flexible risk design using bi-directional dispersion
%A Matthew J. Holland
%B Proceedings of The 26th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2023
%E Francisco Ruiz
%E Jennifer Dy
%E Jan-Willem van de Meent
%F pmlr-v206-holland23a
%I PMLR
%P 1586--1623
%U https://proceedings.mlr.press/v206/holland23a.html
%V 206
%X Many novel notions of “risk” (e.g., CVaR, tilted risk, DRO risk) have been proposed and studied, but these risks are all at least as sensitive as the mean to loss tails on the upside, and tend to ignore deviations on the downside. We study a complementary new risk class that penalizes loss deviations in a bi-directional manner, while having more flexibility in terms of tail sensitivity than is offered by mean-variance. This class lets us derive high-probability learning guarantees without explicit gradient clipping, and empirical tests using both simulated and real data illustrate a high degree of control over key properties of the test loss distribution of gradient-based learners.
APA
Holland, M. J. (2023). Flexible risk design using bi-directional dispersion. Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 206:1586-1623. Available from https://proceedings.mlr.press/v206/holland23a.html.