Learning with risk-averse feedback under potentially heavy tails

Matthew Holland, El Mehdi Haress
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:892-900, 2021.

Abstract

We study learning algorithms that seek to minimize the conditional value-at-risk (CVaR), when all the learner knows is that the losses (and gradients) incurred may be heavy-tailed. We begin by studying a general-purpose estimator of CVaR for potentially heavy-tailed random variables, which is easy to implement in practice, and requires nothing more than finite variance and a distribution function that does not change too fast or slow around just the quantile of interest. With this estimator in hand, we then derive a new learning algorithm which robustly chooses among candidates produced by stochastic gradient-driven sub-processes, obtain excess CVaR bounds, and finally complement the theory with a regression application.
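The paper's estimator is built to be robust under heavy tails; for context, the *naive* empirical baseline it improves on simply averages the losses at or above the empirical quantile. A minimal sketch of that baseline (not the paper's estimator, and `empirical_cvar`/`alpha` are illustrative names):

```python
import numpy as np

def empirical_cvar(losses, alpha=0.95):
    """Naive empirical CVaR at level alpha: the average of all losses
    at or above the empirical alpha-quantile (the value-at-risk)."""
    losses = np.asarray(losses, dtype=float)
    var = np.quantile(losses, alpha)       # empirical value-at-risk
    tail = losses[losses >= var]           # losses in the upper tail
    return tail.mean()                     # conditional mean of the tail
```

Under heavy-tailed losses this plain tail average concentrates poorly, which is the deviation-bound gap the paper's robust estimator is designed to close.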

Cite this Paper


BibTeX
@InProceedings{pmlr-v130-holland21b,
  title     = {Learning with risk-averse feedback under potentially heavy tails},
  author    = {Holland, Matthew and Mehdi Haress, El},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages     = {892--900},
  year      = {2021},
  editor    = {Banerjee, Arindam and Fukumizu, Kenji},
  volume    = {130},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v130/holland21b/holland21b.pdf},
  url       = {http://proceedings.mlr.press/v130/holland21b.html},
  abstract  = {We study learning algorithms that seek to minimize the conditional value-at-risk (CVaR), when all the learner knows is that the losses (and gradients) incurred may be heavy-tailed. We begin by studying a general-purpose estimator of CVaR for potentially heavy-tailed random variables, which is easy to implement in practice, and requires nothing more than finite variance and a distribution function that does not change too fast or slow around just the quantile of interest. With this estimator in hand, we then derive a new learning algorithm which robustly chooses among candidates produced by stochastic gradient-driven sub-processes, obtain excess CVaR bounds, and finally complement the theory with a regression application.}
}
Endnote
%0 Conference Paper
%T Learning with risk-averse feedback under potentially heavy tails
%A Matthew Holland
%A El Mehdi Haress
%B Proceedings of The 24th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2021
%E Arindam Banerjee
%E Kenji Fukumizu
%F pmlr-v130-holland21b
%I PMLR
%P 892--900
%U http://proceedings.mlr.press/v130/holland21b.html
%V 130
%X We study learning algorithms that seek to minimize the conditional value-at-risk (CVaR), when all the learner knows is that the losses (and gradients) incurred may be heavy-tailed. We begin by studying a general-purpose estimator of CVaR for potentially heavy-tailed random variables, which is easy to implement in practice, and requires nothing more than finite variance and a distribution function that does not change too fast or slow around just the quantile of interest. With this estimator in hand, we then derive a new learning algorithm which robustly chooses among candidates produced by stochastic gradient-driven sub-processes, obtain excess CVaR bounds, and finally complement the theory with a regression application.
APA
Holland, M. & Mehdi Haress, E. (2021). Learning with risk-averse feedback under potentially heavy tails. Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 130:892-900. Available from http://proceedings.mlr.press/v130/holland21b.html.