Robust variance-regularized risk minimization with concomitant scaling

Matthew J Holland
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:1144-1152, 2024.

Abstract

Under losses that are potentially heavy-tailed, we consider the task of minimizing the sum of the loss mean and standard deviation, without attempting to accurately estimate the variance. By modifying a technique for variance-free robust mean estimation to fit our problem setting, we derive a simple learning procedure that can easily be combined with standard gradient-based solvers and used in traditional machine learning workflows. Empirically, we verify that our proposed approach, despite its simplicity, performs as well as or better than even the best-performing candidates derived from alternative criteria such as CVaR or DRO risks on a variety of datasets.
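For orientation, the criterion in question is, in empirical form, the sample mean of the losses plus a weighted sample standard deviation. The sketch below minimizes this naive plug-in objective for linear regression with plain gradient descent; it is purely illustrative and is not the paper's procedure, which avoids direct variance estimation via a robust concomitant-scale formulation. All concrete choices here (squared loss, the weight lam, the step size) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data with heavy-tailed noise.
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + rng.standard_t(df=2.5, size=n)

lam = 1.0    # hypothetical weight on the standard-deviation term
step = 1e-2  # hypothetical gradient step size
eps = 1e-12  # guards against division by zero when sd is tiny

w = np.zeros(d)
for _ in range(500):
    resid = X @ w - y
    losses = 0.5 * resid ** 2      # per-point squared loss
    grads = resid[:, None] * X     # d(losses_i)/dw, shape (n, d)
    mean = losses.mean()
    sd = np.sqrt(losses.var() + eps)
    # Objective: mean(losses) + lam * sd(losses).
    # Chain rule: d(sd)/dw = mean((losses - mean) * grads) / sd.
    grad_sd = ((losses - mean)[:, None] * grads).mean(axis=0) / sd
    w -= step * (grads.mean(axis=0) + lam * grad_sd)

final_losses = 0.5 * (X @ w - y) ** 2
print("objective:", final_losses.mean() + lam * final_losses.std())
```

Note that the plug-in standard deviation used above is itself sensitive to heavy-tailed losses; the paper's point is precisely to sidestep this kind of direct variance estimation.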

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-j-holland24a,
  title = {Robust variance-regularized risk minimization with concomitant scaling},
  author = {J Holland, Matthew},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages = {1144--1152},
  year = {2024},
  editor = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume = {238},
  series = {Proceedings of Machine Learning Research},
  month = {02--04 May},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v238/j-holland24a/j-holland24a.pdf},
  url = {https://proceedings.mlr.press/v238/j-holland24a.html},
  abstract = {Under losses which are potentially heavy-tailed, we consider the task of minimizing sums of the loss mean and standard deviation, without trying to accurately estimate the variance. By modifying a technique for variance-free robust mean estimation to fit our problem setting, we derive a simple learning procedure which can be easily combined with standard gradient-based solvers to be used in traditional machine learning workflows. Empirically, we verify that our proposed approach, despite its simplicity, performs as well or better than even the best-performing candidates derived from alternative criteria such as CVaR or DRO risks on a variety of datasets.}
}
Endnote
%0 Conference Paper
%T Robust variance-regularized risk minimization with concomitant scaling
%A Matthew J Holland
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-j-holland24a
%I PMLR
%P 1144--1152
%U https://proceedings.mlr.press/v238/j-holland24a.html
%V 238
%X Under losses which are potentially heavy-tailed, we consider the task of minimizing sums of the loss mean and standard deviation, without trying to accurately estimate the variance. By modifying a technique for variance-free robust mean estimation to fit our problem setting, we derive a simple learning procedure which can be easily combined with standard gradient-based solvers to be used in traditional machine learning workflows. Empirically, we verify that our proposed approach, despite its simplicity, performs as well or better than even the best-performing candidates derived from alternative criteria such as CVaR or DRO risks on a variety of datasets.
APA
J Holland, M. (2024). Robust variance-regularized risk minimization with concomitant scaling. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:1144-1152. Available from https://proceedings.mlr.press/v238/j-holland24a.html.