Modeling extremes with $d$-max-decreasing neural networks

Ali Hasan, Khalil Elkhalil, Yuting Ng, João M. Pereira, Sina Farsiu, Jose Blanchet, Vahid Tarokh
Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence, PMLR 180:759-768, 2022.

Abstract

We propose a neural network architecture that enables non-parametric calibration and generation of multivariate extreme value distributions (MEVs). MEVs arise from Extreme Value Theory (EVT) as the necessary class of models when extrapolating a distributional fit over large spatial and temporal scales based on data observed in intermediate scales. In turn, EVT dictates that the $d$-max-decreasing property, a stronger form of convexity, is an essential shape constraint in the characterization of MEVs. To the best of our knowledge, our proposed architecture provides the first class of non-parametric estimators for MEVs that preserve these essential shape constraints. We show that the architecture approximates the dependence structure encoded by MEVs at a parametric rate. Moreover, we present a new method for sampling high-dimensional MEVs using a generative model. We demonstrate our methodology on a wide range of experimental settings, ranging from environmental sciences to financial mathematics, and verify that the structural properties of MEVs are retained compared to existing methods.
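As a minimal illustration of the max-stability that characterizes extreme value distributions (a textbook EVT fact, not the paper's architecture), the following sketch checks numerically that the normalized maximum of i.i.d. unit Fréchet variables is again unit Fréchet:

```python
import numpy as np

rng = np.random.default_rng(0)

def unit_frechet(size):
    # Inverse-CDF sampling: F(x) = exp(-1/x)  =>  x = -1 / log(u)
    return -1.0 / np.log(rng.uniform(size=size))

n, reps = 100, 50_000
# Maxima of n i.i.d. unit Fréchet draws, normalized by n.
# Max-stability: P(max/n <= x) = exp(-n/(n*x)) = exp(-1/x), i.e. unit Fréchet again.
maxima = unit_frechet((reps, n)).max(axis=1) / n

# Empirical CDF of the normalized maxima vs. the unit Fréchet CDF
for x in (0.5, 1.0, 2.0):
    print(f"x={x}: empirical {(maxima <= x).mean():.3f}  "
          f"theoretical {np.exp(-1.0 / x):.3f}")
```

The two columns agree up to Monte Carlo error; the multivariate analogue of this stability property is what the paper's $d$-max-decreasing constraint encodes.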

Cite this Paper


BibTeX
@InProceedings{pmlr-v180-hasan22a,
  title     = {Modeling extremes with $d$-max-decreasing neural networks},
  author    = {Hasan, Ali and Elkhalil, Khalil and Ng, Yuting and Pereira, Jo\~ao M. and Farsiu, Sina and Blanchet, Jose and Tarokh, Vahid},
  booktitle = {Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence},
  pages     = {759--768},
  year      = {2022},
  editor    = {Cussens, James and Zhang, Kun},
  volume    = {180},
  series    = {Proceedings of Machine Learning Research},
  month     = {01--05 Aug},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v180/hasan22a/hasan22a.pdf},
  url       = {https://proceedings.mlr.press/v180/hasan22a.html},
  abstract  = {We propose a neural network architecture that enables non-parametric calibration and generation of multivariate extreme value distributions (MEVs). MEVs arise from Extreme Value Theory (EVT) as the necessary class of models when extrapolating a distributional fit over large spatial and temporal scales based on data observed in intermediate scales. In turn, EVT dictates that $d$-max-decreasing, a stronger form of convexity, is an essential shape constraint in the characterization of MEVs. As far as we know, our proposed architecture provides the first class of non-parametric estimators for MEVs that preserve these essential shape constraints. We show that the architecture approximates the dependence structure encoded by MEVs at parametric rate. Moreover, we present a new method for sampling high-dimensional MEVs using a generative model. We demonstrate our methodology on a wide range of experimental settings, ranging from environmental sciences to financial mathematics and verify that the structural properties of MEVs are retained compared to existing methods.}
}
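Saved in a `.bib` file (here assumed to be named `refs.bib`), the entry above can be cited from a LaTeX document in the usual way:

```latex
% refs.bib contains the @InProceedings{pmlr-v180-hasan22a, ...} entry above
\documentclass{article}
\begin{document}
Shape-constrained modeling of extremes~\cite{pmlr-v180-hasan22a}.
\bibliographystyle{plain}
\bibliography{refs}
\end{document}
```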
Endnote
%0 Conference Paper
%T Modeling extremes with $d$-max-decreasing neural networks
%A Ali Hasan
%A Khalil Elkhalil
%A Yuting Ng
%A João M. Pereira
%A Sina Farsiu
%A Jose Blanchet
%A Vahid Tarokh
%B Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2022
%E James Cussens
%E Kun Zhang
%F pmlr-v180-hasan22a
%I PMLR
%P 759--768
%U https://proceedings.mlr.press/v180/hasan22a.html
%V 180
%X We propose a neural network architecture that enables non-parametric calibration and generation of multivariate extreme value distributions (MEVs). MEVs arise from Extreme Value Theory (EVT) as the necessary class of models when extrapolating a distributional fit over large spatial and temporal scales based on data observed in intermediate scales. In turn, EVT dictates that $d$-max-decreasing, a stronger form of convexity, is an essential shape constraint in the characterization of MEVs. As far as we know, our proposed architecture provides the first class of non-parametric estimators for MEVs that preserve these essential shape constraints. We show that the architecture approximates the dependence structure encoded by MEVs at parametric rate. Moreover, we present a new method for sampling high-dimensional MEVs using a generative model. We demonstrate our methodology on a wide range of experimental settings, ranging from environmental sciences to financial mathematics and verify that the structural properties of MEVs are retained compared to existing methods.
APA
Hasan, A., Elkhalil, K., Ng, Y., Pereira, J.M., Farsiu, S., Blanchet, J. & Tarokh, V. (2022). Modeling extremes with $d$-max-decreasing neural networks. Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 180:759-768. Available from https://proceedings.mlr.press/v180/hasan22a.html.