Flexible Tails for Normalizing Flows

Tennessee Hickling, Dennis Prangle
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:23155-23178, 2025.

Abstract

Normalizing flows are a flexible class of probability distributions, expressed as transformations of a simple base distribution. A limitation of standard normalizing flows is representing distributions with heavy tails, which arise in applications to both density estimation and variational inference. A popular current solution to this problem is to use a heavy tailed base distribution. We argue this can lead to poor performance due to the difficulty of optimising neural networks, such as normalizing flows, under heavy tailed input. We propose an alternative, "tail transform flow" (TTF), which uses a Gaussian base distribution and a final transformation layer which can produce heavy tails. Experimental results show this approach outperforms current methods, especially when the target distribution has large dimension or tail weight.
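The core idea in the abstract, keeping a Gaussian base distribution and adding a final elementwise layer that inflates the tails, can be illustrated with a short sketch. The code below composes the standard normal CDF with a Student-t quantile function so each dimension gains a controllable tail weight; this is one simple way to realise such a layer, not necessarily the exact TTF transform from the paper, and the function names, the Student-t choice, and the per-dimension degrees-of-freedom parameters are assumptions made for illustration.

```python
import numpy as np
from scipy.stats import norm, t

def heavy_tail_layer(z, df):
    """Elementwise map from Gaussian-tailed inputs z to heavy-tailed outputs.

    Composes the standard normal CDF with a Student-t quantile function, so
    dimension i of the output has Student-t tails with df[i] degrees of
    freedom. Illustrative stand-in for a trainable final layer, not the
    paper's exact transform.
    """
    u = norm.cdf(z)        # map to (0, 1) with Gaussian-tailed inputs
    return t.ppf(u, df)    # map to heavy (Student-t) tails

def heavy_tail_layer_logdet(z, df):
    """Log absolute Jacobian determinant of the elementwise map above."""
    x = heavy_tail_layer(z, df)
    # For x = F_t^{-1}(Phi(z)), dx/dz = phi(z) / f_t(x), applied elementwise.
    return np.sum(norm.logpdf(z) - t.logpdf(x, df), axis=-1)

# Example: turn draws from a 2D Gaussian base into heavy-tailed samples.
rng = np.random.default_rng(0)
z = rng.standard_normal((5, 2))   # e.g. output of earlier flow layers
df = np.array([1.5, 30.0])        # small df -> much heavier tail in dimension 0
print(heavy_tail_layer(z, df))
print(heavy_tail_layer_logdet(z, df))
```

In a full flow, a layer of this kind would sit last, with its tail parameters learned jointly with the rest of the flow, so that the earlier layers (and the optimiser) only ever see Gaussian-tailed quantities.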

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-hickling25a,
  title     = {Flexible Tails for Normalizing Flows},
  author    = {Hickling, Tennessee and Prangle, Dennis},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {23155--23178},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/hickling25a/hickling25a.pdf},
  url       = {https://proceedings.mlr.press/v267/hickling25a.html},
  abstract  = {Normalizing flows are a flexible class of probability distributions, expressed as transformations of a simple base distribution. A limitation of standard normalizing flows is representing distributions with heavy tails, which arise in applications to both density estimation and variational inference. A popular current solution to this problem is to use a heavy tailed base distribution. We argue this can lead to poor performance due to the difficulty of optimising neural networks, such as normalizing flows, under heavy tailed input. We propose an alternative, "tail transform flow" (TTF), which uses a Gaussian base distribution and a final transformation layer which can produce heavy tails. Experimental results show this approach outperforms current methods, especially when the target distribution has large dimension or tail weight.}
}
Endnote
%0 Conference Paper
%T Flexible Tails for Normalizing Flows
%A Tennessee Hickling
%A Dennis Prangle
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-hickling25a
%I PMLR
%P 23155--23178
%U https://proceedings.mlr.press/v267/hickling25a.html
%V 267
%X Normalizing flows are a flexible class of probability distributions, expressed as transformations of a simple base distribution. A limitation of standard normalizing flows is representing distributions with heavy tails, which arise in applications to both density estimation and variational inference. A popular current solution to this problem is to use a heavy tailed base distribution. We argue this can lead to poor performance due to the difficulty of optimising neural networks, such as normalizing flows, under heavy tailed input. We propose an alternative, "tail transform flow" (TTF), which uses a Gaussian base distribution and a final transformation layer which can produce heavy tails. Experimental results show this approach outperforms current methods, especially when the target distribution has large dimension or tail weight.
APA
Hickling, T. & Prangle, D. (2025). Flexible Tails for Normalizing Flows. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:23155-23178. Available from https://proceedings.mlr.press/v267/hickling25a.html.
