Fixed-kinetic Neural Hamiltonian Flows for enhanced interpretability and reduced complexity

Vincent Souveton, Arnaud Guillin, Jens Jasche, Guilhem Lavaux, Manon Michel
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:3178-3186, 2024.

Abstract

Normalizing Flows (NF) are generative models which transform a simple prior distribution into the desired target. However, they require the design of an invertible mapping whose Jacobian determinant has to be computable. Recently introduced, Neural Hamiltonian Flows (NHF) are Hamiltonian dynamics-based flows, which are continuous, volume-preserving and invertible and thus make natural candidates for robust NF architectures. In particular, their similarity to classical mechanics could lead to easier interpretability of the learned mapping. In this paper, we show that the current NHF architecture may still pose a challenge to interpretability. Inspired by physics, we introduce a fixed-kinetic energy version of the model. This approach improves interpretability and robustness while requiring fewer parameters than the original model. We illustrate this on a 2D Gaussian mixture and on the MNIST and Fashion-MNIST datasets. Finally, we show how to adapt NHF to the context of Bayesian inference and illustrate the method on an example from cosmology.
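The key property the abstract relies on can be seen in a few lines: a leapfrog step of a separable Hamiltonian is exactly invertible and volume-preserving, with no Jacobian determinant to compute. The sketch below is purely illustrative (it is not the authors' code; the quadratic potential and step size are arbitrary choices), using the fixed Gaussian kinetic energy K(p) = ||p||²/2, so that ∇K(p) = p.

```python
import numpy as np

def leapfrog_step(q, p, grad_V, grad_K, eps):
    """One symplectic leapfrog step for H(q, p) = K(p) + V(q).

    The map (q, p) -> (q', p') is volume-preserving and is inverted
    exactly by calling the same function with step -eps.
    """
    p = p - 0.5 * eps * grad_V(q)   # half kick from the potential energy
    q = q + eps * grad_K(p)         # full drift from the kinetic energy
    p = p - 0.5 * eps * grad_V(q)   # second half kick
    return q, p

# Illustrative quadratic potential V(q) = ||q||^2 / 2 and the fixed
# (Gaussian) kinetic energy K(p) = ||p||^2 / 2, so both gradients are
# the identity.
grad_V = lambda q: q
grad_K = lambda p: p

q0, p0 = np.array([1.0, -0.5]), np.array([0.3, 0.2])
q1, p1 = leapfrog_step(q0, p0, grad_V, grad_K, eps=0.1)
q2, p2 = leapfrog_step(q1, p1, grad_V, grad_K, eps=-0.1)  # exact inverse
print(np.allclose(q2, q0), np.allclose(p2, p0))  # True True
```

In NHF the potential (and, in the original model, the kinetic energy) is parameterized by neural networks; fixing the kinetic term to this Gaussian form is what the paper's fixed-kinetic variant does to reduce parameters and aid interpretability.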

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-souveton24a,
  title     = {Fixed-kinetic Neural {H}amiltonian Flows for enhanced interpretability and reduced complexity},
  author    = {Souveton, Vincent and Guillin, Arnaud and Jasche, Jens and Lavaux, Guilhem and Michel, Manon},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {3178--3186},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/souveton24a/souveton24a.pdf},
  url       = {https://proceedings.mlr.press/v238/souveton24a.html},
  abstract  = {Normalizing Flows (NF) are Generative models which transform a simple prior distribution into the desired target. They however require the design of an invertible mapping whose Jacobian determinant has to be computable. Recently introduced, Neural Hamiltonian Flows (NHF) are Hamiltonian dynamics-based flows, which are continuous, volume-preserving and invertible and thus make for natural candidates for robust NF architectures. In particular, their similarity to classical Mechanics could lead to easier interpretability of the learned mapping. In this paper, we show that the current NHF architecture may still pose a challenge to interpretability. Inspired by Physics, we introduce a fixed-kinetic energy version of the model. This approach improves interpretability and robustness while requiring fewer parameters than the original model. We illustrate that on a 2D Gaussian mixture and on the MNIST and Fashion-MNIST datasets. Finally, we show how to adapt NHF to the context of Bayesian inference and illustrate the method on an example from cosmology.}
}
Endnote
%0 Conference Paper
%T Fixed-kinetic Neural Hamiltonian Flows for enhanced interpretability and reduced complexity
%A Vincent Souveton
%A Arnaud Guillin
%A Jens Jasche
%A Guilhem Lavaux
%A Manon Michel
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-souveton24a
%I PMLR
%P 3178--3186
%U https://proceedings.mlr.press/v238/souveton24a.html
%V 238
%X Normalizing Flows (NF) are Generative models which transform a simple prior distribution into the desired target. They however require the design of an invertible mapping whose Jacobian determinant has to be computable. Recently introduced, Neural Hamiltonian Flows (NHF) are Hamiltonian dynamics-based flows, which are continuous, volume-preserving and invertible and thus make for natural candidates for robust NF architectures. In particular, their similarity to classical Mechanics could lead to easier interpretability of the learned mapping. In this paper, we show that the current NHF architecture may still pose a challenge to interpretability. Inspired by Physics, we introduce a fixed-kinetic energy version of the model. This approach improves interpretability and robustness while requiring fewer parameters than the original model. We illustrate that on a 2D Gaussian mixture and on the MNIST and Fashion-MNIST datasets. Finally, we show how to adapt NHF to the context of Bayesian inference and illustrate the method on an example from cosmology.
APA
Souveton, V., Guillin, A., Jasche, J., Lavaux, G. & Michel, M. (2024). Fixed-kinetic Neural Hamiltonian Flows for enhanced interpretability and reduced complexity. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:3178-3186. Available from https://proceedings.mlr.press/v238/souveton24a.html.