Neural Langevin Dynamics: Towards Interpretable Neural Stochastic Differential Equations

Simon Martinus Koop, Mark A Peletier, Jacobus Willem Portegies, Vlado Menkovski
Proceedings of the 5th Northern Lights Deep Learning Conference ({NLDL}), PMLR 233:130-137, 2024.

Abstract

Neural Stochastic Differential Equations (NSDEs) have been trained both as Variational Autoencoders and as GANs. However, the resulting Stochastic Differential Equations can be hard to interpret or analyse due to the generic nature of the drift and diffusion fields. By restricting our NSDE to the form of Langevin dynamics and training it as a VAE, we obtain NSDEs that lend themselves to more elaborate analysis and to a wider range of visualisation techniques than a generic NSDE. More specifically, we obtain an energy landscape whose minima are in one-to-one correspondence with the latent states underlying the data. This not only allows us to detect the states underlying the data dynamics in an unsupervised manner, but also to infer the distribution of time spent in each state according to the learned SDE. In general, restricting an NSDE to Langevin dynamics enables the use of a large set of tools from computational molecular dynamics for the analysis of the obtained results.
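To make the restricted dynamics concrete: the paper constrains the NSDE to overdamped Langevin dynamics, dX_t = -∇U(X_t) dt + σ dW_t, where U is an energy landscape whose minima correspond to latent states. The minimal sketch below is not the authors' implementation; it uses a hand-written double-well energy as a stand-in for a learned neural energy, and an Euler-Maruyama discretisation to simulate a trajectory and estimate the fraction of time spent in each well.

```python
import numpy as np

def grad_U(x):
    # Gradient of the double-well energy U(x) = (x^2 - 1)^2.
    # In the paper's setting, U would be a learned neural energy;
    # its minima (here x = -1 and x = +1) play the role of latent states.
    return 4.0 * x * (x**2 - 1.0)

def langevin_trajectory(x0, n_steps=5000, dt=1e-3, sigma=1.0, rng=None):
    """Euler-Maruyama discretisation of overdamped Langevin dynamics
    dX_t = -grad U(X_t) dt + sigma dW_t."""
    rng = np.random.default_rng(0) if rng is None else rng
    xs = np.empty(n_steps + 1)
    xs[0] = x0
    for i in range(n_steps):
        noise = rng.normal(scale=np.sqrt(dt))  # Brownian increment
        xs[i + 1] = xs[i] - grad_U(xs[i]) * dt + sigma * noise
    return xs

traj = langevin_trajectory(x0=0.0)
# Occupancy statistics: fraction of time spent near each minimum,
# a crude version of the state-occupancy distributions the paper infers.
frac_right = float(np.mean(traj > 0.0))
frac_left = 1.0 - frac_right
```

With a symmetric double well the two occupancy fractions are roughly equal over long runs; for a learned, asymmetric landscape these statistics, and richer tools from computational molecular dynamics, characterise the inferred states.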

Cite this Paper


BibTeX
@InProceedings{pmlr-v233-koop24a,
  title     = {Neural Langevin Dynamics: Towards Interpretable Neural Stochastic Differential Equations},
  author    = {Koop, Simon Martinus and Peletier, Mark A and Portegies, Jacobus Willem and Menkovski, Vlado},
  booktitle = {Proceedings of the 5th Northern Lights Deep Learning Conference ({NLDL})},
  pages     = {130--137},
  year      = {2024},
  editor    = {Lutchyn, Tetiana and Ramírez Rivera, Adín and Ricaud, Benjamin},
  volume    = {233},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--11 Jan},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v233/koop24a/koop24a.pdf},
  url       = {https://proceedings.mlr.press/v233/koop24a.html},
  abstract  = {Neural Stochastic Differential Equations (NSDE) have been trained as both Variational Autoencoders, and as GANs. However, the resulting Stochastic Differential Equations can be hard to interpret or analyse due to the generic nature of the drift and diffusion fields. By restricting our NSDE to be of the form of Langevin dynamics and training it as a VAE, we obtain NSDEs that lend themselves to more elaborate analysis and to a wider range of visualisation techniques than a generic NSDE. More specifically, we obtain an energy landscape, the minima of which are in one-to-one correspondence with latent states underlying the used data. This not only allows us to detect states underlying the data dynamics in an unsupervised manner but also to infer the distribution of time spent in each state according to the learned SDE. In general, restricting an NSDE to Langevin dynamics enables the use of a large set of tools from computational molecular dynamics for the analysis of the obtained results.}
}
Endnote
%0 Conference Paper
%T Neural Langevin Dynamics: Towards Interpretable Neural Stochastic Differential Equations
%A Simon Martinus Koop
%A Mark A Peletier
%A Jacobus Willem Portegies
%A Vlado Menkovski
%B Proceedings of the 5th Northern Lights Deep Learning Conference ({NLDL})
%C Proceedings of Machine Learning Research
%D 2024
%E Tetiana Lutchyn
%E Adín Ramírez Rivera
%E Benjamin Ricaud
%F pmlr-v233-koop24a
%I PMLR
%P 130--137
%U https://proceedings.mlr.press/v233/koop24a.html
%V 233
%X Neural Stochastic Differential Equations (NSDE) have been trained as both Variational Autoencoders, and as GANs. However, the resulting Stochastic Differential Equations can be hard to interpret or analyse due to the generic nature of the drift and diffusion fields. By restricting our NSDE to be of the form of Langevin dynamics and training it as a VAE, we obtain NSDEs that lend themselves to more elaborate analysis and to a wider range of visualisation techniques than a generic NSDE. More specifically, we obtain an energy landscape, the minima of which are in one-to-one correspondence with latent states underlying the used data. This not only allows us to detect states underlying the data dynamics in an unsupervised manner but also to infer the distribution of time spent in each state according to the learned SDE. In general, restricting an NSDE to Langevin dynamics enables the use of a large set of tools from computational molecular dynamics for the analysis of the obtained results.
APA
Koop, S.M., Peletier, M.A., Portegies, J.W. &amp; Menkovski, V. (2024). Neural Langevin Dynamics: Towards Interpretable Neural Stochastic Differential Equations. Proceedings of the 5th Northern Lights Deep Learning Conference (NLDL), in Proceedings of Machine Learning Research 233:130-137. Available from https://proceedings.mlr.press/v233/koop24a.html.