Ground States of Quantum Many Body Lattice Models via Reinforcement Learning

Willem Gispen, Austen Lamacraft
Proceedings of the 2nd Mathematical and Scientific Machine Learning Conference, PMLR 145:369-385, 2022.

Abstract

We introduce reinforcement learning (RL) formulations of the problem of finding the ground state of a many-body quantum mechanical model defined on a lattice. We show that stoquastic Hamiltonians – those without a sign problem – have a natural decomposition into stochastic dynamics and a potential representing a reward function. The mapping to RL is developed for both continuous and discrete time, based on a generalized Feynman–Kac formula in the former case and a stochastic representation of the Schrödinger equation in the latter. We discuss the application of this mapping to the neural representation of quantum states, spelling out the advantages over approaches based on direct representation of the wavefunction of the system.

Cite this Paper


BibTeX
@InProceedings{pmlr-v145-gispen22a,
  title = {Ground States of Quantum Many Body Lattice Models via Reinforcement Learning},
  author = {Gispen, Willem and Lamacraft, Austen},
  booktitle = {Proceedings of the 2nd Mathematical and Scientific Machine Learning Conference},
  pages = {369--385},
  year = {2022},
  editor = {Bruna, Joan and Hesthaven, Jan and Zdeborova, Lenka},
  volume = {145},
  series = {Proceedings of Machine Learning Research},
  month = {16--19 Aug},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v145/gispen22a/gispen22a.pdf},
  url = {https://proceedings.mlr.press/v145/gispen22a.html},
  abstract = {We introduce reinforcement learning (RL) formulations of the problem of finding the ground state of a many-body quantum mechanical model defined on a lattice. We show that stoquastic Hamiltonians – those without a sign problem – have a natural decomposition into stochastic dynamics and a potential representing a reward function. The mapping to RL is developed for both continuous and discrete time, based on a generalized Feynman–Kac formula in the former case and a stochastic representation of the Schrödinger equation in the latter. We discuss the application of this mapping to the neural representation of quantum states, spelling out the advantages over approaches based on direct representation of the wavefunction of the system.}
}
Endnote
%0 Conference Paper
%T Ground States of Quantum Many Body Lattice Models via Reinforcement Learning
%A Willem Gispen
%A Austen Lamacraft
%B Proceedings of the 2nd Mathematical and Scientific Machine Learning Conference
%C Proceedings of Machine Learning Research
%D 2022
%E Joan Bruna
%E Jan Hesthaven
%E Lenka Zdeborova
%F pmlr-v145-gispen22a
%I PMLR
%P 369--385
%U https://proceedings.mlr.press/v145/gispen22a.html
%V 145
%X We introduce reinforcement learning (RL) formulations of the problem of finding the ground state of a many-body quantum mechanical model defined on a lattice. We show that stoquastic Hamiltonians – those without a sign problem – have a natural decomposition into stochastic dynamics and a potential representing a reward function. The mapping to RL is developed for both continuous and discrete time, based on a generalized Feynman–Kac formula in the former case and a stochastic representation of the Schrödinger equation in the latter. We discuss the application of this mapping to the neural representation of quantum states, spelling out the advantages over approaches based on direct representation of the wavefunction of the system.
APA
Gispen, W. & Lamacraft, A. (2022). Ground States of Quantum Many Body Lattice Models via Reinforcement Learning. Proceedings of the 2nd Mathematical and Scientific Machine Learning Conference, in Proceedings of Machine Learning Research 145:369-385. Available from https://proceedings.mlr.press/v145/gispen22a.html.