Quantum Ground States from Reinforcement Learning

Ariel Barr, Willem Gispen, Austen Lamacraft
Proceedings of The First Mathematical and Scientific Machine Learning Conference, PMLR 107:635-653, 2020.

Abstract

Finding the ground state of a quantum mechanical system can be formulated as an optimal control problem. In this formulation, the drift of the optimally controlled process is chosen to match the distribution of paths in the Feynman–Kac (FK) representation of the solution of the imaginary time Schrödinger equation. This provides a variational principle that can be used for reinforcement learning of a neural representation of the drift. Our approach is a drop-in replacement for path integral Monte Carlo, learning an optimal importance sampler for the FK trajectories. We demonstrate the applicability of our approach to several problems of one-, two-, and many-particle physics.
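The optimal control formulation described above can be illustrated with a toy example (not the paper's actual algorithm, which uses a neural-network drift trained by reinforcement learning). For the 1D harmonic oscillator with V(x) = x²/2 (ħ = m = ω = 1), the ground-state energy is the minimum over drifts u of the average control cost E[V(X) + ½u(X)²] along the controlled diffusion dX = u(X)dt + dW. The sketch below restricts to linear drifts u(x) = -a·x and replaces the learned update with a grid search; the names and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def control_cost(a, n_paths=2000, n_steps=4000, dt=0.01):
    """Estimate J(a) = E[V(X) + 0.5*u(X)^2] along the controlled
    diffusion dX = u(X) dt + dW, with linear drift u(x) = -a*x and
    harmonic potential V(x) = 0.5*x^2."""
    x = np.zeros(n_paths)
    burn = n_steps // 2
    total, count = 0.0, 0
    for step in range(n_steps):
        # Euler-Maruyama step of the controlled SDE
        x += -a * x * dt + np.sqrt(dt) * rng.standard_normal(n_paths)
        if step >= burn:  # average only after equilibration
            total += np.mean(0.5 * x**2 + 0.5 * (a * x) ** 2)
            count += 1
    return total / count

# Grid search stands in for the paper's learned (neural) drift.
grid = np.arange(0.5, 2.01, 0.25)
costs = [control_cost(a) for a in grid]
best = grid[int(np.argmin(costs))]
print(best, min(costs))  # expect a near 1 (u*(x) = -x) and cost near E0 = 0.5
```

The minimizing drift u*(x) = -x is exactly ∇ log φ₀ for the Gaussian ground state, and the minimal cost recovers E₀ = 1/2, which is what makes the control cost a usable variational principle.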

Cite this Paper


BibTeX
@InProceedings{pmlr-v107-barr20a,
  title     = {Quantum Ground States from Reinforcement Learning},
  author    = {Barr, Ariel and Gispen, Willem and Lamacraft, Austen},
  booktitle = {Proceedings of The First Mathematical and Scientific Machine Learning Conference},
  pages     = {635--653},
  year      = {2020},
  editor    = {Lu, Jianfeng and Ward, Rachel},
  volume    = {107},
  series    = {Proceedings of Machine Learning Research},
  address   = {Princeton University, Princeton, NJ, USA},
  month     = {20--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v107/barr20a/barr20a.pdf},
  url       = {http://proceedings.mlr.press/v107/barr20a.html},
  abstract  = {Finding the ground state of a quantum mechanical system can be formulated as an optimal control problem. In this formulation, the drift of the optimally controlled process is chosen to match the distribution of paths in the Feynman–Kac (FK) representation of the solution of the imaginary time Schrödinger equation. This provides a variational principle that can be used for reinforcement learning of a neural representation of the drift. Our approach is a drop-in replacement for path integral Monte Carlo, learning an optimal importance sampler for the FK trajectories. We demonstrate the applicability of our approach to several problems of one-, two-, and many-particle physics.}
}
Endnote
%0 Conference Paper
%T Quantum Ground States from Reinforcement Learning
%A Ariel Barr
%A Willem Gispen
%A Austen Lamacraft
%B Proceedings of The First Mathematical and Scientific Machine Learning Conference
%C Proceedings of Machine Learning Research
%D 2020
%E Jianfeng Lu
%E Rachel Ward
%F pmlr-v107-barr20a
%I PMLR
%J Proceedings of Machine Learning Research
%P 635--653
%U http://proceedings.mlr.press
%V 107
%W PMLR
%X Finding the ground state of a quantum mechanical system can be formulated as an optimal control problem. In this formulation, the drift of the optimally controlled process is chosen to match the distribution of paths in the Feynman–Kac (FK) representation of the solution of the imaginary time Schrödinger equation. This provides a variational principle that can be used for reinforcement learning of a neural representation of the drift. Our approach is a drop-in replacement for path integral Monte Carlo, learning an optimal importance sampler for the FK trajectories. We demonstrate the applicability of our approach to several problems of one-, two-, and many-particle physics.
APA
Barr, A., Gispen, W., & Lamacraft, A. (2020). Quantum Ground States from Reinforcement Learning. Proceedings of The First Mathematical and Scientific Machine Learning Conference, in PMLR 107:635-653.