Principal eigenstate classical shadows

Daniel Grier, Hakop Pashayan, Luke Schaeffer
Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:2122-2165, 2024.

Abstract

Given many copies of an unknown quantum state $\rho$, we consider the task of learning a classical description of its principal eigenstate. Namely, assuming that $\rho$ has an eigenstate $|\phi\rangle$ with (unknown) eigenvalue $\lambda > 1/2$, the goal is to learn a (classical shadows style) classical description of $|\phi\rangle$ which can later be used to estimate expectation values $\langle\phi|O|\phi\rangle$ for any $O$ in some class of observables. We consider the sample-complexity setting in which generating a copy of $\rho$ is expensive, but joint measurements on many copies of the state are possible. We present a protocol for this task scaling with the principal eigenvalue $\lambda$ and show that it is optimal within a space of natural approaches, e.g., applying quantum state purification followed by a single-copy classical shadows scheme. Furthermore, when $\lambda$ is sufficiently close to $1$, the performance of our algorithm is optimal, matching the sample complexity for pure state classical shadows.

Cite this Paper


BibTeX
@InProceedings{pmlr-v247-grier24a,
  title     = {Principal eigenstate classical shadows},
  author    = {Grier, Daniel and Pashayan, Hakop and Schaeffer, Luke},
  booktitle = {Proceedings of Thirty Seventh Conference on Learning Theory},
  pages     = {2122--2165},
  year      = {2024},
  editor    = {Agrawal, Shipra and Roth, Aaron},
  volume    = {247},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Jun--03 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v247/grier24a/grier24a.pdf},
  url       = {https://proceedings.mlr.press/v247/grier24a.html},
  abstract  = {Given many copies of an unknown quantum state $\rho$, we consider the task of learning a classical description of its principal eigenstate. Namely, assuming that $\rho$ has an eigenstate $|\phi\rangle$ with (unknown) eigenvalue $\lambda > 1/2$, the goal is to learn a (classical shadows style) classical description of $|\phi\rangle$ which can later be used to estimate expectation values $\langle\phi|O|\phi\rangle$ for any $O$ in some class of observables. We consider the sample-complexity setting in which generating a copy of $\rho$ is expensive, but joint measurements on many copies of the state are possible. We present a protocol for this task scaling with the principal eigenvalue $\lambda$ and show that it is optimal within a space of natural approaches, e.g., applying quantum state purification followed by a single-copy classical shadows scheme. Furthermore, when $\lambda$ is sufficiently close to $1$, the performance of our algorithm is optimal, matching the sample complexity for pure state classical shadows.}
}
Endnote
%0 Conference Paper
%T Principal eigenstate classical shadows
%A Daniel Grier
%A Hakop Pashayan
%A Luke Schaeffer
%B Proceedings of Thirty Seventh Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2024
%E Shipra Agrawal
%E Aaron Roth
%F pmlr-v247-grier24a
%I PMLR
%P 2122--2165
%U https://proceedings.mlr.press/v247/grier24a.html
%V 247
%X Given many copies of an unknown quantum state $\rho$, we consider the task of learning a classical description of its principal eigenstate. Namely, assuming that $\rho$ has an eigenstate $|\phi\rangle$ with (unknown) eigenvalue $\lambda > 1/2$, the goal is to learn a (classical shadows style) classical description of $|\phi\rangle$ which can later be used to estimate expectation values $\langle\phi|O|\phi\rangle$ for any $O$ in some class of observables. We consider the sample-complexity setting in which generating a copy of $\rho$ is expensive, but joint measurements on many copies of the state are possible. We present a protocol for this task scaling with the principal eigenvalue $\lambda$ and show that it is optimal within a space of natural approaches, e.g., applying quantum state purification followed by a single-copy classical shadows scheme. Furthermore, when $\lambda$ is sufficiently close to $1$, the performance of our algorithm is optimal, matching the sample complexity for pure state classical shadows.
APA
Grier, D., Pashayan, H., & Schaeffer, L. (2024). Principal eigenstate classical shadows. Proceedings of Thirty Seventh Conference on Learning Theory, in Proceedings of Machine Learning Research 247:2122-2165. Available from https://proceedings.mlr.press/v247/grier24a.html.