Principal eigenstate classical shadows
Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:2122-2165, 2024.
Abstract
Given many copies of an unknown quantum state $\rho$, we consider the task of learning a classical description of its principal eigenstate. Namely, assuming that $\rho$ has an eigenstate $|\phi\rangle$ with (unknown) eigenvalue $\lambda > 1/2$, the goal is to learn a (classical-shadows-style) classical description of $|\phi\rangle$ that can later be used to estimate expectation values $\langle\phi|O|\phi\rangle$ for any $O$ in some class of observables. We consider the sample-complexity setting in which generating a copy of $\rho$ is expensive, but joint measurements on many copies of the state are possible. We present a protocol for this task whose sample complexity scales with the principal eigenvalue $\lambda$, and we show that it is optimal within a space of natural approaches, e.g., applying quantum state purification followed by a single-copy classical shadows scheme. Furthermore, when $\lambda$ is sufficiently close to $1$, the performance of our algorithm is optimal, matching the sample complexity for pure-state classical shadows.
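To ground what "a classical description that can later be used to estimate expectation values" means in the classical-shadows framework, below is a minimal sketch of the standard single-copy scheme with random per-qubit Pauli measurements, applied to a toy pure state. It is only a generic illustration of the shadows workflow, not the joint-measurement, principal-eigenstate protocol developed in the paper; the state, observable, and sample count are arbitrary choices for the demo.

```python
# Minimal sketch of the standard single-copy classical shadows scheme
# (random per-qubit Pauli measurements), shown only to illustrate how a
# classical description yields estimates of <phi|O|phi>. This is NOT the
# paper's joint-measurement protocol for principal eigenstates; the state,
# observable, and sample count below are arbitrary demo choices.
import numpy as np
from functools import reduce

rng = np.random.default_rng(0)

PAULI_EIGVECS = {
    # Columns are the +1 and -1 eigenvectors of X, Y, Z respectively.
    "X": np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2),
    "Y": np.array([[1, 1], [1j, -1j]], dtype=complex) / np.sqrt(2),
    "Z": np.array([[1, 0], [0, 1]], dtype=complex),
}

def shadow_snapshot(psi, n_qubits):
    """Measure one copy of |psi> in a random per-qubit Pauli basis and
    return the classical snapshot: the tensor product of 3|s><s| - I."""
    bases = rng.choice(list(PAULI_EIGVECS), size=n_qubits)
    U = reduce(np.kron, [PAULI_EIGVECS[b] for b in bases])  # joint eigenbasis
    probs = np.abs(U.conj().T @ psi) ** 2                   # Born rule
    outcome = rng.choice(2 ** n_qubits, p=probs / probs.sum())
    factors = []
    for j, b in enumerate(bases):
        bit = (outcome >> (n_qubits - 1 - j)) & 1           # outcome on qubit j
        s = PAULI_EIGVECS[b][:, bit:bit + 1]
        factors.append(3 * (s @ s.conj().T) - np.eye(2))    # inverted channel
    return reduce(np.kron, factors)

def estimate_expectation(psi, O, n_qubits, n_samples):
    """Average Tr(O * snapshot) over many measured copies of |psi>."""
    return np.mean([np.trace(O @ shadow_snapshot(psi, n_qubits)).real
                    for _ in range(n_samples)])

# Toy example: |phi> = |01>, observable Z on the second qubit (true value -1).
phi = np.zeros(4, dtype=complex)
phi[1] = 1.0
Z = np.diag([1.0, -1.0]).astype(complex)
O = np.kron(np.eye(2), Z)
print(estimate_expectation(phi, O, n_qubits=2, n_samples=5000))  # approx -1
```

In this single-copy picture, the stored snapshots constitute the classical description; the paper's setting differs in that copies of $\rho$ are mixed (with principal eigenvalue $\lambda$) and joint measurements across copies are allowed.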