Principal eigenstate classical shadows
Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:2122-2165, 2024.
Abstract
Given many copies of an unknown quantum state ρ, we consider the task of learning a classical description of its principal eigenstate. Namely, assuming that ρ has an eigenstate |ϕ⟩ with (unknown) eigenvalue λ>1/2, the goal is to learn a (classical shadows style) classical description of |ϕ⟩ that can later be used to estimate expectation values ⟨ϕ|O|ϕ⟩ for any O in some class of observables. We consider the sample-complexity setting in which generating a copy of ρ is expensive, but joint measurements on many copies of the state are possible. We present a protocol for this task whose sample complexity scales with the principal eigenvalue λ, and we show that it is optimal within a space of natural approaches, e.g., applying quantum state purification followed by a single-copy classical shadows scheme. Furthermore, when λ is sufficiently close to 1, the performance of our algorithm is optimal, matching the sample complexity for pure-state classical shadows.
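To make the setup concrete, the following is a minimal numerical sketch (not the paper's protocol) of the quantities the abstract refers to: a toy mixed state ρ = λ|ϕ⟩⟨ϕ| + (1−λ)σ with principal eigenvalue λ > 1/2, the target expectation value ⟨ϕ|O|ϕ⟩, the value obtained from the principal eigenstate of ρ, and the biased value Tr(ρO) one would get by treating ρ itself as if it were pure. The dimension, λ, and the random state/observable are illustrative assumptions.

```python
# Illustrative sketch only: constructs a toy state with a dominant eigenstate
# and compares the ideal target <phi|O|phi> against Tr(rho O).
import numpy as np

rng = np.random.default_rng(0)
d = 8          # Hilbert-space dimension (toy choice)
lam = 0.8      # principal eigenvalue, assumed > 1/2

# Random pure state |phi> serving as the principal eigenstate.
phi = rng.normal(size=d) + 1j * rng.normal(size=d)
phi /= np.linalg.norm(phi)
proj_phi = np.outer(phi, phi.conj())

# Random density matrix sigma supported on the orthogonal complement of |phi>.
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
sigma = A @ A.conj().T
perp = np.eye(d) - proj_phi
sigma = perp @ sigma @ perp
sigma /= np.trace(sigma).real
rho = lam * proj_phi + (1 - lam) * sigma

# Random Hermitian observable O.
B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
O = (B + B.conj().T) / 2

# Principal eigenstate recovered from rho (the object a learning protocol targets).
evals, evecs = np.linalg.eigh(rho)
principal = evecs[:, np.argmax(evals)]

ideal = np.vdot(phi, O @ phi).real            # target <phi|O|phi>
recovered = np.vdot(principal, O @ principal).real
naive = np.trace(rho @ O).real                # biased: Tr(rho O)

print(f"<phi|O|phi>              = {ideal:+.4f}")
print(f"principal eigenstate     = {recovered:+.4f}")
print(f"Tr(rho O) (biased)       = {naive:+.4f}")
```

Because λ > 1/2 exceeds every eigenvalue of (1−λ)σ, the top eigenvector of ρ coincides with |ϕ⟩, so the second printed value matches the first, while Tr(ρO) is shifted by the (1−λ)σ contribution.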