Online Learning of Quantum States with Logarithmic Loss via VB-FTRL

Wei-Fu Tseng, Kai-Chun Chen, Zi-Hong Xiao, Yen-Huan Li
Proceedings of The 36th International Conference on Algorithmic Learning Theory, PMLR 272:1285-1312, 2025.

Abstract

Online learning of quantum states with the logarithmic loss (LL-OLQS) is a quantum generalization of online portfolio selection (OPS), a classic open problem in online learning for over three decades. This problem also emerges in designing stochastic optimization algorithms for maximum-likelihood quantum state tomography. Recently, Jezequel et al. (2022, arXiv:2209.13932) proposed the VB-FTRL algorithm, the first regret-optimal algorithm for OPS with moderate computational complexity. In this paper, we generalize VB-FTRL for LL-OLQS. Let d denote the dimension and T the number of rounds. The generalized algorithm achieves a regret rate of O(d² log(d + T)) for LL-OLQS. Each iteration of the algorithm consists of solving a semidefinite program that can be implemented in polynomial time by, for example, cutting-plane methods. For comparison, the best-known regret rate for LL-OLQS is currently O(d² log T), achieved by an exponential weight method. However, no explicit implementation is available for the exponential weight method for LL-OLQS. To facilitate the generalization, we introduce the notion of VB-convexity. VB-convexity is a sufficient condition for the volumetric barrier associated with any function to be convex and is of independent interest.
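As background, the logarithmic loss in the abstract can be made concrete. The following is a minimal sketch of the standard LL-OLQS loss, not the paper's algorithm; the function name and example matrices are illustrative.

```python
import numpy as np

# Background sketch (standard LL-OLQS setup, not the paper's VB-FTRL method).
# In round t, the learner predicts a density matrix rho (PSD, unit trace);
# nature reveals a PSD matrix A_t; the learner suffers the logarithmic loss
# -log Tr(A_t rho). Classical OPS is the special case where rho and A_t are
# both diagonal: rho's diagonal entries form the portfolio, A_t's diagonal
# entries the price relatives, and the loss reduces to -log <x, r>.

def log_loss(rho: np.ndarray, A: np.ndarray) -> float:
    """Logarithmic loss -log Tr(A @ rho)."""
    return float(-np.log(np.trace(A @ rho).real))

d = 2
rho = np.eye(d) / d                     # maximally mixed state
A = np.array([[1.0, 0.0],
              [0.0, 0.0]])              # rank-one measurement outcome
print(log_loss(rho, A))                 # Tr(A rho) = 1/2, so loss = log 2
```

The regret after T rounds compares the learner's cumulative loss against the best fixed density matrix in hindsight; the paper's result bounds this by O(d² log(d + T)).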

Cite this Paper


BibTeX
@InProceedings{pmlr-v272-tseng25a,
  title     = {Online Learning of Quantum States with Logarithmic Loss via {VB}-{FTRL}},
  author    = {Tseng, Wei-Fu and Chen, Kai-Chun and Xiao, Zi-Hong and Li, Yen-Huan},
  booktitle = {Proceedings of The 36th International Conference on Algorithmic Learning Theory},
  pages     = {1285--1312},
  year      = {2025},
  editor    = {Kamath, Gautam and Loh, Po-Ling},
  volume    = {272},
  series    = {Proceedings of Machine Learning Research},
  month     = {24--27 Feb},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v272/main/assets/tseng25a/tseng25a.pdf},
  url       = {https://proceedings.mlr.press/v272/tseng25a.html},
  abstract  = {Online learning of quantum states with the logarithmic loss (LL-OLQS) is a quantum generalization of online portfolio selection (OPS), a classic open problem in online learning for over three decades. This problem also emerges in designing stochastic optimization algorithms for maximum-likelihood quantum state tomography. Recently, Jezequel et al. (2022, arXiv:2209.13932) proposed the VB-FTRL algorithm, the first regret-optimal algorithm for OPS with moderate computational complexity. In this paper, we generalize VB-FTRL for LL-OLQS. Let $d$ denote the dimension and $T$ the number of rounds. The generalized algorithm achieves a regret rate of $O ( d^2 \log ( d + T ) )$ for LL-OLQS. Each iteration of the algorithm consists of solving a semidefinite program that can be implemented in polynomial time by, for example, cutting-plane methods. For comparison, the best-known regret rate for LL-OLQS is currently $O ( d^2 \log T )$, achieved by an exponential weight method. However, no explicit implementation is available for the exponential weight method for LL-OLQS. To facilitate the generalization, we introduce the notion of VB-convexity. VB-convexity is a sufficient condition for the volumetric barrier associated with any function to be convex and is of independent interest.}
}
Endnote
%0 Conference Paper
%T Online Learning of Quantum States with Logarithmic Loss via VB-FTRL
%A Wei-Fu Tseng
%A Kai-Chun Chen
%A Zi-Hong Xiao
%A Yen-Huan Li
%B Proceedings of The 36th International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2025
%E Gautam Kamath
%E Po-Ling Loh
%F pmlr-v272-tseng25a
%I PMLR
%P 1285--1312
%U https://proceedings.mlr.press/v272/tseng25a.html
%V 272
%X Online learning of quantum states with the logarithmic loss (LL-OLQS) is a quantum generalization of online portfolio selection (OPS), a classic open problem in online learning for over three decades. This problem also emerges in designing stochastic optimization algorithms for maximum-likelihood quantum state tomography. Recently, Jezequel et al. (2022, arXiv:2209.13932) proposed the VB-FTRL algorithm, the first regret-optimal algorithm for OPS with moderate computational complexity. In this paper, we generalize VB-FTRL for LL-OLQS. Let $d$ denote the dimension and $T$ the number of rounds. The generalized algorithm achieves a regret rate of $O ( d^2 \log ( d + T ) )$ for LL-OLQS. Each iteration of the algorithm consists of solving a semidefinite program that can be implemented in polynomial time by, for example, cutting-plane methods. For comparison, the best-known regret rate for LL-OLQS is currently $O ( d^2 \log T )$, achieved by an exponential weight method. However, no explicit implementation is available for the exponential weight method for LL-OLQS. To facilitate the generalization, we introduce the notion of VB-convexity. VB-convexity is a sufficient condition for the volumetric barrier associated with any function to be convex and is of independent interest.
APA
Tseng, W.-F., Chen, K.-C., Xiao, Z.-H. & Li, Y.-H. (2025). Online Learning of Quantum States with Logarithmic Loss via VB-FTRL. Proceedings of The 36th International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 272:1285-1312. Available from https://proceedings.mlr.press/v272/tseng25a.html.
