Time-Uniform Self-Normalized Concentration for Vector-Valued Processes (Extended Abstract)

Justin Whitehouse, Zhiwei Steven Wu, Aaditya Ramdas
Proceedings of Thirty Eighth Conference on Learning Theory, PMLR 291:5714-5715, 2025.

Abstract

Self-normalized processes arise naturally in many learning-related tasks. While self-normalized concentration has been extensively studied for scalar-valued processes, there are few results for multidimensional processes outside of the sub-Gaussian setting. In this work, we construct a general, self-normalized inequality for multivariate processes that satisfy a simple yet broad “sub-$\psi$” tail condition, which generalizes assumptions based on cumulant generating functions. From this general inequality, we derive an upper law of the iterated logarithm for sub-$\psi$ vector-valued processes, which is tight up to small constants. We show how our inequality can be leveraged to derive a variety of novel, self-normalized concentration inequalities under both light- and heavy-tailed observations. Further, we provide applications in prototypical statistical tasks, such as parameter estimation in online linear regression, autoregressive modeling, and bounded mean estimation via a new (multivariate) empirical Bernstein concentration inequality.
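
For readers outside this literature, the following is a minimal sketch of what a sub-$\psi$ condition typically asserts in the scalar case, stated under the standard supermartingale formulation; the multivariate condition used in the paper, roughly speaking, replaces $\lambda S_t$ with an inner product $\langle \lambda, S_t \rangle$ and may differ in further details, so this should be read as background rather than the paper's exact definition.

\[
% Sketch (assumption): scalar sub-psi condition. A pair of adapted processes
% (S_t, V_t), with V_t >= 0 nondecreasing, is called sub-psi if, for each
% lambda in [0, lambda_max), the exponential process below is upper bounded
% by a nonnegative supermartingale (M_t(lambda)) with M_0(lambda) <= 1.
\exp\bigl(\lambda S_t - \psi(\lambda)\, V_t\bigr) \;\le\; M_t(\lambda),
\qquad t \ge 0,\ \ \lambda \in [0, \lambda_{\max}).
\]

Choosing $\psi(\lambda) = \lambda^2/2$ recovers the familiar sub-Gaussian case, while other choices of $\psi$ (e.g., sub-exponential or sub-Poisson) capture progressively heavier-tailed behavior, which is what allows a single condition to cover both the light- and heavy-tailed settings mentioned above.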

Cite this Paper


BibTeX
@InProceedings{pmlr-v291-whitehouse25b,
  title     = {Time-Uniform Self-Normalized Concentration for Vector-Valued Processes (Extended Abstract)},
  author    = {Whitehouse, Justin and Wu, Zhiwei Steven and Ramdas, Aaditya},
  booktitle = {Proceedings of Thirty Eighth Conference on Learning Theory},
  pages     = {5714--5715},
  year      = {2025},
  editor    = {Haghtalab, Nika and Moitra, Ankur},
  volume    = {291},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Jun--04 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v291/main/assets/whitehouse25b/whitehouse25b.pdf},
  url       = {https://proceedings.mlr.press/v291/whitehouse25b.html}
}
Endnote
%0 Conference Paper
%T Time-Uniform Self-Normalized Concentration for Vector-Valued Processes (Extended Abstract)
%A Justin Whitehouse
%A Zhiwei Steven Wu
%A Aaditya Ramdas
%B Proceedings of Thirty Eighth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2025
%E Nika Haghtalab
%E Ankur Moitra
%F pmlr-v291-whitehouse25b
%I PMLR
%P 5714--5715
%U https://proceedings.mlr.press/v291/whitehouse25b.html
%V 291
APA
Whitehouse, J., Wu, Z. S., & Ramdas, A. (2025). Time-Uniform Self-Normalized Concentration for Vector-Valued Processes (Extended Abstract). Proceedings of Thirty Eighth Conference on Learning Theory, in Proceedings of Machine Learning Research 291:5714-5715. Available from https://proceedings.mlr.press/v291/whitehouse25b.html.
