Observability of Latent States in Generative AI Models

Tian Yu Liu, Stefano Soatto, Matteo Marchi, Pratik Chaudhari, Paulo Tabuada
Proceedings of the International Conference on Neuro-symbolic Systems, PMLR 288:745-764, 2025.

Abstract

We tackle the question of whether Large Language Models (LLMs), viewed as dynamical systems with state evolving in the embedding space of symbolic tokens, are observable. That is, whether there exist distinct state trajectories that yield the same sequence of generated output tokens, or sequences that belong to the same Nerode equivalence class (‘meaning’). If an LLM is not observable, the state trajectory cannot be determined from input-output observations and can therefore evolve unbeknownst to the user while being potentially accessible to an adversary. We show that current LLMs implemented by autoregressive Transformers are observable: The set of state trajectories that produce the same tokenized output is a singleton, so there are no indistinguishable state trajectories. But if there are ‘system prompts’ not visible to the user, then the set of indistinguishable trajectories becomes non-trivial, meaning that there can be multiple state trajectories that yield the same tokenized output. We prove these claims analytically, and show examples of modifications to standard LLMs that engender unobservable behavior. Our analysis sheds light on possible designs that would enable a model to perform non-trivial computation that is not visible to the user, and may suggest controls that can be implemented to prevent unintended behavior. Finally, we cast the definition of ‘feeling’ from cognitive psychology in terms of measurable quantities in an LLM which, unlike humans, are directly measurable. We conclude that, in LLMs, unobservable state trajectories satisfy the definition of ‘feelings’ provided by the American Psychological Association, suitably modified to avoid self-reference.
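
To make the abstract's central notion concrete, the following is a minimal formal sketch of observability in this setting, written in LaTeX. The symbols f, h, x_t, u_t, y_t are illustrative notation introduced here and need not match the paper's; the sketch assumes the standard discrete-time view of an autoregressive model, with the latent state evolving in the token-embedding space.

% Illustrative sketch only; notation is ours, not necessarily the paper's.
\[
  x_{t+1} = f(x_t, u_t), \qquad y_t = h(x_t),
\]
% x_t: latent state (e.g., the embedded context at step t);
% u_t: user-provided input tokens; y_t: emitted output token.
Two initial states $x_0 \neq x_0'$ are \emph{indistinguishable} if they generate the same output tokens under every admissible input sequence:
\[
  h\big(x_t(x_0, u_{0:t-1})\big) = h\big(x_t(x_0', u_{0:t-1})\big)
  \quad \text{for all } t \ge 0 \text{ and all } u_{0:t-1}.
\]
The system is \emph{observable} when every indistinguishability class is a singleton, which is the sense in which the paper shows that standard autoregressive Transformers are observable, whereas system prompts hidden from the user enlarge these classes beyond singletons.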

Cite this Paper


BibTeX
@InProceedings{pmlr-v288-liu25b,
  title     = {Observability of Latent States in Generative AI Models},
  author    = {Liu, Tian Yu and Soatto, Stefano and Marchi, Matteo and Chaudhari, Pratik and Tabuada, Paulo},
  booktitle = {Proceedings of the International Conference on Neuro-symbolic Systems},
  pages     = {745--764},
  year      = {2025},
  editor    = {Pappas, George and Ravikumar, Pradeep and Seshia, Sanjit A.},
  volume    = {288},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v288/main/assets/liu25b/liu25b.pdf},
  url       = {https://proceedings.mlr.press/v288/liu25b.html},
  abstract  = {We tackle the question of whether Large Language Models (LLMs), viewed as dynamical systems with state evolving in the embedding space of symbolic tokens, are observable. That is, whether there exist distinct state trajectories that yield the same sequence of generated output tokens, or sequences that belong to the same Nerode equivalence class (‘meaning’). If an LLM is not observable, the state trajectory cannot be determined from input-output observations and can therefore evolve unbeknownst to the user while being potentially accessible to an adversary. We show that current LLMs implemented by autoregressive Transformers are observable: The set of state trajectories that produce the same tokenized output is a singleton, so there are no indistinguishable state trajectories. But if there are ‘system prompts’ not visible to the user, then the set of indistinguishable trajectories becomes non-trivial, meaning that there can be multiple state trajectories that yield the same tokenized output. We prove these claims analytically, and show examples of modifications to standard LLMs that engender unobservable behavior. Our analysis sheds light on possible designs that would enable a model to perform non-trivial computation that is not visible to the user, and may suggest controls that can be implemented to prevent unintended behavior. Finally, we cast the definition of ‘feeling’ from cognitive psychology in terms of measurable quantities in an LLM which, unlike humans, are directly measurable. We conclude that, in LLMs, unobservable state trajectories satisfy the definition of ‘feelings’ provided by the American Psychological Association, suitably modified to avoid self-reference.}
}
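
For completeness, a minimal LaTeX usage sketch for the entry above; the file name references.bib and the bibliography style are illustrative choices, not prescribed by the proceedings.

% Minimal usage sketch; assumes references.bib contains the @InProceedings entry above.
\documentclass{article}
\begin{document}
As shown in \cite{pmlr-v288-liu25b}, observability depends on whether system prompts are visible.
\bibliographystyle{plain}   % any style handling @InProceedings works; fields like pdf/abstract are ignored
\bibliography{references}   % references.bib holds the entry with key pmlr-v288-liu25b
\end{document}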
Endnote
%0 Conference Paper
%T Observability of Latent States in Generative AI Models
%A Tian Yu Liu
%A Stefano Soatto
%A Matteo Marchi
%A Pratik Chaudhari
%A Paulo Tabuada
%B Proceedings of the International Conference on Neuro-symbolic Systems
%C Proceedings of Machine Learning Research
%D 2025
%E George Pappas
%E Pradeep Ravikumar
%E Sanjit A. Seshia
%F pmlr-v288-liu25b
%I PMLR
%P 745--764
%U https://proceedings.mlr.press/v288/liu25b.html
%V 288
%X We tackle the question of whether Large Language Models (LLMs), viewed as dynamical systems with state evolving in the embedding space of symbolic tokens, are observable. That is, whether there exist distinct state trajectories that yield the same sequence of generated output tokens, or sequences that belong to the same Nerode equivalence class (‘meaning’). If an LLM is not observable, the state trajectory cannot be determined from input-output observations and can therefore evolve unbeknownst to the user while being potentially accessible to an adversary. We show that current LLMs implemented by autoregressive Transformers are observable: The set of state trajectories that produce the same tokenized output is a singleton, so there are no indistinguishable state trajectories. But if there are ‘system prompts’ not visible to the user, then the set of indistinguishable trajectories becomes non-trivial, meaning that there can be multiple state trajectories that yield the same tokenized output. We prove these claims analytically, and show examples of modifications to standard LLMs that engender unobservable behavior. Our analysis sheds light on possible designs that would enable a model to perform non-trivial computation that is not visible to the user, and may suggest controls that can be implemented to prevent unintended behavior. Finally, we cast the definition of ‘feeling’ from cognitive psychology in terms of measurable quantities in an LLM which, unlike humans, are directly measurable. We conclude that, in LLMs, unobservable state trajectories satisfy the definition of ‘feelings’ provided by the American Psychological Association, suitably modified to avoid self-reference.
APA
Liu, T.Y., Soatto, S., Marchi, M., Chaudhari, P. & Tabuada, P. (2025). Observability of Latent States in Generative AI Models. Proceedings of the International Conference on Neuro-symbolic Systems, in Proceedings of Machine Learning Research 288:745-764. Available from https://proceedings.mlr.press/v288/liu25b.html.
