Simulating weighted automata over sequences and trees with transformers

Michael Rizvi-Martel, Maude Lizaire, Clara Lacroce, Guillaume Rabusseau
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:2368-2376, 2024.

Abstract

Transformers are ubiquitous models in the natural language processing (NLP) community and have shown impressive empirical successes in the past few years. However, little is understood about how they reason and the limits of their computational capabilities. These models do not process data sequentially, and yet outperform sequential neural models such as RNNs. Recent work has shown that these models can compactly simulate the sequential reasoning abilities of deterministic finite automata (DFAs). This leads to the following question: can transformers simulate the reasoning of more complex finite state machines? In this work, we show that transformers can simulate weighted finite automata (WFAs), a class of models which subsumes DFAs, as well as weighted tree automata (WTAs), a generalization of weighted automata to tree-structured inputs. We prove these claims formally and provide upper bounds on the size of the transformer models needed as a function of the number of states of the target automata. Empirically, we perform synthetic experiments showing that transformers are able to learn these compact solutions via standard gradient-based training.
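
For context, a weighted finite automaton with n states assigns a real value to every input string by multiplying one transition matrix per symbol between an initial and a final weight vector. The short sketch below is illustrative only; it is not code from the paper, and the alphabet, dimensions, and weights are placeholder assumptions:

    # Minimal WFA evaluation sketch (illustrative; all values are placeholders).
    # A WFA with n states is given by an initial vector alpha, a final vector beta,
    # and one n x n transition matrix per input symbol. Its value on a string
    # x1...xT is alpha^T A_{x1} ... A_{xT} beta.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 3                                    # number of states
    alpha = np.array([1.0, 0.0, 0.0])        # initial weights
    beta = np.array([0.0, 0.0, 1.0])         # final weights
    A = {"a": rng.random((n, n)),            # transition matrix for symbol "a"
         "b": rng.random((n, n))}            # transition matrix for symbol "b"

    def wfa_value(word):
        state = alpha                        # running weight vector over the states
        for symbol in word:
            state = state @ A[symbol]        # apply the symbol's transition matrix
        return float(state @ beta)           # contract with the final weights

    print(wfa_value("abba"))

Roughly speaking, a DFA is the special case in which alpha, beta, and each transition matrix contain only 0/1 indicator entries, while weighted tree automata replace this chain of matrix products with a contraction that follows the structure of an input tree.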

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-rizvi-martel24a,
  title     = {Simulating weighted automata over sequences and trees with transformers},
  author    = {Rizvi-Martel, Michael and Lizaire, Maude and Lacroce, Clara and Rabusseau, Guillaume},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {2368--2376},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/rizvi-martel24a/rizvi-martel24a.pdf},
  url       = {https://proceedings.mlr.press/v238/rizvi-martel24a.html}
}
Endnote
%0 Conference Paper
%T Simulating weighted automata over sequences and trees with transformers
%A Michael Rizvi-Martel
%A Maude Lizaire
%A Clara Lacroce
%A Guillaume Rabusseau
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-rizvi-martel24a
%I PMLR
%P 2368--2376
%U https://proceedings.mlr.press/v238/rizvi-martel24a.html
%V 238
APA
Rizvi-Martel, M., Lizaire, M., Lacroce, C., & Rabusseau, G. (2024). Simulating weighted automata over sequences and trees with transformers. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:2368-2376. Available from https://proceedings.mlr.press/v238/rizvi-martel24a.html.
