A Tensor Decomposition Perspective on Second-order RNNs

Maude Lizaire, Michael Rizvi-Martel, Marawan Gamal, Guillaume Rabusseau
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:32595-32611, 2024.

Abstract

Second-order Recurrent Neural Networks (2RNNs) extend RNNs by leveraging second-order interactions for sequence modelling. These models are provably more expressive than their first-order counterparts and have connections to well-studied models from formal language theory. However, their large parameter tensor makes computations intractable. To circumvent this issue, one approach, known as MIRNN, consists of limiting the type of interactions used by the model. Another is to leverage tensor decomposition to diminish the parameter count. In this work, we study the model resulting from parameterizing 2RNNs using the CP decomposition, which we call CPRNN. Intuitively, restricting the rank of the decomposition should reduce expressivity. We analyze how rank and hidden size affect model capacity and show the relationships between RNNs, 2RNNs, MIRNNs, and CPRNNs based on these parameters. We support these results empirically with experiments on the Penn Treebank dataset, which demonstrate that, with a fixed parameter budget, CPRNNs outperform RNNs, 2RNNs, and MIRNNs with the right choice of rank and hidden size.
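To make the parameter-count trade-off concrete, here is a minimal sketch of a CP-factored second-order recurrent update. It assumes the standard rank-R CP parameterization of the third-order weight tensor; the class name, factor names (A, B, C), and toy dimensions are illustrative assumptions, not taken from the paper.

import numpy as np

# Hypothetical sketch: a full 2RNN cell uses a third-order tensor W of shape
# (input_dim, hidden_dim, hidden_dim), i.e. O(d * n^2) parameters. Replacing W
# by a rank-R CP decomposition, W ~ sum_r a_r (x) b_r (x) c_r, gives three
# factor matrices A (d x R), B (n x R), C (R x n), so the bilinear interaction
# between input and hidden state is computed in the rank space without ever
# materializing W.
class CPRNNCellSketch:
    def __init__(self, input_dim, hidden_dim, rank, seed=0):
        rng = np.random.default_rng(seed)
        scale = 0.1
        self.A = rng.normal(0, scale, (input_dim, rank))   # input factor
        self.B = rng.normal(0, scale, (hidden_dim, rank))  # hidden-state factor
        self.C = rng.normal(0, scale, (rank, hidden_dim))  # output factor
        self.b = np.zeros(hidden_dim)                      # bias

    def step(self, x, h):
        # Second-order (bilinear) interaction between x and h through the CP
        # factors: elementwise product in the rank-R space, then a map back to
        # the hidden space. Cost is O(R(d + n)) instead of O(d * n^2).
        z = (x @ self.A) * (h @ self.B)      # shape (rank,)
        return np.tanh(z @ self.C + self.b)  # next hidden state, shape (hidden_dim,)

# Usage on a toy sequence:
cell = CPRNNCellSketch(input_dim=8, hidden_dim=16, rank=4)
h = np.zeros(16)
for x in np.random.default_rng(1).normal(size=(5, 8)):
    h = cell.step(x, h)
print(h.shape)  # (16,)

Under this factorization the recurrent weights cost R(d + 2n) parameters rather than the d*n^2 of the full tensor, which illustrates why, at a fixed parameter budget, rank and hidden size can be traded off against each other as studied in the paper.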

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-lizaire24a,
  title     = {A Tensor Decomposition Perspective on Second-order {RNN}s},
  author    = {Lizaire, Maude and Rizvi-Martel, Michael and Gamal, Marawan and Rabusseau, Guillaume},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {32595--32611},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/lizaire24a/lizaire24a.pdf},
  url       = {https://proceedings.mlr.press/v235/lizaire24a.html}
}
Endnote
%0 Conference Paper
%T A Tensor Decomposition Perspective on Second-order RNNs
%A Maude Lizaire
%A Michael Rizvi-Martel
%A Marawan Gamal
%A Guillaume Rabusseau
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-lizaire24a
%I PMLR
%P 32595--32611
%U https://proceedings.mlr.press/v235/lizaire24a.html
%V 235
APA
Lizaire, M., Rizvi-Martel, M., Gamal, M. & Rabusseau, G. (2024). A Tensor Decomposition Perspective on Second-order RNNs. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:32595-32611. Available from https://proceedings.mlr.press/v235/lizaire24a.html.