Combining Graph and Recurrent Networks for Efficient and Effective Segment Tagging
Proceedings of the First Learning on Graphs Conference, PMLR 198:41:1-41:14, 2022.
Abstract
Graph Neural Networks have been demonstrated to be highly effective and efficient in learning relationships between nodes, both locally and globally. They are also well suited to document-related tasks due to their flexibility and capacity to adapt to complex layouts. However, information extraction from documents remains a challenge, especially when dealing with unstructured documents. The semantic tagging of text segments (a.k.a. entity tagging) is one of the essential tasks. In this paper, we present SeqGraph, a new model that combines Transformers for text feature extraction with Graph Neural Networks and recurrent layers for segment interaction, enabling efficient and effective segment tagging. We address some of the limitations of current architectures and Transformer-based solutions. We optimize the model architecture by combining Graph Attention layers (GAT) and Gated Recurrent Units (GRUs), and we provide an ablation study on the design choices to demonstrate the effectiveness of SeqGraph. The proposed model is extremely light (4 million parameters), reducing the parameter count by a factor of 100 to 200 compared to its competitors, while achieving state-of-the-art results (97.23% F1 score on the CORD dataset).
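To make the GAT-plus-GRU combination concrete, below is a minimal sketch of one such interaction block in PyTorch with PyTorch Geometric. This is not the authors' code: the hidden size, head count, fusion order (GAT first, then GRU over the segments in reading order), and the `GATGRUBlock` name are all assumptions for illustration only.

```python
# Illustrative sketch (assumed design, not the published SeqGraph code):
# a Graph Attention layer mixes information across segment nodes, then a
# GRU refines each segment along the document's reading order.
import torch
import torch.nn as nn
from torch_geometric.nn import GATConv


class GATGRUBlock(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        # Multi-head graph attention over the segment graph;
        # out_channels * heads == dim keeps the feature size constant.
        self.gat = GATConv(dim, dim // heads, heads=heads)
        # GRU over the sequence of segments (reading order).
        self.gru = nn.GRU(dim, dim, batch_first=True)

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # x: (num_segments, dim) segment embeddings, e.g. from a Transformer.
        h = torch.relu(self.gat(x, edge_index))
        # Treat all segments as one sequence of length num_segments.
        out, _ = self.gru(h.unsqueeze(0))
        return out.squeeze(0)


# Toy usage: 5 segments with 256-dim features and a small edge list.
x = torch.randn(5, 256)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
logits = nn.Linear(256, 30)(GATGRUBlock()(x, edge_index))  # per-segment tag logits
print(logits.shape)  # torch.Size([5, 30])
```

A final linear layer then maps each segment's representation to tag logits; in this sketch the number of tag classes (30) is an arbitrary placeholder.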