TAYSIR Competition: Transformer+RNN: Algorithms to Yield Simple and Interpretable Representations
Proceedings of the 16th edition of the International Conference on Grammatical Inference, PMLR 217:275-290, 2023.
Abstract
This article presents the content of the competition Transformers+RNN: Algorithms to Yield Simple and Interpretable Representations (TAYSIR, the Arabic word for ‘simple’), an online challenge held in Spring 2023 on extracting simpler models from already trained neural networks. These neural networks were trained on sequential categorical/symbolic data. Some of these data were artificial, while others came from real-world problems (such as Natural Language Processing, Bioinformatics, and Software Engineering). The trained models covered a large spectrum of architectures, from Simple Recurrent Neural Networks (SRN) to Transformers, including Gated Recurrent Units (GRU) and Long Short-Term Memory (LSTM) networks. No constraint was placed on the surrogate models submitted by the participants: any model working on sequential data was accepted. Two tracks were proposed: neural networks trained on Binary Classification tasks, and neural networks trained on Language Modeling tasks. The evaluation of the surrogate models took into account both the simplicity of the extracted model and the quality of its approximation of the original model.