TX-Ray: Quantifying and Explaining Model-Knowledge Transfer in (Un-)Supervised NLP

Nils Rethmeier, Vageesh Kumar Saxena, Isabelle Augenstein
Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI), PMLR 124:440-449, 2020.

Abstract

While state-of-the-art NLP explainability (XAI) methods focus on explaining per-sample decisions in supervised end or probing tasks, this is insufficient to explain and quantify model knowledge transfer during (un-)supervised training. Thus, for TX-Ray, we modify the established computer vision explainability principle of ‘visualizing preferred inputs of neurons’ to make it usable for both NLP and for transfer analysis. This allows one to analyze, track and quantify how self- or supervised NLP models first build knowledge abstractions in pretraining (1), and then transfer abstractions to a new domain (2), or adapt them during supervised finetuning (3) – see Fig. 1. TX-Ray expresses neurons as feature preference distributions to quantify fine-grained knowledge transfer or adaptation and guide human analysis. We find that, similar to Lottery Ticket based pruning, TX-Ray based pruning can improve test set generalization and that it can reveal how early stages of self-supervision automatically learn linguistic abstractions like parts-of-speech.
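To make the idea of "expressing neurons as feature preference distributions" concrete, below is a minimal illustrative sketch, not the authors' released code. It assumes we have token-level hidden activations recorded once after pretraining and once after fine-tuning; for each neuron it builds a normalized histogram of the tokens on which that neuron was the maximally activated unit, then quantifies how far the neuron's preferences shift via the Hellinger distance between the two histograms. The helper names (preference_distribution, hellinger) and the toy random activations are assumptions for illustration.

import numpy as np
from collections import Counter

def preference_distribution(tokens, activations, neuron, vocab):
    """Normalized histogram over tokens for which `neuron` fired maximally.

    tokens      : list[str], one token per time step
    activations : (T, H) array of hidden activations
    neuron      : index of the neuron to profile
    vocab       : fixed token ordering so distributions are comparable
    """
    winners = activations.argmax(axis=1)           # max-activated neuron per token
    counts = Counter(t for t, w in zip(tokens, winners) if w == neuron)
    dist = np.array([counts.get(t, 0) for t in vocab], dtype=float)
    total = dist.sum()
    return dist / total if total > 0 else dist     # dead neuron -> zero vector

def hellinger(p, q):
    """Hellinger distance in [0, 1]; 0 = identical preferences, 1 = disjoint."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

# Toy example: 4 tokens, 3 neurons; random activations stand in for a real
# encoder's hidden states before and after fine-tuning.
rng = np.random.default_rng(0)
tokens = ["the", "cat", "sat", "down"]
vocab = sorted(set(tokens))
pre = rng.random((len(tokens), 3))    # activations after pretraining
post = rng.random((len(tokens), 3))   # activations after fine-tuning

for n in range(3):
    p = preference_distribution(tokens, pre, n, vocab)
    q = preference_distribution(tokens, post, n, vocab)
    print(f"neuron {n}: preference shift = {hellinger(p, q):.3f}")

Under this reading, neurons whose distributions barely shift have transferred largely unchanged, while large distances flag adapted or repurposed neurons, and neurons that attract little activation mass are natural candidates for the pruning experiments the abstract mentions.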

Cite this Paper


BibTeX
@InProceedings{pmlr-v124-rethmeier20a,
  title     = {TX-Ray: Quantifying and Explaining Model-Knowledge Transfer in (Un-)Supervised NLP},
  author    = {Rethmeier, Nils and Kumar Saxena, Vageesh and Augenstein, Isabelle},
  booktitle = {Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI)},
  pages     = {440--449},
  year      = {2020},
  editor    = {Peters, Jonas and Sontag, David},
  volume    = {124},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--06 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v124/rethmeier20a/rethmeier20a.pdf},
  url       = {https://proceedings.mlr.press/v124/rethmeier20a.html},
  abstract  = {While state-of-the-art NLP explainability (XAI) methods focus on explaining per-sample decisions in supervised end or probing tasks, this is insufficient to explain and quantify model knowledge transfer during (un-)supervised training. Thus, for TX-Ray, we modify the established computer vision explainability principle of ‘visualizing preferred inputs of neurons’ to make it usable for both NLP and for transfer analysis. This allows one to analyze, track and quantify how self- or supervised NLP models first build knowledge abstractions in pretraining (1), and then transfer abstractions to a new domain (2), or adapt them during supervised finetuning (3) – see Fig. 1. TX-Ray expresses neurons as feature preference distributions to quantify fine-grained knowledge transfer or adaptation and guide human analysis. We find that, similar to Lottery Ticket based pruning, TX-Ray based pruning can improve test set generalization and that it can reveal how early stages of self-supervision automatically learn linguistic abstractions like parts-of-speech.}
}
Endnote
%0 Conference Paper
%T TX-Ray: Quantifying and Explaining Model-Knowledge Transfer in (Un-)Supervised NLP
%A Nils Rethmeier
%A Vageesh Kumar Saxena
%A Isabelle Augenstein
%B Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI)
%C Proceedings of Machine Learning Research
%D 2020
%E Jonas Peters
%E David Sontag
%F pmlr-v124-rethmeier20a
%I PMLR
%P 440--449
%U https://proceedings.mlr.press/v124/rethmeier20a.html
%V 124
%X While state-of-the-art NLP explainability (XAI) methods focus on explaining per-sample decisions in supervised end or probing tasks, this is insufficient to explain and quantify model knowledge transfer during (un-)supervised training. Thus, for TX-Ray, we modify the established computer vision explainability principle of ‘visualizing preferred inputs of neurons’ to make it usable for both NLP and for transfer analysis. This allows one to analyze, track and quantify how self- or supervised NLP models first build knowledge abstractions in pretraining (1), and then transfer abstractions to a new domain (2), or adapt them during supervised finetuning (3) – see Fig. 1. TX-Ray expresses neurons as feature preference distributions to quantify fine-grained knowledge transfer or adaptation and guide human analysis. We find that, similar to Lottery Ticket based pruning, TX-Ray based pruning can improve test set generalization and that it can reveal how early stages of self-supervision automatically learn linguistic abstractions like parts-of-speech.
APA
Rethmeier, N., Kumar Saxena, V. & Augenstein, I. (2020). TX-Ray: Quantifying and Explaining Model-Knowledge Transfer in (Un-)Supervised NLP. Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI), in Proceedings of Machine Learning Research 124:440-449. Available from https://proceedings.mlr.press/v124/rethmeier20a.html.
