Towards a Deep and Unified Understanding of Deep Neural Models in NLP

Chaoyu Guan, Xiting Wang, Quanshi Zhang, Runjin Chen, Di He, Xing Xie
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:2454-2463, 2019.

Abstract

We define a unified information-based measure to provide quantitative explanations on how intermediate layers of deep Natural Language Processing (NLP) models leverage information of input words. Our method advances existing explanation methods by addressing issues in coherency and generality. Explanations generated by using our method are consistent and faithful across different timestamps, layers, and models. We show how our method can be applied to four widely used models in NLP and explain their performances on three real-world benchmark datasets.

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-guan19a,
  title     = {Towards a Deep and Unified Understanding of Deep Neural Models in {NLP}},
  author    = {Guan, Chaoyu and Wang, Xiting and Zhang, Quanshi and Chen, Runjin and He, Di and Xie, Xing},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {2454--2463},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/guan19a/guan19a.pdf},
  url       = {https://proceedings.mlr.press/v97/guan19a.html},
  abstract  = {We define a unified information-based measure to provide quantitative explanations on how intermediate layers of deep Natural Language Processing (NLP) models leverage information of input words. Our method advances existing explanation methods by addressing issues in coherency and generality. Explanations generated by using our method are consistent and faithful across different timestamps, layers, and models. We show how our method can be applied to four widely used models in NLP and explain their performances on three real-world benchmark datasets.}
}
Endnote
%0 Conference Paper
%T Towards a Deep and Unified Understanding of Deep Neural Models in NLP
%A Chaoyu Guan
%A Xiting Wang
%A Quanshi Zhang
%A Runjin Chen
%A Di He
%A Xing Xie
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-guan19a
%I PMLR
%P 2454--2463
%U https://proceedings.mlr.press/v97/guan19a.html
%V 97
%X We define a unified information-based measure to provide quantitative explanations on how intermediate layers of deep Natural Language Processing (NLP) models leverage information of input words. Our method advances existing explanation methods by addressing issues in coherency and generality. Explanations generated by using our method are consistent and faithful across different timestamps, layers, and models. We show how our method can be applied to four widely used models in NLP and explain their performances on three real-world benchmark datasets.
APA
Guan, C., Wang, X., Zhang, Q., Chen, R., He, D., & Xie, X. (2019). Towards a Deep and Unified Understanding of Deep Neural Models in NLP. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:2454-2463. Available from https://proceedings.mlr.press/v97/guan19a.html.