Adversarial TableQA: Attention Supervision for Question Answering on Tables

Minseok Cho, Reinald Kim Amplayo, Seung-won Hwang, Jonghyuck Park
Proceedings of The 10th Asian Conference on Machine Learning, PMLR 95:391-406, 2018.

Abstract

The task of answering a question given a text passage has seen great improvements in model performance, thanks to community efforts in building useful datasets. Recently, doubts have been raised as to whether such rapid progress reflects true language understanding. The same question has not been asked in the table question answering (TableQA) task, where we are tasked to answer a query given a table. We show that existing efforts, which use "answers" for both evaluation and supervision in TableQA, exhibit deteriorating performance under adversarial perturbations that do not affect the answer. This insight naturally motivates the development of new models that understand the question and table more precisely. For this goal, we propose Neural Operator (NeOp), a multi-layer sequential network with attention supervision that answers a query given a table. NeOp uses multiple Selective Recurrent Units (SelRUs) to further improve the interpretability of the model's answers. Experiments show that using operand information to train the model significantly improves the performance and interpretability of TableQA models. NeOp outperforms all previous models by a large margin.

Cite this Paper


BibTeX
@InProceedings{pmlr-v95-cho18a,
  title     = {Adversarial TableQA: Attention Supervision for Question Answering on Tables},
  author    = {Cho, Minseok and Amplayo, {Reinald Kim} and Hwang, Seung-won and Park, Jonghyuck},
  booktitle = {Proceedings of The 10th Asian Conference on Machine Learning},
  pages     = {391--406},
  year      = {2018},
  editor    = {Zhu, Jun and Takeuchi, Ichiro},
  volume    = {95},
  series    = {Proceedings of Machine Learning Research},
  month     = {14--16 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v95/cho18a/cho18a.pdf},
  url       = {https://proceedings.mlr.press/v95/cho18a.html},
  abstract  = {The task of answering a question given a text passage has shown great developments on model performance thanks to community efforts in building useful datasets. Recently, there have been doubts whether such rapid progress has been based on truly understanding language. The same question has not been asked in the table question answering (TableQA) task, where we are tasked to answer a query given a table. We show that existing efforts, of using “answers” for both evaluation and supervision for TableQA, show deteriorating performances in adversarial settings of perturbations that do not affect the answer. This insight naturally motivates to develop new models that understand question and table more precisely. For this goal, we propose \textsc{Neural Operator (NeOp)}, a multi-layer sequential network with attention supervision to answer the query given a table. \textsc{NeOp} uses multiple Selective Recurrent Units (SelRUs) to further help the interpretability of the answers of the model. Experiments show that the use of operand information to train the model significantly improves the performance and interpretability of TableQA models. \textsc{NeOp} outperforms all the previous models by a big margin.}
}
Endnote
%0 Conference Paper
%T Adversarial TableQA: Attention Supervision for Question Answering on Tables
%A Minseok Cho
%A Reinald Kim Amplayo
%A Seung-won Hwang
%A Jonghyuck Park
%B Proceedings of The 10th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jun Zhu
%E Ichiro Takeuchi
%F pmlr-v95-cho18a
%I PMLR
%P 391--406
%U https://proceedings.mlr.press/v95/cho18a.html
%V 95
%X The task of answering a question given a text passage has shown great developments on model performance thanks to community efforts in building useful datasets. Recently, there have been doubts whether such rapid progress has been based on truly understanding language. The same question has not been asked in the table question answering (TableQA) task, where we are tasked to answer a query given a table. We show that existing efforts, of using “answers” for both evaluation and supervision for TableQA, show deteriorating performances in adversarial settings of perturbations that do not affect the answer. This insight naturally motivates to develop new models that understand question and table more precisely. For this goal, we propose \textsc{Neural Operator (NeOp)}, a multi-layer sequential network with attention supervision to answer the query given a table. \textsc{NeOp} uses multiple Selective Recurrent Units (SelRUs) to further help the interpretability of the answers of the model. Experiments show that the use of operand information to train the model significantly improves the performance and interpretability of TableQA models. \textsc{NeOp} outperforms all the previous models by a big margin.
APA
Cho, M., Amplayo, R.K., Hwang, S. & Park, J. (2018). Adversarial TableQA: Attention Supervision for Question Answering on Tables. Proceedings of The 10th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 95:391-406. Available from https://proceedings.mlr.press/v95/cho18a.html.