Joint Learning of Words and Meaning Representations for Open-Text Semantic Parsing

Antoine Bordes, Xavier Glorot, Jason Weston, Yoshua Bengio
Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, PMLR 22:127-135, 2012.

Abstract

Open-text semantic parsers are designed to interpret any statement in natural language by inferring a corresponding meaning representation (MR - a formal representation of its sense). Unfortunately, large scale systems cannot be easily machine-learned due to lack of directly supervised data. We propose a method that learns to assign MRs to a wide range of text (using a dictionary of more than 70,000 words mapped to more than 40,000 entities) thanks to a training scheme that combines learning from knowledge bases (e.g. WordNet) with learning from raw text. The model jointly learns representations of words, entities and MRs via a multi-task training process operating on these diverse sources of data. Hence, the system ends up providing methods for knowledge acquisition and word-sense disambiguation within the context of semantic parsing in a single elegant framework. Experiments on these various tasks indicate the promise of the approach.
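The joint-embedding idea described in the abstract can be illustrated with a minimal margin-ranking sketch: entities and relations each get a vector, and true knowledge-base triples are trained to score above randomly corrupted ones. This is a toy reconstruction in plain NumPy, not the authors' model: the names (`score`, `train_step`), the additive energy function, and the synthetic KB are all assumptions for illustration; the paper's actual architecture and training data differ.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_ent, n_rel = 20, 50, 5

# Toy setup: one embedding vector per entity and per relation.
E = rng.normal(scale=0.1, size=(n_ent, dim))   # entity embeddings
R = rng.normal(scale=0.1, size=(n_rel, dim))   # relation embeddings

def score(lhs, rel, rhs):
    """Higher = more plausible. A simple additive energy for illustration;
    the paper's actual scoring function is different."""
    return -np.linalg.norm(E[lhs] + R[rel] - E[rhs])

def train_step(lhs, rel, rhs, lr=0.05, margin=1.0):
    """One margin-ranking update: a true (lhs, rel, rhs) triple from the KB
    should outscore a corrupted triple with a random right-hand entity."""
    bad = rng.integers(n_ent)
    loss = margin - score(lhs, rel, rhs) + score(lhs, rel, bad)
    if loss > 0:
        d_pos = E[lhs] + R[rel] - E[rhs]
        d_neg = E[lhs] + R[rel] - E[bad]
        g_pos = d_pos / (np.linalg.norm(d_pos) or 1.0)  # grad of ||d_pos||
        g_neg = d_neg / (np.linalg.norm(d_neg) or 1.0)
        # loss = margin + ||d_pos|| - ||d_neg||  (when positive)
        E[lhs] -= lr * (g_pos - g_neg)
        R[rel] -= lr * (g_pos - g_neg)
        E[rhs] -= lr * (-g_pos)
        E[bad] -= lr * g_neg
    return max(loss, 0.0)

# Tiny synthetic KB: triples (lhs, rel, rhs).
triples = [(i, i % n_rel, (i + 1) % n_ent) for i in range(n_ent)]
for epoch in range(200):
    for t in triples:
        train_step(*t)
```

After training, true triples should on average score above corrupted ones, which is the ranking property the paper's multi-task training also enforces (there, across both WordNet triples and raw text).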

Cite this Paper


BibTeX
@InProceedings{pmlr-v22-bordes12,
  title     = {Joint Learning of Words and Meaning Representations for Open-Text Semantic Parsing},
  author    = {Antoine Bordes and Xavier Glorot and Jason Weston and Yoshua Bengio},
  booktitle = {Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics},
  pages     = {127--135},
  year      = {2012},
  editor    = {Neil D. Lawrence and Mark Girolami},
  volume    = {22},
  series    = {Proceedings of Machine Learning Research},
  address   = {La Palma, Canary Islands},
  month     = {21--23 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v22/bordes12/bordes12.pdf},
  url       = {http://proceedings.mlr.press/v22/bordes12.html},
  abstract  = {Open-text semantic parsers are designed to interpret any statement in natural language by inferring a corresponding meaning representation (MR - a formal representation of its sense). Unfortunately, large scale systems cannot be easily machine-learned due to lack of directly supervised data. We propose a method that learns to assign MRs to a wide range of text (using a dictionary of more than 70,000 words mapped to more than 40,000 entities) thanks to a training scheme that combines learning from knowledge bases (e.g. WordNet) with learning from raw text. The model jointly learns representations of words, entities and MRs via a multi-task training process operating on these diverse sources of data. Hence, the system ends up providing methods for knowledge acquisition and word-sense disambiguation within the context of semantic parsing in a single elegant framework. Experiments on these various tasks indicate the promise of the approach.}
}
Endnote
%0 Conference Paper
%T Joint Learning of Words and Meaning Representations for Open-Text Semantic Parsing
%A Antoine Bordes
%A Xavier Glorot
%A Jason Weston
%A Yoshua Bengio
%B Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2012
%E Neil D. Lawrence
%E Mark Girolami
%F pmlr-v22-bordes12
%I PMLR
%J Proceedings of Machine Learning Research
%P 127--135
%U http://proceedings.mlr.press
%V 22
%W PMLR
%X Open-text semantic parsers are designed to interpret any statement in natural language by inferring a corresponding meaning representation (MR - a formal representation of its sense). Unfortunately, large scale systems cannot be easily machine-learned due to lack of directly supervised data. We propose a method that learns to assign MRs to a wide range of text (using a dictionary of more than 70,000 words mapped to more than 40,000 entities) thanks to a training scheme that combines learning from knowledge bases (e.g. WordNet) with learning from raw text. The model jointly learns representations of words, entities and MRs via a multi-task training process operating on these diverse sources of data. Hence, the system ends up providing methods for knowledge acquisition and word-sense disambiguation within the context of semantic parsing in a single elegant framework. Experiments on these various tasks indicate the promise of the approach.
RIS
TY - CPAPER
TI - Joint Learning of Words and Meaning Representations for Open-Text Semantic Parsing
AU - Antoine Bordes
AU - Xavier Glorot
AU - Jason Weston
AU - Yoshua Bengio
BT - Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics
PY - 2012/03/21
DA - 2012/03/21
ED - Neil D. Lawrence
ED - Mark Girolami
ID - pmlr-v22-bordes12
PB - PMLR
SP - 127
DP - PMLR
EP - 135
L1 - http://proceedings.mlr.press/v22/bordes12/bordes12.pdf
UR - http://proceedings.mlr.press/v22/bordes12.html
AB - Open-text semantic parsers are designed to interpret any statement in natural language by inferring a corresponding meaning representation (MR - a formal representation of its sense). Unfortunately, large scale systems cannot be easily machine-learned due to lack of directly supervised data. We propose a method that learns to assign MRs to a wide range of text (using a dictionary of more than 70,000 words mapped to more than 40,000 entities) thanks to a training scheme that combines learning from knowledge bases (e.g. WordNet) with learning from raw text. The model jointly learns representations of words, entities and MRs via a multi-task training process operating on these diverse sources of data. Hence, the system ends up providing methods for knowledge acquisition and word-sense disambiguation within the context of semantic parsing in a single elegant framework. Experiments on these various tasks indicate the promise of the approach.
ER -
APA
Bordes, A., Glorot, X., Weston, J. & Bengio, Y. (2012). Joint Learning of Words and Meaning Representations for Open-Text Semantic Parsing. Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, in PMLR 22:127-135.