Domain Adaptation for Sequence Labeling Tasks with a Probabilistic Language Adaptation Model

Min Xiao, Yuhong Guo
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(1):293-301, 2013.

Abstract

In this paper, we propose to address the problem of domain adaptation for sequence labeling tasks via distributed representation learning by using a log-bilinear language adaptation model. The proposed neural probabilistic language model simultaneously models two different but related data distributions in the source and target domains based on induced distributed representations, which encode both generalizable and domain-specific latent features. We then use the learned dense real-valued representations as augmenting features for natural language processing systems. We empirically evaluate the proposed learning technique on the WSJ and MEDLINE domains with POS tagging systems, and on the WSJ and Brown corpora with syntactic chunking and named entity recognition systems. Our primary results show that the proposed domain adaptation method outperforms a number of comparison methods for cross-domain sequence labeling tasks.

Cite this Paper

BibTeX
@InProceedings{pmlr-v28-xiao13,
  title     = {Domain Adaptation for Sequence Labeling Tasks with a Probabilistic Language Adaptation Model},
  author    = {Xiao, Min and Guo, Yuhong},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages     = {293--301},
  year      = {2013},
  editor    = {Dasgupta, Sanjoy and McAllester, David},
  volume    = {28},
  number    = {1},
  series    = {Proceedings of Machine Learning Research},
  address   = {Atlanta, Georgia, USA},
  month     = {17--19 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v28/xiao13.pdf},
  url       = {https://proceedings.mlr.press/v28/xiao13.html},
  abstract  = {In this paper, we propose to address the problem of domain adaptation for sequence labeling tasks via distributed representation learning by using a log-bilinear language adaptation model. The proposed neural probabilistic language model simultaneously models two different but related data distributions in the source and target domains based on induced distributed representations, which encode both generalizable and domain-specific latent features. We then use the learned dense real-valued representations as augmenting features for natural language processing systems. We empirically evaluate the proposed learning technique on the WSJ and MEDLINE domains with POS tagging systems, and on the WSJ and Brown corpora with syntactic chunking and named entity recognition systems. Our primary results show that the proposed domain adaptation method outperforms a number of comparison methods for cross-domain sequence labeling tasks.}
}
Endnote
%0 Conference Paper
%T Domain Adaptation for Sequence Labeling Tasks with a Probabilistic Language Adaptation Model
%A Min Xiao
%A Yuhong Guo
%B Proceedings of the 30th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Sanjoy Dasgupta
%E David McAllester
%F pmlr-v28-xiao13
%I PMLR
%P 293--301
%U https://proceedings.mlr.press/v28/xiao13.html
%V 28
%N 1
%X In this paper, we propose to address the problem of domain adaptation for sequence labeling tasks via distributed representation learning by using a log-bilinear language adaptation model. The proposed neural probabilistic language model simultaneously models two different but related data distributions in the source and target domains based on induced distributed representations, which encode both generalizable and domain-specific latent features. We then use the learned dense real-valued representations as augmenting features for natural language processing systems. We empirically evaluate the proposed learning technique on the WSJ and MEDLINE domains with POS tagging systems, and on the WSJ and Brown corpora with syntactic chunking and named entity recognition systems. Our primary results show that the proposed domain adaptation method outperforms a number of comparison methods for cross-domain sequence labeling tasks.
RIS
TY  - CPAPER
TI  - Domain Adaptation for Sequence Labeling Tasks with a Probabilistic Language Adaptation Model
AU  - Min Xiao
AU  - Yuhong Guo
BT  - Proceedings of the 30th International Conference on Machine Learning
DA  - 2013/02/13
ED  - Sanjoy Dasgupta
ED  - David McAllester
ID  - pmlr-v28-xiao13
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 28
IS  - 1
SP  - 293
EP  - 301
L1  - http://proceedings.mlr.press/v28/xiao13.pdf
UR  - https://proceedings.mlr.press/v28/xiao13.html
AB  - In this paper, we propose to address the problem of domain adaptation for sequence labeling tasks via distributed representation learning by using a log-bilinear language adaptation model. The proposed neural probabilistic language model simultaneously models two different but related data distributions in the source and target domains based on induced distributed representations, which encode both generalizable and domain-specific latent features. We then use the learned dense real-valued representations as augmenting features for natural language processing systems. We empirically evaluate the proposed learning technique on the WSJ and MEDLINE domains with POS tagging systems, and on the WSJ and Brown corpora with syntactic chunking and named entity recognition systems. Our primary results show that the proposed domain adaptation method outperforms a number of comparison methods for cross-domain sequence labeling tasks.
ER  -
APA
Xiao, M. & Guo, Y. (2013). Domain Adaptation for Sequence Labeling Tasks with a Probabilistic Language Adaptation Model. Proceedings of the 30th International Conference on Machine Learning, in Proceedings of Machine Learning Research 28(1):293-301. Available from https://proceedings.mlr.press/v28/xiao13.html.