Forward and Backward Knowledge Transfer for Sentiment Classification

Hao Wang, Bing Liu, Shuai Wang, Nianzu Ma, Yan Yang
Proceedings of The Eleventh Asian Conference on Machine Learning, PMLR 101:457-472, 2019.

Abstract

This paper studies the problem of learning a sequence of sentiment classification tasks. The knowledge learned from each task is retained and later used to help the learning of future tasks. This learning paradigm is called lifelong learning. However, existing lifelong learning methods either only transfer knowledge forward to help future learning, without going back to improve the model of a previous task, or require the training data of the previous task to retrain its model in order to exploit backward/reverse knowledge transfer. This paper studies reverse knowledge transfer in lifelong learning. It aims to improve the model of a previous task by leveraging future knowledge without retraining on that task's training data, which is a challenging problem. In this work, this is done by exploiting a key characteristic of the generative model of naïve Bayes: it is possible to improve the naïve Bayesian classifier for a task by updating its model parameters directly using the retained knowledge from other tasks. Experimental results show that the proposed method markedly outperforms existing lifelong learning baselines.
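The abstract's key observation is that naïve Bayes parameters are just (smoothed) word counts, so a previous task's classifier can be improved by adding retained counts from other tasks directly, with no retraining on the original data. The sketch below is a hypothetical illustration of that idea, not the authors' method: `merge_knowledge`, its `weight` parameter, and the two-class setup are all assumptions made for the example.

```python
# Hypothetical sketch of direct parameter updating in naive Bayes,
# as the generative model permits. Not the paper's actual algorithm.
from collections import defaultdict
import math

def train_counts(docs):
    """docs: list of (words, label) pairs with label in {'pos', 'neg'}.
    Returns per-class word counts -- the naive Bayes 'parameters'."""
    counts = {'pos': defaultdict(int), 'neg': defaultdict(int)}
    for words, label in docs:
        for w in words:
            counts[label][w] += 1
    return counts

def word_prob(counts, vocab, label, w, smoothing=1.0):
    """Laplace-smoothed P(w | label) from raw counts."""
    total = sum(counts[label].values())
    return (counts[label][w] + smoothing) / (total + smoothing * len(vocab))

def merge_knowledge(task_counts, retained_counts, weight=1.0):
    """Add (weighted) virtual counts retained from other tasks to a
    task's own counts. Because the model is generative, this updates
    the classifier without touching the task's training data."""
    merged = {'pos': defaultdict(int), 'neg': defaultdict(int)}
    for label in merged:
        for w, c in task_counts[label].items():
            merged[label][w] += c
        for w, c in retained_counts[label].items():
            merged[label][w] += weight * c
    return merged

def classify(counts, vocab, words):
    """Log-space naive Bayes decision over the two classes."""
    scores = {}
    for label in ('pos', 'neg'):
        scores[label] = sum(math.log(word_prob(counts, vocab, label, w))
                            for w in words if w in vocab)
    return max(scores, key=scores.get)
```

For example, a task trained only on `good`/`bad` reviews could classify `excellent` correctly once counts retained from other tasks are merged in, even though it never saw that word itself.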

Cite this Paper


BibTeX
@InProceedings{pmlr-v101-wang19f,
  title     = {Forward and Backward Knowledge Transfer for Sentiment Classification},
  author    = {Wang, Hao and Liu, Bing and Wang, Shuai and Ma, Nianzu and Yang, Yan},
  booktitle = {Proceedings of The Eleventh Asian Conference on Machine Learning},
  pages     = {457--472},
  year      = {2019},
  editor    = {Lee, Wee Sun and Suzuki, Taiji},
  volume    = {101},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--19 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v101/wang19f/wang19f.pdf},
  url       = {https://proceedings.mlr.press/v101/wang19f.html},
  abstract  = {This paper studies the problem of learning a sequence of sentiment classification tasks. The learned knowledge from each task is retained and later used to help future or subsequent task learning. This learning paradigm is called \textit{lifelong learning}. However, existing lifelong learning methods either only transfer knowledge forward to help future learning and do not go back to improve the model of a previous task or require the training data of the previous task to retrain its model to exploit backward/reverse knowledge transfer. This paper studies reverse knowledge transfer of lifelong learning. It aims to improve the model of a previous task by leveraging future knowledge without retraining using its training data, which is challenging now. In this work, this is done by exploiting a key characteristic of the generative model of naïve Bayes. That is, it is possible to improve the naïve Bayesian classifier for a task by improving its model parameters directly using the retained knowledge from other tasks. Experimental results show that the proposed method markedly outperforms existing lifelong learning baselines.}
}
Endnote
%0 Conference Paper
%T Forward and Backward Knowledge Transfer for Sentiment Classification
%A Hao Wang
%A Bing Liu
%A Shuai Wang
%A Nianzu Ma
%A Yan Yang
%B Proceedings of The Eleventh Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Wee Sun Lee
%E Taiji Suzuki
%F pmlr-v101-wang19f
%I PMLR
%P 457--472
%U https://proceedings.mlr.press/v101/wang19f.html
%V 101
%X This paper studies the problem of learning a sequence of sentiment classification tasks. The learned knowledge from each task is retained and later used to help future or subsequent task learning. This learning paradigm is called \textit{lifelong learning}. However, existing lifelong learning methods either only transfer knowledge forward to help future learning and do not go back to improve the model of a previous task or require the training data of the previous task to retrain its model to exploit backward/reverse knowledge transfer. This paper studies reverse knowledge transfer of lifelong learning. It aims to improve the model of a previous task by leveraging future knowledge without retraining using its training data, which is challenging now. In this work, this is done by exploiting a key characteristic of the generative model of naïve Bayes. That is, it is possible to improve the naïve Bayesian classifier for a task by improving its model parameters directly using the retained knowledge from other tasks. Experimental results show that the proposed method markedly outperforms existing lifelong learning baselines.
APA
Wang, H., Liu, B., Wang, S., Ma, N. & Yang, Y. (2019). Forward and Backward Knowledge Transfer for Sentiment Classification. Proceedings of The Eleventh Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 101:457-472. Available from https://proceedings.mlr.press/v101/wang19f.html.