Adaptive Adversarial Multi-task Representation Learning

Yuren Mao, Weiwei Liu, Xuemin Lin
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:6724-6733, 2020.

Abstract

Adversarial Multi-task Representation Learning (AMTRL) methods can boost the performance of Multi-task Representation Learning (MTRL) models. However, the theoretical mechanism behind AMTRL remains underexplored. To fill this gap, we study the generalization error bound of AMTRL through the lens of Lagrangian duality. Based on this duality, we propose a novel adaptive AMTRL algorithm that improves upon existing AMTRL methods. Extensive experiments support our theoretical analysis and validate the superiority of the proposed algorithm.
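To make the Lagrangian-duality framing concrete, the sketch below shows the general idea of adaptive task weighting via projected dual ascent: task losses act as constraints, and each task's dual variable (its weight) grows when that task's loss exceeds a tolerance and shrinks otherwise. This is an illustrative sketch of the generic technique, not the paper's algorithm; the names `dual_ascent_weights`, `eps`, and `step` are hypothetical choices for this example.

```python
def dual_ascent_weights(task_losses, lambdas, eps=0.1, step=0.5):
    """One projected dual-ascent step on per-task weights.

    For each task i: lambda_i <- max(0, lambda_i + step * (L_i - eps)),
    so tasks whose loss L_i exceeds the tolerance eps gain weight,
    while tasks already below eps lose weight (clipped at zero).
    """
    return [max(0.0, lam + step * (loss - eps))
            for loss, lam in zip(task_losses, lambdas)]

# Toy run: task 0 has a high loss (0.9), task 1 is already below eps (0.05).
weights = dual_ascent_weights([0.9, 0.05], [1.0, 1.0])
# -> [1.4, 0.975]: the harder task's weight grows, the easier one's shrinks.
```

In a training loop, the representation parameters would be updated by gradient descent on the weighted loss between dual-ascent steps, alternating the primal and dual updates.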

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-mao20a,
  title     = {Adaptive Adversarial Multi-task Representation Learning},
  author    = {Mao, Yuren and Liu, Weiwei and Lin, Xuemin},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {6724--6733},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/mao20a/mao20a.pdf},
  url       = {https://proceedings.mlr.press/v119/mao20a.html},
  abstract  = {Adversarial Multi-task Representation Learning (AMTRL) methods can boost the performance of Multi-task Representation Learning (MTRL) models. However, the theoretical mechanism behind AMTRL remains underexplored. To fill this gap, we study the generalization error bound of AMTRL through the lens of Lagrangian duality. Based on this duality, we propose a novel adaptive AMTRL algorithm that improves upon existing AMTRL methods. Extensive experiments support our theoretical analysis and validate the superiority of the proposed algorithm.}
}
Endnote
%0 Conference Paper
%T Adaptive Adversarial Multi-task Representation Learning
%A Yuren Mao
%A Weiwei Liu
%A Xuemin Lin
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-mao20a
%I PMLR
%P 6724--6733
%U https://proceedings.mlr.press/v119/mao20a.html
%V 119
%X Adversarial Multi-task Representation Learning (AMTRL) methods can boost the performance of Multi-task Representation Learning (MTRL) models. However, the theoretical mechanism behind AMTRL remains underexplored. To fill this gap, we study the generalization error bound of AMTRL through the lens of Lagrangian duality. Based on this duality, we propose a novel adaptive AMTRL algorithm that improves upon existing AMTRL methods. Extensive experiments support our theoretical analysis and validate the superiority of the proposed algorithm.
APA
Mao, Y., Liu, W. & Lin, X. (2020). Adaptive Adversarial Multi-task Representation Learning. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:6724-6733. Available from https://proceedings.mlr.press/v119/mao20a.html.