Which Tasks Should Be Learned Together in Multi-task Learning?

Trevor Standley, Amir Zamir, Dawn Chen, Leonidas Guibas, Jitendra Malik, Silvio Savarese
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:9120-9132, 2020.

Abstract

Many computer vision applications require solving multiple tasks in real-time. A neural network can be trained to solve multiple tasks simultaneously using multi-task learning. This can save computation at inference time as only a single network needs to be evaluated. Unfortunately, this often leads to inferior overall performance as task objectives can compete, which consequently poses the question: which tasks should and should not be learned together in one network when employing multi-task learning? We study task cooperation and competition in several different learning settings and propose a framework for assigning tasks to a few neural networks such that cooperating tasks are computed by the same neural network, while competing tasks are computed by different networks. Our framework offers a time-accuracy trade-off and can produce better accuracy using less inference time than not only a single large multi-task neural network but also many single-task networks.
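To make the abstract's idea of a time-accuracy trade-off concrete, here is a minimal sketch (not the authors' implementation) of the kind of network-selection search such a framework implies: given candidate networks that each solve a subset of tasks, with estimated per-task performance and inference cost, pick a set of networks that covers all tasks within a time budget while maximizing performance. All names, numbers, and the brute-force enumeration below are illustrative assumptions.

from itertools import combinations

# Hypothetical candidate networks: each covers a subset of tasks and has an
# estimated per-task performance and an inference cost (illustrative values).
candidates = [
    {"tasks": {"depth"},            "perf": {"depth": 0.90},                  "cost": 1.0},
    {"tasks": {"normals"},          "perf": {"normals": 0.88},                "cost": 1.0},
    {"tasks": {"depth", "normals"}, "perf": {"depth": 0.92, "normals": 0.91}, "cost": 1.1},
    {"tasks": {"depth", "edges"},   "perf": {"depth": 0.85, "edges": 0.80},   "cost": 1.1},
    {"tasks": {"edges"},            "perf": {"edges": 0.83},                  "cost": 1.0},
]
all_tasks = {"depth", "normals", "edges"}


def best_assignment(candidates, all_tasks, budget):
    """Brute-force search over subsets of candidate networks.

    Returns the subset that covers every task within the inference-time
    budget and maximizes summed per-task performance, scoring each task
    by the best network in the subset that predicts it.
    """
    best, best_score = None, float("-inf")
    for r in range(1, len(candidates) + 1):
        for subset in combinations(candidates, r):
            covered = set().union(*(c["tasks"] for c in subset))
            cost = sum(c["cost"] for c in subset)
            if covered < all_tasks or cost > budget:
                continue  # skip subsets that miss a task or exceed the budget
            score = sum(
                max(c["perf"][t] for c in subset if t in c["tasks"])
                for t in all_tasks
            )
            if score > best_score:
                best, best_score = subset, score
    return best, best_score


if __name__ == "__main__":
    chosen, score = best_assignment(candidates, all_tasks, budget=2.2)
    print("chosen groups:", [sorted(c["tasks"]) for c in chosen], "score:", round(score, 2))

With these illustrative numbers, the search selects a shared depth+normals network plus a separate edges network: the cooperating tasks share one backbone, the competing one gets its own, and the total cost stays under the budget.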

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-standley20a,
  title     = {Which Tasks Should Be Learned Together in Multi-task Learning?},
  author    = {Standley, Trevor and Zamir, Amir and Chen, Dawn and Guibas, Leonidas and Malik, Jitendra and Savarese, Silvio},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {9120--9132},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/standley20a/standley20a.pdf},
  url       = {https://proceedings.mlr.press/v119/standley20a.html},
  abstract  = {Many computer vision applications require solving multiple tasks in real-time. A neural network can be trained to solve multiple tasks simultaneously using multi-task learning. This can save computation at inference time as only a single network needs to be evaluated. Unfortunately, this often leads to inferior overall performance as task objectives can compete, which consequently poses the question: which tasks should and should not be learned together in one network when employing multi-task learning? We study task cooperation and competition in several different learning settings and propose a framework for assigning tasks to a few neural networks such that cooperating tasks are computed by the same neural network, while competing tasks are computed by different networks. Our framework offers a time-accuracy trade-off and can produce better accuracy using less inference time than not only a single large multi-task neural network but also many single-task networks.}
}
Endnote
%0 Conference Paper
%T Which Tasks Should Be Learned Together in Multi-task Learning?
%A Trevor Standley
%A Amir Zamir
%A Dawn Chen
%A Leonidas Guibas
%A Jitendra Malik
%A Silvio Savarese
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-standley20a
%I PMLR
%P 9120--9132
%U https://proceedings.mlr.press/v119/standley20a.html
%V 119
%X Many computer vision applications require solving multiple tasks in real-time. A neural network can be trained to solve multiple tasks simultaneously using multi-task learning. This can save computation at inference time as only a single network needs to be evaluated. Unfortunately, this often leads to inferior overall performance as task objectives can compete, which consequently poses the question: which tasks should and should not be learned together in one network when employing multi-task learning? We study task cooperation and competition in several different learning settings and propose a framework for assigning tasks to a few neural networks such that cooperating tasks are computed by the same neural network, while competing tasks are computed by different networks. Our framework offers a time-accuracy trade-off and can produce better accuracy using less inference time than not only a single large multi-task neural network but also many single-task networks.
APA
Standley, T., Zamir, A., Chen, D., Guibas, L., Malik, J. & Savarese, S. (2020). Which Tasks Should Be Learned Together in Multi-task Learning? Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:9120-9132. Available from https://proceedings.mlr.press/v119/standley20a.html.