Multi-Task Learning with User Preferences: Gradient Descent with Controlled Ascent in Pareto Optimization

Debabrata Mahapatra, Vaibhav Rajan
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:6597-6607, 2020.

Abstract

Multi-Task Learning (MTL) is a well-established paradigm for jointly learning models for multiple correlated tasks. Often the tasks conflict, requiring trade-offs between them during optimization. In such cases, MTL methods based on multi-objective optimization can be used to find one or more Pareto optimal solutions. A common requirement in MTL applications, which cannot be addressed by these methods, is to find a solution satisfying user-specified preferences with respect to task-specific losses. We advance the state of the art by developing the first gradient-based multi-objective MTL algorithm to solve this problem. Our unique approach combines multiple gradient descent with carefully controlled ascent to traverse the Pareto front in a principled manner, which also makes it robust to initialization. The scalability of our algorithm enables its use in large-scale deep networks for MTL. Assuming only differentiability of the task-specific loss functions, we provide theoretical guarantees for convergence. Our experiments show that our algorithm outperforms the best competing methods on benchmark datasets.
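As a rough illustration of what preference-guided Pareto optimization means (this is not the paper's descent-with-controlled-ascent algorithm, just a simplified stand-in), the sketch below runs a subgradient method on the weighted Chebyshev scalarization max_i r_i l_i of two toy quadratic losses. At the minimum of this scalarization the weighted losses balance (r_1 l_1 = r_2 l_2), so the preference vector r controls where on the Pareto front the solution lands. All names, constants, and the toy losses are illustrative assumptions.

```python
import numpy as np

# Two conflicting quadratic task losses sharing parameters x:
#   l1(x) = ||x - a||^2 is minimized at a, l2(x) = ||x - b||^2 at b,
# so every x on the segment between a and b is Pareto optimal.
a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def losses(x):
    return np.array([np.sum((x - a) ** 2), np.sum((x - b) ** 2)])

def grads(x):
    return np.stack([2.0 * (x - a), 2.0 * (x - b)])

# Preference vector: a larger r_i demands a smaller loss l_i, because the
# minimizer of max_i r_i * l_i equalizes the weighted losses r_i * l_i.
r = np.array([2.0, 1.0])  # ask for l1 to end up roughly half of l2

x = np.array([0.8, 0.8])  # arbitrary initialization
for k in range(5000):
    l, g = losses(x), grads(x)
    i = int(np.argmax(r * l))                    # currently worst weighted task
    x = x - 0.1 / np.sqrt(k + 1) * r[i] * g[i]   # decaying subgradient step

l = losses(x)
print(l, r * l)  # the weighted losses r_i * l_i are approximately equal
```

With r = (2, 1) the iterate settles near the Pareto point where l1 ≈ l2 / 2; changing r moves the solution along the front, which is the behavior the user preferences in the paper are meant to control.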

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-mahapatra20a,
  title     = {Multi-Task Learning with User Preferences: Gradient Descent with Controlled Ascent in Pareto Optimization},
  author    = {Mahapatra, Debabrata and Rajan, Vaibhav},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {6597--6607},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/mahapatra20a/mahapatra20a.pdf},
  url       = {https://proceedings.mlr.press/v119/mahapatra20a.html},
  abstract  = {Multi-Task Learning (MTL) is a well-established paradigm for jointly learning models for multiple correlated tasks. Often the tasks conflict, requiring trade-offs between them during optimization. In such cases, MTL methods based on multi-objective optimization can be used to find one or more Pareto optimal solutions. A common requirement in MTL applications, which cannot be addressed by these methods, is to find a solution satisfying user-specified preferences with respect to task-specific losses. We advance the state of the art by developing the first gradient-based multi-objective MTL algorithm to solve this problem. Our unique approach combines multiple gradient descent with carefully controlled ascent to traverse the Pareto front in a principled manner, which also makes it robust to initialization. The scalability of our algorithm enables its use in large-scale deep networks for MTL. Assuming only differentiability of the task-specific loss functions, we provide theoretical guarantees for convergence. Our experiments show that our algorithm outperforms the best competing methods on benchmark datasets.}
}
Endnote
%0 Conference Paper
%T Multi-Task Learning with User Preferences: Gradient Descent with Controlled Ascent in Pareto Optimization
%A Debabrata Mahapatra
%A Vaibhav Rajan
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-mahapatra20a
%I PMLR
%P 6597--6607
%U https://proceedings.mlr.press/v119/mahapatra20a.html
%V 119
%X Multi-Task Learning (MTL) is a well-established paradigm for jointly learning models for multiple correlated tasks. Often the tasks conflict, requiring trade-offs between them during optimization. In such cases, MTL methods based on multi-objective optimization can be used to find one or more Pareto optimal solutions. A common requirement in MTL applications, which cannot be addressed by these methods, is to find a solution satisfying user-specified preferences with respect to task-specific losses. We advance the state of the art by developing the first gradient-based multi-objective MTL algorithm to solve this problem. Our unique approach combines multiple gradient descent with carefully controlled ascent to traverse the Pareto front in a principled manner, which also makes it robust to initialization. The scalability of our algorithm enables its use in large-scale deep networks for MTL. Assuming only differentiability of the task-specific loss functions, we provide theoretical guarantees for convergence. Our experiments show that our algorithm outperforms the best competing methods on benchmark datasets.
APA
Mahapatra, D. & Rajan, V. (2020). Multi-Task Learning with User Preferences: Gradient Descent with Controlled Ascent in Pareto Optimization. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:6597-6607. Available from https://proceedings.mlr.press/v119/mahapatra20a.html.