Generalize Across Tasks: Efficient Algorithms for Linear Representation Learning

Brian Bullins, Elad Hazan, Adam Kalai, Roi Livni
Proceedings of the 30th International Conference on Algorithmic Learning Theory, PMLR 98:235-246, 2019.

Abstract

We present provable algorithms for learning linear representations which are trained in a supervised fashion across a number of tasks. Furthermore, whereas previous methods in the context of multitask learning only allow for generalization within tasks that have already been observed, our representations are both efficiently learnable and accompanied by generalization guarantees to unseen tasks. Our method relies on a certain convex relaxation of a non-convex problem, making it amenable to online learning procedures. We further ensure that a low-rank representation is maintained, and we allow for various trade-offs between sample complexity and per-iteration cost, depending on the choice of algorithm.
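Illustration (not from the paper): the sketch below is a hedged, minimal example of the general technique the abstract describes, namely learning a shared low-rank linear representation across several supervised tasks by relaxing the rank constraint on the stacked weight matrix to a nuclear-norm ball and optimizing with a Frank-Wolfe-style update, whose iterates remain low-rank. All function and parameter names (learn_shared_representation, nuclear_radius, etc.) are hypothetical, and the exact relaxation and online procedure in the paper may differ.

    # Minimal sketch: shared low-rank linear representation across T tasks via a
    # nuclear-norm convex relaxation, optimized with Frank-Wolfe (rank-1 updates).
    import numpy as np

    def learn_shared_representation(Xs, ys, nuclear_radius=5.0, iters=200, rank=None):
        """Xs, ys: lists of per-task data (n_t x d arrays) and targets (n_t,)."""
        d, T = Xs[0].shape[1], len(Xs)
        W = np.zeros((d, T))                      # column t = linear predictor for task t
        for k in range(iters):
            # Gradient of the summed per-task squared losses with respect to W.
            G = np.zeros_like(W)
            for t in range(T):
                G[:, t] = Xs[t].T @ (Xs[t] @ W[:, t] - ys[t]) / len(ys[t])
            # Frank-Wolfe linear minimization over the nuclear-norm ball:
            # the minimizer is -radius times the top singular vector pair of G (rank 1).
            u, _, vt = np.linalg.svd(G, full_matrices=False)
            S = -nuclear_radius * np.outer(u[:, 0], vt[0])
            eta = 2.0 / (k + 2)
            W = (1 - eta) * W + eta * S
        # Extract a rank-r representation: all tasks share the subspace spanned by U_r.
        U, s, Vt = np.linalg.svd(W, full_matrices=False)
        r = rank or int(np.sum(s > 1e-6))
        return U[:, :r], U[:, :r].T @ W           # shared map (d x r), per-task heads (r x T)

    # Toy usage: five tasks whose true predictors lie in a common 2-dimensional subspace.
    rng = np.random.default_rng(0)
    B = rng.standard_normal((20, 2))              # shared representation, d = 20, r = 2
    Xs, ys = [], []
    for _ in range(5):
        X = rng.standard_normal((100, 20))
        w = B @ rng.standard_normal(2)
        Xs.append(X); ys.append(X @ w + 0.01 * rng.standard_normal(100))
    U_r, heads = learn_shared_representation(Xs, ys)
    print(U_r.shape, heads.shape)                 # (20, r), (r, 5)

Because each Frank-Wolfe step adds only a rank-1 term, the iterates stay low-rank by construction, which is one way to realize the low-rank maintenance and the sample-complexity/per-iteration-cost trade-offs mentioned in the abstract.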

Cite this Paper

BibTeX
@InProceedings{pmlr-v98-bullins19a,
  title     = {Generalize Across Tasks: Efficient Algorithms for Linear Representation Learning},
  author    = {Bullins, Brian and Hazan, Elad and Kalai, Adam and Livni, Roi},
  booktitle = {Proceedings of the 30th International Conference on Algorithmic Learning Theory},
  pages     = {235--246},
  year      = {2019},
  editor    = {Garivier, Aurélien and Kale, Satyen},
  volume    = {98},
  series    = {Proceedings of Machine Learning Research},
  month     = {22--24 Mar},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v98/bullins19a/bullins19a.pdf},
  url       = {https://proceedings.mlr.press/v98/bullins19a.html},
  abstract  = {We present provable algorithms for learning linear representations which are trained in a supervised fashion across a number of tasks. Furthermore, whereas previous methods in the context of multitask learning only allow for generalization within tasks that have already been observed, our representations are both efficiently learnable and accompanied by generalization guarantees to unseen tasks. Our method relies on a certain convex relaxation of a non-convex problem, making it amenable to online learning procedures. We further ensure that a low-rank representation is maintained, and we allow for various trade-offs between sample complexity and per-iteration cost, depending on the choice of algorithm.}
}
Endnote
%0 Conference Paper
%T Generalize Across Tasks: Efficient Algorithms for Linear Representation Learning
%A Brian Bullins
%A Elad Hazan
%A Adam Kalai
%A Roi Livni
%B Proceedings of the 30th International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2019
%E Aurélien Garivier
%E Satyen Kale
%F pmlr-v98-bullins19a
%I PMLR
%P 235--246
%U https://proceedings.mlr.press/v98/bullins19a.html
%V 98
%X We present provable algorithms for learning linear representations which are trained in a supervised fashion across a number of tasks. Furthermore, whereas previous methods in the context of multitask learning only allow for generalization within tasks that have already been observed, our representations are both efficiently learnable and accompanied by generalization guarantees to unseen tasks. Our method relies on a certain convex relaxation of a non-convex problem, making it amenable to online learning procedures. We further ensure that a low-rank representation is maintained, and we allow for various trade-offs between sample complexity and per-iteration cost, depending on the choice of algorithm.
APA
Bullins, B., Hazan, E., Kalai, A. & Livni, R. (2019). Generalize Across Tasks: Efficient Algorithms for Linear Representation Learning. Proceedings of the 30th International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 98:235-246. Available from https://proceedings.mlr.press/v98/bullins19a.html.