Generalize Across Tasks: Efficient Algorithms for Linear Representation Learning
Proceedings of the 30th International Conference on Algorithmic Learning Theory, PMLR 98:235-246, 2019.
Abstract
We present provable algorithms for learning linear representations that are trained in a supervised
fashion across a number of tasks. Whereas previous multi-task learning methods only allow for generalization within tasks that have already been observed, our
representations are both efficiently learnable and accompanied by generalization guarantees for
unseen tasks. Our method relies on a certain convex relaxation of a non-convex problem, making
it amenable to online learning procedures. We further ensure that a low-rank representation is
maintained, and we allow for various trade-offs between sample complexity and per-iteration cost,
depending on the choice of algorithm.
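
As a concrete illustration only (not the paper's algorithm), the sketch below fits several linear regression tasks under a shared low-rank structure by relaxing the rank constraint to a nuclear-norm penalty, solving the resulting convex problem with proximal gradient descent, and reusing the learned subspace on an unseen task. All function and variable names (fit_multitask, svt, d, T, k, lam, step) are illustrative assumptions, not from the paper.

```python
# Minimal sketch: multi-task linear regression with a shared low-rank structure,
# via the standard nuclear-norm (trace-norm) convex relaxation of a rank constraint.
# This is an assumed, generic formulation for illustration, not the paper's method.
import numpy as np

def svt(W, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def fit_multitask(Xs, ys, lam=1.0, step=None, iters=500):
    """Minimize sum_t ||X_t w_t - y_t||^2 / (2 n_t) + lam * ||W||_* over W = [w_1 ... w_T]."""
    d, T = Xs[0].shape[1], len(Xs)
    W = np.zeros((d, T))
    if step is None:
        # crude step size from the largest per-task Lipschitz constant
        step = 1.0 / max(np.linalg.norm(X, 2) ** 2 / X.shape[0] for X in Xs)
    for _ in range(iters):
        G = np.zeros_like(W)
        for t, (X, y) in enumerate(zip(Xs, ys)):
            G[:, t] = X.T @ (X @ W[:, t] - y) / X.shape[0]   # per-task least-squares gradient
        W = svt(W - step * G, step * lam)                     # proximal gradient step
    return W

# Synthetic tasks sharing a k-dimensional linear representation B.
rng = np.random.default_rng(0)
d, T, k, n = 20, 10, 3, 50
B = np.linalg.qr(rng.standard_normal((d, k)))[0]              # shared subspace (hypothetical)
Xs = [rng.standard_normal((n, d)) for _ in range(T)]
ys = [X @ (B @ rng.standard_normal(k)) + 0.01 * rng.standard_normal(n) for X in Xs]

W = fit_multitask(Xs, ys, lam=0.1)
B_hat = np.linalg.svd(W, full_matrices=False)[0][:, :k]       # learned representation

# Transfer to an unseen task: regress only in the learned k-dimensional subspace.
X_new = rng.standard_normal((n, d))
y_new = X_new @ (B @ rng.standard_normal(k))
alpha, *_ = np.linalg.lstsq(X_new @ B_hat, y_new, rcond=None)
w_new = B_hat @ alpha
```

Singular value thresholding is the proximal operator of the nuclear norm, which is what makes the relaxed problem convex and compatible with simple first-order (and, in principle, online) updates; fitting an unseen task then requires estimating only k coefficients rather than d.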