Excess risk bounds for multitask learning with trace norm regularization
Proceedings of the 26th Annual Conference on Learning Theory, PMLR 30:55-76, 2013.
Abstract
Trace norm regularization is a popular method of multitask learning. We give excess risk bounds with explicit dependence on the number of tasks, the number of examples per task, and properties of the data distribution. The bounds are independent of the dimension of the input space, which may be infinite as in the case of reproducing kernel Hilbert spaces. As a byproduct of the proof, we obtain bounds on the expected norm of sums of random positive semidefinite matrices with subexponential moments.
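For orientation, trace norm regularized multitask learning is commonly written in the following penalized form; the loss \(\ell\), the regularization parameter \(\lambda\), and the data \((x_{ti}, y_{ti})\) are generic placeholders rather than notation fixed by this abstract, and the paper's precise setting may instead use an equivalent constrained version. With \(T\) tasks and \(m\) examples per task, one minimizes over the weight matrix \(W = (w_1, \dots, w_T)\)

\[
\frac{1}{Tm}\sum_{t=1}^{T}\sum_{i=1}^{m} \ell\bigl(\langle w_t, x_{ti}\rangle,\, y_{ti}\bigr) \;+\; \lambda\,\|W\|_{\mathrm{tr}},
\]

where \(\|W\|_{\mathrm{tr}}\) denotes the trace (nuclear) norm, the sum of the singular values of \(W\). The penalty couples the tasks by encouraging \(W\) to have low rank, so that the task weight vectors share a common low-dimensional subspace.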