Open Problem: Learning sparse linear concepts by priming the features
Proceedings of Thirty Sixth Conference on Learning Theory, PMLR 195:5937-5942, 2023.
Sparse linear problems can be learned well with online multiplicative updates. The question is whether there are closed-form updates based on the past examples that can sample-efficiently learn such sparse linear problems as well. We show experimentally that this can be achieved by applying linear least squares, then “priming” the ith feature of the past instances by multiplying it by the ith linear least squares weight, and finally applying linear least squares a second time. However, it is an open problem whether such priming methods have provably good regret bounds when applied online.
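The two-stage priming procedure described above can be sketched as follows. This is a minimal illustration on synthetic data, assuming a noiseless sparse linear target; the data dimensions, random seed, and variable names are illustrative choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, k = 50, 100, 3            # illustrative sizes: examples, features, sparsity
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:k] = 1.0                # sparse target: only the first k features matter
y = X @ w_true

# Stage 1: ordinary linear least squares (minimum-norm solution).
w1, *_ = np.linalg.lstsq(X, y, rcond=None)

# Priming: multiply the ith feature of every past instance by the ith weight.
X_primed = X * w1               # broadcasts w1 across the rows of X

# Stage 2: linear least squares a second time, on the primed instances.
w2, *_ = np.linalg.lstsq(X_primed, y, rcond=None)

# A fresh instance x is predicted via (x * w1) @ w2,
# so the effective weight vector is the elementwise product w1 * w2.
w_eff = w1 * w2
```

The priming step rescales each coordinate by its first-pass weight, so coordinates that received small weight in the first pass are attenuated before the second fit, which pushes the effective weights toward the sparse support.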