Open Problem: Learning sparse linear concepts by priming the features

Manfred K. Warmuth, Ehsan Amid
Proceedings of Thirty Sixth Conference on Learning Theory, PMLR 195:5937-5942, 2023.

Abstract

Sparse linear problems can be learned well with online multiplicative updates. The question is whether there are closed-form updates based on the past examples that can learn such sparse linear problems sample-efficiently as well. We show experimentally that this can be achieved by applying linear least squares, then “priming” the ith features of the past instances by multiplying them by the ith linear least squares weight, and finally applying linear least squares a second time. However, it remains an open problem whether such priming methods have provably good regret bounds when applied online.
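The following is a minimal NumPy sketch of the two-pass priming procedure as the abstract describes it: fit linear least squares once, rescale each feature column by its fitted weight, and fit least squares again on the rescaled instances. The variable names, problem sizes, and noiseless setup are illustrative assumptions, not the paper's actual experimental configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sparse linear target: d features, only k of them relevant.
d, k, n = 100, 3, 60
w_star = np.zeros(d)
w_star[rng.choice(d, size=k, replace=False)] = 1.0

X = rng.standard_normal((n, d))   # past instances
y = X @ w_star                    # noiseless labels (an assumption)

# Pass 1: linear least squares (minimum-norm solution, since n < d).
w1, *_ = np.linalg.lstsq(X, y, rcond=None)

# Priming: multiply the ith feature of every past instance
# by the ith least squares weight (column-wise scaling).
X_primed = X * w1

# Pass 2: linear least squares on the primed instances.
w2, *_ = np.linalg.lstsq(X_primed, y, rcond=None)

# A fresh instance x is primed the same way, so the combined
# predictor is x @ (w1 * w2).
w_hat = w1 * w2
x_new = rng.standard_normal(d)
print("prediction:", x_new @ w_hat, " target:", x_new @ w_star)
```

In this sketch the first pass spreads weight over all features, while priming amplifies the coordinates that carried signal and shrinks the rest, so the second pass concentrates on the sparse support.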

Cite this Paper


BibTeX
@InProceedings{pmlr-v195-warmuth23a,
  title     = {Open Problem: Learning sparse linear concepts by priming the features},
  author    = {Warmuth, Manfred K. and Amid, Ehsan},
  booktitle = {Proceedings of Thirty Sixth Conference on Learning Theory},
  pages     = {5937--5942},
  year      = {2023},
  editor    = {Neu, Gergely and Rosasco, Lorenzo},
  volume    = {195},
  series    = {Proceedings of Machine Learning Research},
  month     = {12--15 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v195/warmuth23a/warmuth23a.pdf},
  url       = {https://proceedings.mlr.press/v195/warmuth23a.html},
  abstract  = {Sparse linear problems can be learned well with online multiplicative updates. The question is whether there are closed-form updates based on the past examples that can learn such sparse linear problems sample-efficiently as well. We show experimentally that this can be achieved by applying linear least squares, then “priming” the ith features of the past instances by multiplying them by the ith linear least squares weight, and finally applying linear least squares a second time. However, it remains an open problem whether such priming methods have provably good regret bounds when applied online.}
}
Endnote
%0 Conference Paper
%T Open Problem: Learning sparse linear concepts by priming the features
%A Manfred K. Warmuth
%A Ehsan Amid
%B Proceedings of Thirty Sixth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2023
%E Gergely Neu
%E Lorenzo Rosasco
%F pmlr-v195-warmuth23a
%I PMLR
%P 5937--5942
%U https://proceedings.mlr.press/v195/warmuth23a.html
%V 195
%X Sparse linear problems can be learned well with online multiplicative updates. The question is whether there are closed-form updates based on the past examples that can learn such sparse linear problems sample-efficiently as well. We show experimentally that this can be achieved by applying linear least squares, then “priming” the ith features of the past instances by multiplying them by the ith linear least squares weight, and finally applying linear least squares a second time. However, it remains an open problem whether such priming methods have provably good regret bounds when applied online.
APA
Warmuth, M.K. & Amid, E. (2023). Open Problem: Learning sparse linear concepts by priming the features. Proceedings of Thirty Sixth Conference on Learning Theory, in Proceedings of Machine Learning Research 195:5937-5942. Available from https://proceedings.mlr.press/v195/warmuth23a.html.