Regularization in directable environments with application to Tetris

Jan Malte Lichtenberg, Özgür Şimşek
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:3953-3962, 2019.

Abstract

Learning from small data sets is difficult in the absence of specific domain knowledge. We present a regularized linear model called STEW that benefits from a generic and prevalent form of prior knowledge: feature directions. STEW shrinks weights toward each other, converging to an equal-weights solution in the limit of infinite regularization. We provide theoretical results on the equal-weights solution that explain how STEW can productively trade off bias and variance. Across a wide range of learning problems, including Tetris, STEW outperformed existing linear models, including ridge regression, the Lasso, and the non-negative Lasso, when feature directions were known. The model proved to be robust to unreliable (or absent) feature directions, still outperforming alternative models under diverse conditions. Our results in Tetris were obtained using a novel approach to learning in sequential decision environments based on multinomial logistic regression.
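The abstract's description of STEW (shrinkage toward equal weights) suggests a penalty on pairwise weight differences. As an illustrative sketch only, assuming a least-squares loss and a quadratic pairwise penalty (the paper's exact formulation may differ), and assuming features have first been sign-aligned using the known feature directions, such an estimator can be written as

\[
\hat{w}_{\lambda} = \arg\min_{w \in \mathbb{R}^p} \; \lVert y - Xw \rVert_2^2 + \lambda \sum_{i < j} (w_i - w_j)^2 .
\]

As \(\lambda \to \infty\), the penalty forces all coordinates of \(\hat{w}_{\lambda}\) toward a common value, recovering the equal-weights solution described in the abstract.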

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-lichtenberg19a,
  title     = {Regularization in directable environments with application to Tetris},
  author    = {Lichtenberg, Jan Malte and {\c{S}}im{\c{s}}ek, {\"{O}}zg{\"{u}}r},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {3953--3962},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/lichtenberg19a/lichtenberg19a.pdf},
  url       = {https://proceedings.mlr.press/v97/lichtenberg19a.html},
  abstract  = {Learning from small data sets is difficult in the absence of specific domain knowledge. We present a regularized linear model called STEW that benefits from a generic and prevalent form of prior knowledge: feature directions. STEW shrinks weights toward each other, converging to an equal-weights solution in the limit of infinite regularization. We provide theoretical results on the equal-weights solution that explain how STEW can productively trade off bias and variance. Across a wide range of learning problems, including Tetris, STEW outperformed existing linear models, including ridge regression, the Lasso, and the non-negative Lasso, when feature directions were known. The model proved to be robust to unreliable (or absent) feature directions, still outperforming alternative models under diverse conditions. Our results in Tetris were obtained using a novel approach to learning in sequential decision environments based on multinomial logistic regression.}
}
Endnote
%0 Conference Paper
%T Regularization in directable environments with application to Tetris
%A Jan Malte Lichtenberg
%A Özgür Şimşek
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-lichtenberg19a
%I PMLR
%P 3953--3962
%U https://proceedings.mlr.press/v97/lichtenberg19a.html
%V 97
%X Learning from small data sets is difficult in the absence of specific domain knowledge. We present a regularized linear model called STEW that benefits from a generic and prevalent form of prior knowledge: feature directions. STEW shrinks weights toward each other, converging to an equal-weights solution in the limit of infinite regularization. We provide theoretical results on the equal-weights solution that explain how STEW can productively trade off bias and variance. Across a wide range of learning problems, including Tetris, STEW outperformed existing linear models, including ridge regression, the Lasso, and the non-negative Lasso, when feature directions were known. The model proved to be robust to unreliable (or absent) feature directions, still outperforming alternative models under diverse conditions. Our results in Tetris were obtained using a novel approach to learning in sequential decision environments based on multinomial logistic regression.
APA
Lichtenberg, J.M. & Şimşek, Ö. (2019). Regularization in directable environments with application to Tetris. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:3953-3962. Available from https://proceedings.mlr.press/v97/lichtenberg19a.html.