On Learning Decision Heuristics

Özgür Şimşek, Marcus Buckmann
Proceedings of the NIPS 2016 Workshop on Imperfect Decision Makers, PMLR 58:75-85, 2017.

Abstract

Decision heuristics are simple models of human and animal decision making that use few pieces of information and combine the pieces in simple ways, for example, by giving them equal weight or by considering them sequentially. We examine how decision heuristics can be learned—and modified—as additional training examples become available. In particular, we examine how additional training examples change the variance in parameter estimates of the heuristic. Our analysis suggests new decision heuristics, including a family of heuristics that generalizes two well-known families: lexicographic heuristics and tallying. We evaluate the empirical performance of these heuristics in a large, diverse collection of data sets. The supplementary material provides details on the random forest implementation and describes the 56 public data sets used in the empirical analysis.
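
For readers unfamiliar with the two heuristic families named above, the following minimal Python sketch illustrates them in generic form. It is not code from the paper; the cue values, the cue search order, and the cue directions are illustrative assumptions. Tallying gives every cue equal weight, whereas a lexicographic heuristic considers cues one at a time and decides on the first cue that discriminates between the two options.

    import numpy as np

    def tallying(cues_a, cues_b, directions):
        # Tallying: every cue gets equal (unit) weight.
        # cues_a, cues_b: binary cue vectors for options A and B.
        # directions: +1 if a larger cue value favours an option, -1 otherwise.
        # Returns 1 if A is chosen, -1 if B is chosen, 0 if the heuristic must guess.
        score = np.sum(directions * (np.asarray(cues_a) - np.asarray(cues_b)))
        return int(np.sign(score))

    def lexicographic(cues_a, cues_b, cue_order, directions):
        # Lexicographic heuristic: examine cues in a fixed order and decide
        # on the first cue that discriminates between the two options.
        for i in cue_order:
            diff = directions[i] * (cues_a[i] - cues_b[i])
            if diff != 0:
                return int(np.sign(diff))
        return 0  # no cue discriminates: guess

    # Hypothetical example with three binary cues.
    a, b = [1, 0, 1], [0, 1, 1]
    dirs = np.array([1, 1, 1])
    print(tallying(a, b, dirs))                  # 0: the tally is tied
    print(lexicographic(a, b, [2, 0, 1], dirs))  # 1: cue 2 ties, cue 0 favours A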

Cite this Paper


BibTeX
@InProceedings{pmlr-v58-simsek17a,
  title     = {On Learning Decision Heuristics},
  author    = {Şimşek, Özgür and Buckmann, Marcus},
  booktitle = {Proceedings of the NIPS 2016 Workshop on Imperfect Decision Makers},
  pages     = {75--85},
  year      = {2017},
  editor    = {Guy, Tatiana V. and Kárný, Miroslav and Rios-Insua, David and Wolpert, David H.},
  volume    = {58},
  series    = {Proceedings of Machine Learning Research},
  month     = {09 Dec},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v58/simsek17a/simsek17a.pdf},
  url       = {https://proceedings.mlr.press/v58/simsek17a.html}
}
Endnote
%0 Conference Paper
%T On Learning Decision Heuristics
%A Özgür Şimşek
%A Marcus Buckmann
%B Proceedings of the NIPS 2016 Workshop on Imperfect Decision Makers
%C Proceedings of Machine Learning Research
%D 2017
%E Tatiana V. Guy
%E Miroslav Kárný
%E David Rios-Insua
%E David H. Wolpert
%F pmlr-v58-simsek17a
%I PMLR
%P 75--85
%U https://proceedings.mlr.press/v58/simsek17a.html
%V 58
APA
Şimşek, Ö. & Buckmann, M. (2017). On Learning Decision Heuristics. Proceedings of the NIPS 2016 Workshop on Imperfect Decision Makers, in Proceedings of Machine Learning Research 58:75-85. Available from https://proceedings.mlr.press/v58/simsek17a.html.
