Decision-Making Under Selective Labels: Optimal Finite-Domain Policies and Beyond

Dennis Wei
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:11035-11046, 2021.

Abstract

Selective labels are a common feature of high-stakes decision-making applications, referring to the lack of observed outcomes under one of the possible decisions. This paper studies the learning of decision policies in the face of selective labels, in an online setting that balances learning costs against future utility. In the homogeneous case in which individuals’ features are disregarded, the optimal decision policy is shown to be a threshold policy. The threshold becomes more stringent as more labels are collected; the rate at which this occurs is characterized. In the case of features drawn from a finite domain, the optimal policy consists of multiple homogeneous policies in parallel. For the general infinite-domain case, the homogeneous policy is extended by using a probabilistic classifier and bootstrapping to provide its inputs. In experiments on synthetic and real data, the proposed policies achieve consistently superior utility with no parameter tuning in the finite-domain case and lower parameter sensitivity in the general case.
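The homogeneous threshold policy described in the abstract can be illustrated with a toy simulation. This is only a sketch: the uniform prior, the reward/cost values, and the linear tightening schedule below are illustrative assumptions, not the paper's optimal rule.

```python
import random

def run_threshold_policy(true_p, horizon, base_threshold=0.4, tighten=0.01, seed=0):
    """Simulate a homogeneous selective-labels setting with a threshold policy.

    Accept an individual when the current estimate of the success probability
    exceeds a threshold; outcomes (labels) are observed only for accepted
    individuals. The threshold grows with the number of labels collected,
    loosely mirroring the "more stringent over time" behaviour; the specific
    linear schedule here is a placeholder, not the rate characterized in the
    paper.
    """
    rng = random.Random(seed)
    successes, labels = 0, 0
    utility = 0.0
    for _ in range(horizon):
        # Posterior-mean estimate under a uniform Beta(1, 1) prior.
        p_hat = (successes + 1) / (labels + 2)
        threshold = base_threshold + tighten * labels  # stringency increases
        if p_hat >= threshold:
            outcome = 1 if rng.random() < true_p else 0
            # Selective labels: the outcome is observed only because we accepted.
            successes += outcome
            labels += 1
            utility += outcome - 0.5  # assumed reward 1 for success, cost 0.5 per acceptance
    return labels, utility
```

With a high true success probability the simulated policy keeps accepting until the tightening threshold overtakes its estimate, after which no further labels are collected, which is the learning-cost/future-utility tension the online setting balances.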

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-wei21a,
  title     = {Decision-Making Under Selective Labels: Optimal Finite-Domain Policies and Beyond},
  author    = {Wei, Dennis},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {11035--11046},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/wei21a/wei21a.pdf},
  url       = {https://proceedings.mlr.press/v139/wei21a.html},
  abstract  = {Selective labels are a common feature of high-stakes decision-making applications, referring to the lack of observed outcomes under one of the possible decisions. This paper studies the learning of decision policies in the face of selective labels, in an online setting that balances learning costs against future utility. In the homogeneous case in which individuals' features are disregarded, the optimal decision policy is shown to be a threshold policy. The threshold becomes more stringent as more labels are collected; the rate at which this occurs is characterized. In the case of features drawn from a finite domain, the optimal policy consists of multiple homogeneous policies in parallel. For the general infinite-domain case, the homogeneous policy is extended by using a probabilistic classifier and bootstrapping to provide its inputs. In experiments on synthetic and real data, the proposed policies achieve consistently superior utility with no parameter tuning in the finite-domain case and lower parameter sensitivity in the general case.}
}
Endnote
%0 Conference Paper
%T Decision-Making Under Selective Labels: Optimal Finite-Domain Policies and Beyond
%A Dennis Wei
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-wei21a
%I PMLR
%P 11035--11046
%U https://proceedings.mlr.press/v139/wei21a.html
%V 139
%X Selective labels are a common feature of high-stakes decision-making applications, referring to the lack of observed outcomes under one of the possible decisions. This paper studies the learning of decision policies in the face of selective labels, in an online setting that balances learning costs against future utility. In the homogeneous case in which individuals' features are disregarded, the optimal decision policy is shown to be a threshold policy. The threshold becomes more stringent as more labels are collected; the rate at which this occurs is characterized. In the case of features drawn from a finite domain, the optimal policy consists of multiple homogeneous policies in parallel. For the general infinite-domain case, the homogeneous policy is extended by using a probabilistic classifier and bootstrapping to provide its inputs. In experiments on synthetic and real data, the proposed policies achieve consistently superior utility with no parameter tuning in the finite-domain case and lower parameter sensitivity in the general case.
APA
Wei, D. (2021). Decision-Making Under Selective Labels: Optimal Finite-Domain Policies and Beyond. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:11035-11046. Available from https://proceedings.mlr.press/v139/wei21a.html.