Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, PMLR 15:242-250, 2011.
Abstract
This paper studies optimal price learning for one or more items. We introduce the Schrödinger price experiment (SPE), which superimposes classical price experiments using lotteries and thereby extracts more information from each customer interaction. If buyers are perfectly rational, we show that there exist SPEs that, in the limit of infinite superposition, learn optimally and exploit optimally. We refer to the resulting mechanism as the hopeful mechanism (HM) since, although it is incentive compatible, buyers can deviate with extreme consequences for the seller at very little cost to themselves. For real-world settings we propose a robust version of the approach, which takes the form of a Markov decision process where the actions are functions. We provide approximate policies motivated by the best of sampled set (BOSS) algorithm coupled with approximate Bayesian inference. Numerical studies show that the proposed method significantly increases seller revenue compared to classical price experimentation, even for the single-item case.
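The SPE and HM mechanisms are defined in the paper itself; as a rough point of reference, the sketch below illustrates only the BOSS-style ingredient mentioned in the abstract: drawing several posterior samples of a demand model and posting the price that looks best across them, for a single item with Bernoulli demand and Beta priors. The candidate prices, priors, demand curve and sample count are assumptions made purely for illustration and are not taken from the paper.

```python
# Illustrative sketch (not the paper's SPE/HM mechanism): BOSS-style price
# selection for a single item with Bernoulli demand and Beta priors.
# All constants below are assumed for the purpose of the example.
import numpy as np

rng = np.random.default_rng(0)
prices = np.array([4.0, 6.0, 8.0, 10.0])   # candidate posted prices (assumed)
alpha = np.ones(len(prices))                # Beta prior: observed purchases
beta = np.ones(len(prices))                 # Beta prior: observed refusals

def true_buy_prob(p):
    # Hypothetical ground-truth demand curve, used only to simulate buyers.
    return float(np.clip(1.0 - 0.1 * p, 0.0, 1.0))

K = 5           # posterior samples per customer ("best of sampled set")
revenue = 0.0
n_customers = 2000
for _ in range(n_customers):
    # Draw K demand models from the posterior; score each price optimistically
    # by its best sampled expected revenue, then post the best-scoring price.
    samples = rng.beta(alpha[:, None], beta[:, None], size=(len(prices), K))
    i = int(np.argmax((prices[:, None] * samples).max(axis=1)))
    sold = rng.random() < true_buy_prob(prices[i])
    alpha[i] += sold
    beta[i] += 1 - sold
    revenue += prices[i] * sold

print(f"average revenue per customer: {revenue / n_customers:.2f}")
```

With K = 1 this reduces to posterior-sampling over posted prices, i.e., a classical one-price-per-customer experiment; the paper's SPE instead superimposes such classical experiments using lotteries so that each interaction reveals more about the buyer's valuation.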
@InProceedings{pmlr-v15-dance11a,
title = {Optimal and Robust Price Experimentation: Learning by Lottery},
author = {Christopher Dance and Onno Zoeter},
booktitle = {Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics},
pages = {242--250},
year = {2011},
editor = {Geoffrey Gordon and David Dunson and Miroslav Dudík},
volume = {15},
series = {Proceedings of Machine Learning Research},
address = {Fort Lauderdale, FL, USA},
month = {11--13 Apr},
publisher = {PMLR},
pdf = {http://proceedings.mlr.press/v15/dance11a/dance11a.pdf},
url = {http://proceedings.mlr.press/v15/dance11a.html},
abstract = {This paper studies optimal price learning for one or more items. We introduce the Schr\"{o}dinger price experiment (SPE), which superimposes classical price experiments using lotteries and thereby extracts more information from each customer interaction. If buyers are perfectly rational, we show that there exist SPEs that, in the limit of infinite superposition, learn optimally \emph{and} exploit optimally. We refer to the resulting mechanism as the hopeful mechanism (HM) since, although it is incentive compatible, buyers can deviate with extreme consequences for the seller at very little cost to themselves. For real-world settings we propose a robust version of the approach, which takes the form of a Markov decision process where the actions are functions. We provide approximate policies motivated by the best of sampled set (BOSS) algorithm coupled with approximate Bayesian inference. Numerical studies show that the proposed method significantly increases seller revenue compared to classical price experimentation, even for the single-item case.}
}
%0 Conference Paper
%T Optimal and Robust Price Experimentation: Learning by Lottery
%A Christopher Dance
%A Onno Zoeter
%B Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2011
%E Geoffrey Gordon
%E David Dunson
%E Miroslav Dudík
%F pmlr-v15-dance11a
%I PMLR
%J Proceedings of Machine Learning Research
%P 242--250
%U http://proceedings.mlr.press/v15/dance11a.html
%V 15
%W PMLR
%X This paper studies optimal price learning for one or more items. We introduce the Schrödinger price experiment (SPE), which superimposes classical price experiments using lotteries and thereby extracts more information from each customer interaction. If buyers are perfectly rational, we show that there exist SPEs that, in the limit of infinite superposition, learn optimally and exploit optimally. We refer to the resulting mechanism as the hopeful mechanism (HM) since, although it is incentive compatible, buyers can deviate with extreme consequences for the seller at very little cost to themselves. For real-world settings we propose a robust version of the approach, which takes the form of a Markov decision process where the actions are functions. We provide approximate policies motivated by the best of sampled set (BOSS) algorithm coupled with approximate Bayesian inference. Numerical studies show that the proposed method significantly increases seller revenue compared to classical price experimentation, even for the single-item case.
TY - CPAPER
TI - Optimal and Robust Price Experimentation: Learning by Lottery
AU - Christopher Dance
AU - Onno Zoeter
BT - Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics
PY - 2011/06/14
DA - 2011/06/14
ED - Geoffrey Gordon
ED - David Dunson
ED - Miroslav Dudík
ID - pmlr-v15-dance11a
PB - PMLR
SP - 242
DP - PMLR
EP - 250
L1 - http://proceedings.mlr.press/v15/dance11a/dance11a.pdf
UR - http://proceedings.mlr.press/v15/dance11a.html
AB - This paper studies optimal price learning for one or more items. We introduce the Schrödinger price experiment (SPE), which superimposes classical price experiments using lotteries and thereby extracts more information from each customer interaction. If buyers are perfectly rational, we show that there exist SPEs that, in the limit of infinite superposition, learn optimally and exploit optimally. We refer to the resulting mechanism as the hopeful mechanism (HM) since, although it is incentive compatible, buyers can deviate with extreme consequences for the seller at very little cost to themselves. For real-world settings we propose a robust version of the approach, which takes the form of a Markov decision process where the actions are functions. We provide approximate policies motivated by the best of sampled set (BOSS) algorithm coupled with approximate Bayesian inference. Numerical studies show that the proposed method significantly increases seller revenue compared to classical price experimentation, even for the single-item case.
ER -
Dance, C. & Zoeter, O. (2011). Optimal and Robust Price Experimentation: Learning by Lottery. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, in PMLR 15:242-250.