Planning in Reward-Rich Domains via PAC Bandits

Sergiu Goschin, Ari Weinstein, Michael L. Littman, Erick Chastain;
Proceedings of the Tenth European Workshop on Reinforcement Learning, PMLR 24:25-42, 2013.

Abstract

In some decision-making environments, successful solutions are common. If the evaluation of candidate solutions is noisy, however, the challenge is knowing when a “good enough” answer has been found. We formalize this problem as an infinite-armed bandit and provide upper and lower bounds on the number of evaluations, or “pulls”, needed to identify a solution whose evaluation exceeds a given threshold r0. We present several algorithms and use them to identify reliable strategies for solving screens from the video games Infinite Mario and Pitfall! We show order-of-magnitude improvements in sample complexity over a natural approach that pulls each arm until a good estimate of its success probability is known.
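To make the problem setting concrete, the following is a minimal sketch of the infinite-armed threshold-identification task described above: candidate solutions ("arms") are drawn one at a time from an infinite pool, each pull returns a noisy 0/1 evaluation, and the goal is to stop on an arm whose success probability confidently exceeds the threshold r0. It is not the paper's algorithm; the names (draw_arm, r0, delta) and the Hoeffding-style accept/reject rule are illustrative assumptions.

```python
"""Illustrative sketch only: a sequential accept/reject loop for an
infinite-armed bandit with a success-probability threshold r0.
All names and the stopping rule are assumptions, not the paper's method."""
import math
import random


def hoeffding_radius(n: int, delta: float) -> float:
    """Half-width of a (1 - delta) confidence interval after n Bernoulli pulls."""
    return math.sqrt(math.log(2.0 / delta) / (2.0 * n))


def find_good_arm(draw_arm, r0: float, delta: float, max_pulls_per_arm: int = 10_000):
    """Draw fresh arms from an infinite pool; pull each until its confidence
    interval falls entirely above r0 (accept) or entirely below r0 (reject).

    draw_arm: callable returning a new arm; an arm is a callable that
              returns a {0, 1} reward when pulled (a noisy evaluation).
    Returns (arm, empirical_mean, total_pulls) for the first accepted arm.
    """
    total_pulls = 0
    while True:
        arm = draw_arm()
        successes, n = 0, 0
        while n < max_pulls_per_arm:
            successes += arm()
            n += 1
            total_pulls += 1
            mean = successes / n
            radius = hoeffding_radius(n, delta)
            if mean - radius > r0:      # confidently above the threshold: accept
                return arm, mean, total_pulls
            if mean + radius < r0:      # confidently below the threshold: reject
                break                   # move on to the next candidate arm


if __name__ == "__main__":
    # Toy demo: arms are Bernoulli with success probabilities drawn uniformly,
    # so arms exceeding r0 are common, as in a reward-rich domain.
    def draw_arm():
        p = random.random()
        return lambda: 1 if random.random() < p else 0

    arm, mean, pulls = find_good_arm(draw_arm, r0=0.7, delta=0.05)
    print(f"Accepted an arm with empirical success rate {mean:.2f} after {pulls} pulls")
```

The point of the sketch is the contrast drawn in the abstract: rather than estimating every arm's success probability accurately, an arm can be rejected as soon as it is confidently below r0, which is much cheaper when good arms are plentiful.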