Adaptive Exploration for Multi-Reward Multi-Policy Evaluation
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:52382-52421, 2025.
Abstract
We study the policy evaluation problem in an online multi-reward multi-policy discounted setting, where multiple reward functions must be evaluated simultaneously for different policies. We adopt an $(\epsilon,\delta)$-PAC perspective to achieve $\epsilon$-accurate estimates with high confidence over finite or convex sets of rewards, a setting that has not been systematically studied in the literature. Building on prior work on Multi-Reward Best Policy Identification, we adapt the MR-NaS exploration scheme to jointly minimize sample complexity for evaluating different policies across different reward sets. Our approach leverages an instance-specific lower bound revealing how the sample complexity scales with a measure of value deviation, guiding the design of an efficient exploration policy. Although computing this bound entails a hard non-convex optimization, we propose an efficient convex approximation that holds for both finite and convex reward sets. Experiments in tabular domains demonstrate the effectiveness of this adaptive exploration scheme. Code repository: https://github.com/rssalessio/multi-reward-multi-policy-eval.
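For reference, the $(\epsilon,\delta)$-PAC requirement described in the abstract can be written as follows; this is a minimal formalization using standard notation, where $\Pi$, $\mathcal{R}$, and $\widehat{V}$ are shorthand introduced here rather than symbols taken from the paper:

\[
\Pr\Big( \big|\widehat{V}^{\pi}_{r} - V^{\pi}_{r}\big| \le \epsilon \ \ \text{for all } \pi \in \Pi,\ r \in \mathcal{R} \Big) \ \ge\ 1 - \delta,
\]

where $V^{\pi}_{r}$ denotes the discounted value of policy $\pi$ under reward function $r$, $\widehat{V}^{\pi}_{r}$ its estimate after exploration, $\Pi$ the set of policies to evaluate, and $\mathcal{R}$ the (finite or convex) set of rewards.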