Value Alignment Verification

Daniel S Brown, Jordan Schneider, Anca Dragan, Scott Niekum
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:1105-1115, 2021.

Abstract

As humans interact with autonomous agents to perform increasingly complicated, potentially risky tasks, it is important to be able to efficiently evaluate an agent’s performance and correctness. In this paper we formalize and theoretically analyze the problem of efficient value alignment verification: how to efficiently test whether the behavior of another agent is aligned with a human’s values? The goal is to construct a kind of "driver’s test" that a human can give to any agent which will verify value alignment via a minimal number of queries. We study alignment verification problems with both idealized humans that have an explicit reward function as well as problems where they have implicit values. We analyze verification of exact value alignment for rational agents, propose and test heuristics for value alignment verification in gridworlds and a continuous autonomous driving domain, and prove that there exist sufficient conditions such that we can verify epsilon-alignment in any environment via a constant-query-complexity alignment test.
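The "driver's test" idea can be made concrete with a toy sketch. This is not the paper's algorithm, just an illustration of the setting: given a human reward function over a tiny gridworld, alignment is verified by querying the agent's preferred action on a small set of test states and checking agreement with the human-optimal policy.

```python
# Toy illustration (not the paper's method): minimal-query alignment testing.
# States 0..4 on a line; human reward is +1 at state 4, 0 elsewhere, so the
# human-optimal action in every non-terminal state is "right".
HUMAN_OPTIMAL = {s: "right" for s in range(4)}  # state 4 is terminal

def alignment_test(agent_policy, query_states):
    """Pass iff the agent matches the human-optimal action on every queried state."""
    return all(agent_policy(s) == HUMAN_OPTIMAL[s] for s in query_states)

# Two hypothetical agents: one aligned, one that errs in state 2.
aligned_agent = lambda s: "right"
misaligned_agent = lambda s: "left" if s == 2 else "right"

print(alignment_test(aligned_agent, [0, 2]))     # True
print(alignment_test(misaligned_agent, [0, 2]))  # False
```

The point of the paper is choosing `query_states` so that few queries suffice; here two queries catch the misaligned agent, but in general the test set must be constructed carefully from the human's reward.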

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-brown21a,
  title     = {Value Alignment Verification},
  author    = {Brown, Daniel S and Schneider, Jordan and Dragan, Anca and Niekum, Scott},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {1105--1115},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/brown21a/brown21a.pdf},
  url       = {https://proceedings.mlr.press/v139/brown21a.html},
  abstract  = {As humans interact with autonomous agents to perform increasingly complicated, potentially risky tasks, it is important to be able to efficiently evaluate an agent’s performance and correctness. In this paper we formalize and theoretically analyze the problem of efficient value alignment verification: how to efficiently test whether the behavior of another agent is aligned with a human’s values? The goal is to construct a kind of "driver’s test" that a human can give to any agent which will verify value alignment via a minimal number of queries. We study alignment verification problems with both idealized humans that have an explicit reward function as well as problems where they have implicit values. We analyze verification of exact value alignment for rational agents, propose and test heuristics for value alignment verification in gridworlds and a continuous autonomous driving domain, and prove that there exist sufficient conditions such that we can verify epsilon-alignment in any environment via a constant-query-complexity alignment test.}
}
Endnote
%0 Conference Paper
%T Value Alignment Verification
%A Daniel S Brown
%A Jordan Schneider
%A Anca Dragan
%A Scott Niekum
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-brown21a
%I PMLR
%P 1105--1115
%U https://proceedings.mlr.press/v139/brown21a.html
%V 139
%X As humans interact with autonomous agents to perform increasingly complicated, potentially risky tasks, it is important to be able to efficiently evaluate an agent’s performance and correctness. In this paper we formalize and theoretically analyze the problem of efficient value alignment verification: how to efficiently test whether the behavior of another agent is aligned with a human’s values? The goal is to construct a kind of "driver’s test" that a human can give to any agent which will verify value alignment via a minimal number of queries. We study alignment verification problems with both idealized humans that have an explicit reward function as well as problems where they have implicit values. We analyze verification of exact value alignment for rational agents, propose and test heuristics for value alignment verification in gridworlds and a continuous autonomous driving domain, and prove that there exist sufficient conditions such that we can verify epsilon-alignment in any environment via a constant-query-complexity alignment test.
APA
Brown, D.S., Schneider, J., Dragan, A. &amp; Niekum, S. (2021). Value Alignment Verification. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:1105-1115. Available from https://proceedings.mlr.press/v139/brown21a.html.