How do Large Language Models Navigate Conflicts between Honesty and Helpfulness?

Ryan Liu, Theodore Sumers, Ishita Dasgupta, Thomas L. Griffiths
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:31844-31865, 2024.

Abstract

In day-to-day communication, people often approximate the truth — for example, rounding the time or omitting details — in order to be maximally helpful to the listener. How do large language models (LLMs) handle such nuanced trade-offs? To address this question, we use psychological models and experiments designed to characterize human behavior to analyze LLMs. We test a range of LLMs and explore how optimization for human preferences or inference-time reasoning affects these trade-offs. We find that reinforcement learning from human feedback improves both honesty and helpfulness, while chain-of-thought prompting skews LLMs towards helpfulness over honesty. Finally, GPT-4 Turbo demonstrates human-like response patterns including sensitivity to the conversational framing and listener’s decision context. Our findings reveal the conversational values internalized by LLMs and suggest that even these abstract values can, to a degree, be steered by zero-shot prompting.

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-liu24bb, title = {How do Large Language Models Navigate Conflicts between Honesty and Helpfulness?}, author = {Liu, Ryan and Sumers, Theodore and Dasgupta, Ishita and Griffiths, Thomas L.}, booktitle = {Proceedings of the 41st International Conference on Machine Learning}, pages = {31844--31865}, year = {2024}, editor = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix}, volume = {235}, series = {Proceedings of Machine Learning Research}, month = {21--27 Jul}, publisher = {PMLR}, pdf = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24bb/liu24bb.pdf}, url = {https://proceedings.mlr.press/v235/liu24bb.html}, abstract = {In day-to-day communication, people often approximate the truth — for example, rounding the time or omitting details — in order to be maximally helpful to the listener. How do large language models (LLMs) handle such nuanced trade-offs? To address this question, we use psychological models and experiments designed to characterize human behavior to analyze LLMs. We test a range of LLMs and explore how optimization for human preferences or inference-time reasoning affects these trade-offs. We find that reinforcement learning from human feedback improves both honesty and helpfulness, while chain-of-thought prompting skews LLMs towards helpfulness over honesty. Finally, GPT-4 Turbo demonstrates human-like response patterns including sensitivity to the conversational framing and listener’s decision context. Our findings reveal the conversational values internalized by LLMs and suggest that even these abstract values can, to a degree, be steered by zero-shot prompting.} }
Endnote
%0 Conference Paper %T How do Large Language Models Navigate Conflicts between Honesty and Helpfulness? %A Ryan Liu %A Theodore Sumers %A Ishita Dasgupta %A Thomas L. Griffiths %B Proceedings of the 41st International Conference on Machine Learning %C Proceedings of Machine Learning Research %D 2024 %E Ruslan Salakhutdinov %E Zico Kolter %E Katherine Heller %E Adrian Weller %E Nuria Oliver %E Jonathan Scarlett %E Felix Berkenkamp %F pmlr-v235-liu24bb %I PMLR %P 31844--31865 %U https://proceedings.mlr.press/v235/liu24bb.html %V 235 %X In day-to-day communication, people often approximate the truth — for example, rounding the time or omitting details — in order to be maximally helpful to the listener. How do large language models (LLMs) handle such nuanced trade-offs? To address this question, we use psychological models and experiments designed to characterize human behavior to analyze LLMs. We test a range of LLMs and explore how optimization for human preferences or inference-time reasoning affects these trade-offs. We find that reinforcement learning from human feedback improves both honesty and helpfulness, while chain-of-thought prompting skews LLMs towards helpfulness over honesty. Finally, GPT-4 Turbo demonstrates human-like response patterns including sensitivity to the conversational framing and listener’s decision context. Our findings reveal the conversational values internalized by LLMs and suggest that even these abstract values can, to a degree, be steered by zero-shot prompting.
APA
Liu, R., Sumers, T., Dasgupta, I. & Griffiths, T.L.. (2024). How do Large Language Models Navigate Conflicts between Honesty and Helpfulness?. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:31844-31865 Available from https://proceedings.mlr.press/v235/liu24bb.html.

Related Material