The Lock-in Hypothesis: Stagnation by Algorithm

Tianyi Qiu, Zhonghao He, Tejasveer Chugh, Max Kleiman-Weiner
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:50526-50571, 2025.

Abstract

The training and deployment of large language models (LLMs) create a feedback loop with human users: models learn human beliefs from data, reinforce these beliefs with generated content, reabsorb the reinforced beliefs, and feed them back to users again and again. This dynamic resembles an echo chamber. We hypothesize that this feedback loop entrenches the existing values and beliefs of users, leading to a loss of diversity in human ideas and potentially the lock-in of false beliefs. We formalize this hypothesis and test it empirically with agent-based LLM simulations and real-world GPT usage data. Analysis reveals sudden but sustained drops in diversity after the release of new GPT iterations, consistent with the hypothesized human-AI feedback loop. Website: https://thelockinhypothesis.com
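The hypothesized loop can be illustrated with a toy agent-based sketch (a minimal illustration, not the paper's actual simulation or data): agents hold scalar beliefs, the "model" fits the aggregate of those beliefs, agents shift part-way toward the model's output, and idea diversity, measured here as belief variance, shrinks round after round. All function names and parameters below are assumed for illustration only.

import random
import statistics

def simulate_feedback_loop(n_agents=100, rounds=50, learning_rate=0.2, seed=0):
    """Toy sketch of the hypothesized human-AI feedback loop:
    the 'model' absorbs the mean of user beliefs, users partially
    adopt the model's output, and belief diversity (variance) decays."""
    rng = random.Random(seed)
    beliefs = [rng.gauss(0.0, 1.0) for _ in range(n_agents)]  # initial spread of user beliefs
    diversity = []
    for _ in range(rounds):
        model_output = statistics.fmean(beliefs)               # model learns aggregate beliefs
        beliefs = [b + learning_rate * (model_output - b)      # users move toward model output
                   for b in beliefs]
        diversity.append(statistics.pvariance(beliefs))        # track loss of idea diversity
    return diversity

if __name__ == "__main__":
    div = simulate_feedback_loop()
    print(f"diversity after round 1: {div[0]:.4f}, after round 50: {div[-1]:.4f}")

Under these toy assumptions the variance contracts geometrically each round, which is the qualitative "loss of diversity" pattern the abstract describes; the paper itself tests the hypothesis with LLM agent simulations and real GPT usage data.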

Cite this Paper

BibTeX
@InProceedings{pmlr-v267-qiu25d,
  title     = {The Lock-in Hypothesis: Stagnation by Algorithm},
  author    = {Qiu, Tianyi and He, Zhonghao and Chugh, Tejasveer and Kleiman-Weiner, Max},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {50526--50571},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/qiu25d/qiu25d.pdf},
  url       = {https://proceedings.mlr.press/v267/qiu25d.html},
  abstract  = {The training and deployment of large language models (LLMs) create a feedback loop with human users: models learn human beliefs from data, reinforce these beliefs with generated content, reabsorb the reinforced beliefs, and feed them back to users again and again. This dynamic resembles an echo chamber. We hypothesize that this feedback loop entrenches the existing values and beliefs of users, leading to a loss of diversity in human ideas and potentially the lock-in of false beliefs. We formalize this hypothesis and test it empirically with agent-based LLM simulations and real-world GPT usage data. Analysis reveals sudden but sustained drops in diversity after the release of new GPT iterations, consistent with the hypothesized human-AI feedback loop. Website: https://thelockinhypothesis.com}
}
Endnote
%0 Conference Paper
%T The Lock-in Hypothesis: Stagnation by Algorithm
%A Tianyi Qiu
%A Zhonghao He
%A Tejasveer Chugh
%A Max Kleiman-Weiner
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-qiu25d
%I PMLR
%P 50526--50571
%U https://proceedings.mlr.press/v267/qiu25d.html
%V 267
%X The training and deployment of large language models (LLMs) create a feedback loop with human users: models learn human beliefs from data, reinforce these beliefs with generated content, reabsorb the reinforced beliefs, and feed them back to users again and again. This dynamic resembles an echo chamber. We hypothesize that this feedback loop entrenches the existing values and beliefs of users, leading to a loss of diversity in human ideas and potentially the lock-in of false beliefs. We formalize this hypothesis and test it empirically with agent-based LLM simulations and real-world GPT usage data. Analysis reveals sudden but sustained drops in diversity after the release of new GPT iterations, consistent with the hypothesized human-AI feedback loop. Website: https://thelockinhypothesis.com
APA
Qiu, T., He, Z., Chugh, T., & Kleiman-Weiner, M. (2025). The Lock-in Hypothesis: Stagnation by Algorithm. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:50526-50571. Available from https://proceedings.mlr.press/v267/qiu25d.html.