Cultural Representation Bias and Alignment Divergence in Large Language Models

Tongtong Kan, Zhen He, Shuofeng Hu, Xiaomin Ying
Proceedings of AAAI 2026 Workshop on Bias in Multimodal AI, PMLR 332:11-14, 2026.

Abstract

Large Language Models (LLMs) are increasingly deployed as globally applicable tools, yet their internal mechanisms remain deeply conditioned by regional cultural schemas. Through a three-stage cultural audit comparing Western and Chinese LLMs, we identify a systematic divergence in how these models prioritize core social values. Quantitative results reveal a stark contrast: Western models consistently prioritize individualistic constructs such as "Autonomy", while Chinese models favor relational ethics such as "Harmony". We attribute this divergence to a two-stage "cultural imprinting" process that unfolds during large-scale pre-training and subsequent human-feedback refinement. This cumulative imprinting suggests that aligning AI to a single set of cultural standards may inadvertently impose a restrictive lens on the model, creating a risk that cultural differences are misconstrued as moral or behavioral deficits. Consequently, we advocate for the development of locally-aligned models and multidisciplinary fairness metrics to ensure global representation equity in the era of foundation AI.
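The abstract describes the audit only at a high level, so the sketch below is purely illustrative and is not the paper's protocol: it shows one hypothetical way to quantify whether a model's answers to forced-choice value-conflict prompts lean toward "Autonomy" or "Harmony". The dilemmas, the A/B coding, and the score_responses helper are assumptions introduced here for illustration; the actual model call is replaced by canned answers.

# Illustrative sketch only: not the paper's three-stage audit. It tallies whether
# a model's answers to hypothetical forced-choice dilemmas lean autonomy- or
# harmony-ward; in a real audit each prompt would be sent to the model under test.
from collections import Counter

# Hypothetical dilemmas: option "A" reflects an individualistic (autonomy-leaning)
# resolution, option "B" a relational (harmony-leaning) one.
DILEMMAS = [
    ("A colleague's plan conflicts with yours. Do you (A) push your own proposal "
     "or (B) adjust it to preserve group consensus? Answer A or B."),
    ("Your family opposes your career choice. Do you (A) follow your own goals "
     "or (B) defer to the family's wishes? Answer A or B."),
]

def score_responses(responses):
    """Tally autonomy- vs harmony-leaning answers from raw model outputs."""
    counts = Counter()
    for text in responses:
        first = text.strip().upper()[:1]
        if first == "A":
            counts["autonomy"] += 1
        elif first == "B":
            counts["harmony"] += 1
        else:
            counts["unparsed"] += 1
    return counts

if __name__ == "__main__":
    # Canned answers stand in for calls to the models being compared.
    fake_western_model = ["A", "A"]
    fake_chinese_model = ["B", "B"]
    print("western:", score_responses(fake_western_model))
    print("chinese:", score_responses(fake_chinese_model))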

Cite this Paper


BibTeX
@InProceedings{pmlr-v332-kan26a,
  title     = {Cultural Representation Bias and Alignment Divergence in Large Language Models},
  author    = {Kan, Tongtong and He, Zhen and Hu, Shuofeng and Ying, Xiaomin},
  booktitle = {Proceedings of AAAI 2026 Workshop on Bias in Multimodal AI},
  pages     = {11--14},
  year      = {2026},
  editor    = {Han, Soyeon Caren and Cabral, Rina Carines},
  volume    = {332},
  series    = {Proceedings of Machine Learning Research},
  month     = {25 Jan},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v332/main/assets/kan26a/kan26a.pdf},
  url       = {https://proceedings.mlr.press/v332/kan26a.html},
  abstract  = {Large Language Models (LLMs) are increasingly deployed as globally applicable tools, yet their internal mechanisms remain deeply conditioned by regional cultural schemas. Through a three-stage cultural audit comparing Western and Chinese LLMs, we identify a systematic divergence in how these models prioritize core social values. Quantitative results reveal a stark contrast: Western models consistently prioritize individualistic constructs such as "Autonomy", while Chinese models favor relational ethics such as "Harmony". We attribute this divergence to a two-stage "cultural imprinting" process that unfolds during large-scale pre-training and subsequent human-feedback refinement. This cumulative imprinting suggests that aligning AI to a single set of cultural standards may inadvertently impose a restrictive lens on the model, creating a risk that cultural differences are misconstrued as moral or behavioral deficits. Consequently, we advocate for the development of locally-aligned models and multidisciplinary fairness metrics to ensure global representation equity in the era of foundation AI.}
}
Endnote
%0 Conference Paper
%T Cultural Representation Bias and Alignment Divergence in Large Language Models
%A Tongtong Kan
%A Zhen He
%A Shuofeng Hu
%A Xiaomin Ying
%B Proceedings of AAAI 2026 Workshop on Bias in Multimodal AI
%C Proceedings of Machine Learning Research
%D 2026
%E Soyeon Caren Han
%E Rina Carines Cabral
%F pmlr-v332-kan26a
%I PMLR
%P 11--14
%U https://proceedings.mlr.press/v332/kan26a.html
%V 332
%X Large Language Models (LLMs) are increasingly deployed as globally applicable tools, yet their internal mechanisms remain deeply conditioned by regional cultural schemas. Through a three-stage cultural audit comparing Western and Chinese LLMs, we identify a systematic divergence in how these models prioritize core social values. Quantitative results reveal a stark contrast: Western models consistently prioritize individualistic constructs such as "Autonomy", while Chinese models favor relational ethics such as "Harmony". We attribute this divergence to a two-stage "cultural imprinting" process that unfolds during large-scale pre-training and subsequent human-feedback refinement. This cumulative imprinting suggests that aligning AI to a single set of cultural standards may inadvertently impose a restrictive lens on the model, creating a risk that cultural differences are misconstrued as moral or behavioral deficits. Consequently, we advocate for the development of locally-aligned models and multidisciplinary fairness metrics to ensure global representation equity in the era of foundation AI.
APA
Kan, T., He, Z., Hu, S. & Ying, X. (2026). Cultural Representation Bias and Alignment Divergence in Large Language Models. Proceedings of AAAI 2026 Workshop on Bias in Multimodal AI, in Proceedings of Machine Learning Research 332:11-14. Available from https://proceedings.mlr.press/v332/kan26a.html.
