Cultural Representation Bias and Alignment Divergence in Large Language Models
Proceedings of AAAI 2026 Workshop on Bias in Multimodal AI, PMLR 332:11-14, 2026.
Abstract
Large Language Models (LLMs) are increasingly deployed as globally applicable tools, yet their internal mechanisms remain deeply conditioned by regional cultural schemas. Through a three-stage cultural audit comparing Western and Chinese LLMs, we identify a systematic divergence in how these models prioritize core social values. Quantitative results reveal a stark contrast: Western models consistently prioritize individualistic constructs such as "Autonomy", while Chinese models favor relational ethics such as "Harmony". We attribute this divergence to a two-stage "cultural imprinting" process that operates during large-scale pre-training and subsequent human-feedback refinement. This cumulative imprinting suggests that aligning AI to a single set of cultural standards may inadvertently impose a restrictive cultural lens on model behavior, creating a risk that cultural differences are misconstrued as moral or behavioral deficits. Consequently, we advocate for the development of locally aligned models and multidisciplinary fairness metrics to ensure global representational equity in the era of foundation models.
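
The abstract reports a quantitative contrast in value prioritization but does not describe the scoring procedure itself. As a minimal, hypothetical sketch of how such a divergence could be quantified, and not the authors' actual audit method, the snippet below compares value-label distributions from two models using Jensen-Shannon divergence; the value categories, label counts, and variable names are all invented for illustration.

```python
# Illustrative sketch only: the audit protocol in the paper is not specified here.
# All value categories and counts below are hypothetical.
from collections import Counter
from math import log2

VALUES = ["Autonomy", "Harmony", "Fairness", "Loyalty", "Authority"]

def value_distribution(labels: list[str]) -> list[float]:
    """Turn per-response value labels into a normalized distribution over VALUES."""
    counts = Counter(labels)
    total = sum(counts[v] for v in VALUES) or 1
    return [counts[v] / total for v in VALUES]

def js_divergence(p: list[float], q: list[float]) -> float:
    """Jensen-Shannon divergence (base 2) between two value distributions."""
    def kl(a, b):
        return sum(x * log2(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical audit outputs: the dominant value label assigned to each model response.
western_labels = ["Autonomy"] * 62 + ["Harmony"] * 14 + ["Fairness"] * 24
chinese_labels = ["Harmony"] * 58 + ["Autonomy"] * 18 + ["Loyalty"] * 24

divergence = js_divergence(value_distribution(western_labels),
                           value_distribution(chinese_labels))
print(f"Value-prioritization divergence (JSD, bits): {divergence:.3f}")
```

A larger divergence under a sketch like this would indicate that the two models systematically foreground different value categories, which is the kind of contrast the abstract summarizes.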