LLMs’ Pluralistic Compatibility
Proceedings of Fourth European Workshop on Algorithmic Fairness, PMLR 294:344-350, 2025.
Abstract
Amid growing recognition of the influence of large language models (LLMs) on societies around the world, designers, scholars, and practitioners are turning to the development and deployment of value-pluralistic models. This extended abstract critically assesses emerging approaches to pluralistic alignment in LLMs. We distinguish between two primary strategies: procedural pluralism, which embeds pluralistic principles into model development processes, and behavioral pluralism, which concerns the values LLMs express in interaction. For each, we examine the underlying normative assumptions and commitments, highlighting tensions between design choices and the demands of pluralism. To meaningfully incorporate pluralism into LLM design, scholars must grapple with its conceptual complexity and contested dimensions. Crucially, this includes clarifying the goals of pluralistic alignment and articulating why pluralism matters for a given application context.