Reliable Cultural Knowledge Preservation in Multilingual LLMs through Model Merging
Reliable and Trustworthy Artificial Intelligence 2025, PMLR 310:59-66, 2025.
Abstract
We introduce a reliable approach for enhancing multilingual language models that preserves cultural knowledge while improving reasoning capabilities, focusing on low-resource languages. Using Qwen as a base model, we demonstrate that trust-aware model merging can verifiably improve performance without compromising cultural understanding. Our proposed approach achieves quantifiable improvements in both reasoning tasks and cultural benchmarks while maintaining computational efficiency. Results on Vietnamese and Arabic language tasks show consistent performance gains while preserving cultural knowledge, offering a reliable path for developing trustworthy multilingual AI systems. Our models are available at github.com/WARA-ML/waraml-mini-brains.
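The abstract does not specify the merging procedure, but the general idea behind model merging can be illustrated with simple linear interpolation of parameters (a "model soup"). The sketch below is a hypothetical illustration, not the paper's trust-aware method: the function name `merge_state_dicts` and the weight `alpha` are assumptions, and parameters are shown as plain floats rather than tensors for clarity.

```python
# Hypothetical sketch of linear model merging, NOT the paper's exact
# trust-aware algorithm. Each parameter of the merged model is an
# interpolation between a base model and a fine-tuned model.

def merge_state_dicts(base, finetuned, alpha=0.5):
    """Return a merged state dict: (1 - alpha) * base + alpha * finetuned."""
    assert base.keys() == finetuned.keys(), "models must share architecture"
    return {k: (1 - alpha) * base[k] + alpha * finetuned[k] for k in base}

# Toy "state dicts" (real models hold tensors, e.g. torch parameters).
base_model = {"layer.weight": 1.0, "layer.bias": 0.0}
reasoning_model = {"layer.weight": 2.0, "layer.bias": 0.4}

merged = merge_state_dicts(base_model, reasoning_model, alpha=0.5)
# merged["layer.weight"] == 1.5, merged["layer.bias"] == 0.2
```

A trust-aware variant would replace the single global `alpha` with per-parameter or per-layer weights chosen to protect culturally relevant knowledge while importing reasoning capability.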