Balancing Knowledge Updates: Toward Unified Modular Editing in LLMs

Jiahao Liu, Zijian Wang, Zhao Kuo, Dong Hu
Proceedings of the 17th Asian Conference on Machine Learning, PMLR 304:335-350, 2025.

Abstract

Knowledge editing has emerged as an efficient approach for updating factual knowledge in large language models (LLMs), typically achieved by first locating key knowledge-storage modules and then modifying their parameters. However, most existing methods focus exclusively on updating the weights of Multi-Layer Perceptron (MLP) modules, which are commonly identified as the primary repositories of factual information. Other important components, such as attention (Attn) modules, one of the core modules in LLMs, are often ignored during editing. This biased allocation of updates can leave residual outdated knowledge in the model and limit the effectiveness of knowledge editing. In this paper, we conduct comprehensive and systematic knowledge localization experiments on advanced LLMs, revealing that Attn modules play a substantial role in factual knowledge storage and retrieval, especially in earlier layers. Building on these insights, we propose IntAttn-Edit, a novel method that extends the associative memory paradigm to jointly update both MLP and Attn modules. Our approach employs a knowledge balancing strategy that proportionally allocates update magnitudes based on each module's measured contribution to knowledge storage. Extensive experiments on popular benchmarks demonstrate that IntAttn-Edit consistently achieves superior results over existing methods, delivering higher edit success, improved generalization, and robust knowledge preservation. Further empirical analysis shows that our knowledge balancing strategy enables the editing performance to remain within the optimal range across different settings.
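The abstract's knowledge balancing strategy allocates update magnitudes across MLP and Attn modules in proportion to each module's measured contribution to knowledge storage. A minimal sketch of that proportional-allocation idea is below; the function name, the contribution scores, and the module labels are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of proportional update allocation, as described in the
# abstract: split a total update budget between modules according to their
# measured contribution to knowledge storage. The scores below are made up.

def balance_updates(total_update_norm, contributions):
    """Allocate an update budget proportionally to per-module contribution scores."""
    total = sum(contributions.values())
    return {
        name: total_update_norm * score / total
        for name, score in contributions.items()
    }

# Example: suppose a localization probe attributes 7 units of knowledge recall
# to the MLP and 3 to attention in some layer (illustrative numbers only).
alloc = balance_updates(1.0, {"mlp": 7, "attn": 3})
print(alloc)  # {'mlp': 0.7, 'attn': 0.3}
```

Under this scheme, layers where attention stores more knowledge (e.g., the earlier layers highlighted in the paper) would automatically receive a larger share of the edit applied to their Attn weights.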

Cite this Paper


BibTeX
@InProceedings{pmlr-v304-liu25b,
  title     = {Balancing Knowledge Updates: Toward Unified Modular Editing in LLMs},
  author    = {Liu, Jiahao and Wang, Zijian and Kuo, Zhao and Hu, Dong},
  booktitle = {Proceedings of the 17th Asian Conference on Machine Learning},
  pages     = {335--350},
  year      = {2025},
  editor    = {Lee, Hung-yi and Liu, Tongliang},
  volume    = {304},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Dec},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v304/main/assets/liu25b/liu25b.pdf},
  url       = {https://proceedings.mlr.press/v304/liu25b.html},
  abstract  = {Knowledge editing has emerged as an efficient approach for updating factual knowledge in large language models (LLMs), typically achieved by first locating key knowledge-storage modules and then modifying their parameters. However, most existing methods focus exclusively on updating the weights of Multi-Layer Perceptron (MLP) modules, which are commonly identified as the primary repositories of factual information. Other important components, such as attention (Attn) modules, one of the core modules in LLMs, are often ignored during editing. This biased allocation of updates can leave residual outdated knowledge in the model and limit the effectiveness of knowledge editing. In this paper, we conduct comprehensive and systematic knowledge localization experiments on advanced LLMs, revealing that Attn modules play a substantial role in factual knowledge storage and retrieval, especially in earlier layers. Building on these insights, we propose \textit{IntAttn-Edit}, a novel method that extends the associative memory paradigm to jointly update both MLP and Attn modules. Our approach employs a knowledge balancing strategy that proportionally allocates update magnitudes based on each module's measured contribution to knowledge storage. Extensive experiments on popular benchmarks demonstrate that \textit{IntAttn-Edit} consistently achieves superior results over existing methods, delivering higher edit success, improved generalization, and robust knowledge preservation. Further empirical analysis shows that our knowledge balancing strategy enables the editing performance to remain within the optimal range across different settings.}
}
Endnote
%0 Conference Paper
%T Balancing Knowledge Updates: Toward Unified Modular Editing in LLMs
%A Jiahao Liu
%A Zijian Wang
%A Zhao Kuo
%A Dong Hu
%B Proceedings of the 17th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Hung-yi Lee
%E Tongliang Liu
%F pmlr-v304-liu25b
%I PMLR
%P 335--350
%U https://proceedings.mlr.press/v304/liu25b.html
%V 304
%X Knowledge editing has emerged as an efficient approach for updating factual knowledge in large language models (LLMs), typically achieved by first locating key knowledge-storage modules and then modifying their parameters. However, most existing methods focus exclusively on updating the weights of Multi-Layer Perceptron (MLP) modules, which are commonly identified as the primary repositories of factual information. Other important components, such as attention (Attn) modules, one of the core modules in LLMs, are often ignored during editing. This biased allocation of updates can leave residual outdated knowledge in the model and limit the effectiveness of knowledge editing. In this paper, we conduct comprehensive and systematic knowledge localization experiments on advanced LLMs, revealing that Attn modules play a substantial role in factual knowledge storage and retrieval, especially in earlier layers. Building on these insights, we propose IntAttn-Edit, a novel method that extends the associative memory paradigm to jointly update both MLP and Attn modules. Our approach employs a knowledge balancing strategy that proportionally allocates update magnitudes based on each module's measured contribution to knowledge storage. Extensive experiments on popular benchmarks demonstrate that IntAttn-Edit consistently achieves superior results over existing methods, delivering higher edit success, improved generalization, and robust knowledge preservation. Further empirical analysis shows that our knowledge balancing strategy enables the editing performance to remain within the optimal range across different settings.
APA
Liu, J., Wang, Z., Kuo, Z., & Hu, D. (2025). Balancing Knowledge Updates: Toward Unified Modular Editing in LLMs. Proceedings of the 17th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 304:335-350. Available from https://proceedings.mlr.press/v304/liu25b.html.
