EIKEA: Enhancing In-Context Knowledge Editing by Agents

Zibo Xu, Xin Wang
Proceedings of the 17th Asian Conference on Machine Learning, PMLR 304:1022-1037, 2025.

Abstract

Recent knowledge editing methods have predominantly concentrated on modifying structured triplet knowledge within large language models. Compared to triplet-based knowledge, unstructured knowledge contains richer and more interrelated information, which makes editing more difficult. When relying solely on parameter-based editing methods, similar pieces of knowledge may interfere with each other due to their semantic overlap. Although previous studies have shown that directly applying in-context editing to unstructured knowledge yields better results than parameter-based approaches, there is still considerable room for improvement. Previous studies have also found that large language models are highly sensitive to the ordering of information in long texts; even the core content of a text may be masked by positional effects. This indicates that, after unstructured facts are rewritten, large language models (LLMs) are better able to process and utilize the rewritten facts than the originals. Inspired by this observation, we propose EIKEA (Enhancing In-Context Knowledge Editing by Agents), a novel method that combines a rewriting agent with IKE (In-Context Knowledge Editing), enabling language models to effectively internalize unstructured factual updates without modifying model parameters. We conduct comprehensive experiments on the WIKIUPDATE subset of the AKEW benchmark, demonstrating that our method significantly improves editing accuracy over baseline IKE and parameter-editing methods. Our method provides a practical, lightweight, and scalable solution to unstructured knowledge editing.
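The pipeline the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: `rewrite_fact` is a hypothetical stand-in for the rewriting agent (in the real method an LLM restructures the fact), and the prompt layout is an assumed IKE-style format, not the authors' actual prompts.

```python
def rewrite_fact(fact: str) -> str:
    """Stand-in for the rewriting agent.

    In EIKEA an LLM agent would restructure the unstructured fact so the
    core content is easy for the target model to pick up; here we only
    tag the text to mark where that agent call would go.
    """
    return f"[rewritten] {fact.strip()}"


def build_ike_prompt(new_fact: str, question: str, demos: list[str]) -> str:
    """In-context knowledge editing: the (rewritten) fact is supplied in
    the prompt instead of being written into the model's parameters."""
    context = "\n".join(demos)        # few-shot editing demonstrations
    fact = rewrite_fact(new_fact)     # agent pass before prompting
    return f"{context}\nNew fact: {fact}\nQuestion: {question}\nAnswer:"


prompt = build_ike_prompt(
    new_fact="As of 2024, the team is coached by Jane Doe, who ...",
    question="Who coaches the team?",
    demos=["New fact: ... Question: ... Answer: ..."],
)
print(prompt)
```

The key design point is that the only trainable component is untouched: all editing happens in the prompt, which is what makes the approach lightweight and parameter-free.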

Cite this Paper


BibTeX
@InProceedings{pmlr-v304-xu25c,
  title     = {EIKEA: Enhancing In-Context Knowledge Editing by Agents},
  author    = {Xu, Zibo and Wang, Xin},
  booktitle = {Proceedings of the 17th Asian Conference on Machine Learning},
  pages     = {1022--1037},
  year      = {2025},
  editor    = {Lee, Hung-yi and Liu, Tongliang},
  volume    = {304},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Dec},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v304/main/assets/xu25c/xu25c.pdf},
  url       = {https://proceedings.mlr.press/v304/xu25c.html},
  abstract  = {Recent knowledge editing methods have predominantly concentrated on modifying structured triplet knowledge within large language models. Compared to triplet-based knowledge, unstructured knowledge contains richer and more interrelated information, which makes editing more difficult. When relying solely on parameter-based editing methods, similar pieces of knowledge may interfere with each other due to their semantic overlap. Although previous studies have shown that directly applying in-context editing to unstructured knowledge yields better results than parameter-based approaches, there is still considerable room for improvement. Previous studies have also found that large language models are highly sensitive to the ordering of information in long texts; even the core content of a text may be masked by positional effects. This indicates that, after unstructured facts are rewritten, large language models (LLMs) are better able to process and utilize the rewritten facts than the originals. Inspired by this observation, we propose EIKEA (Enhancing In-Context Knowledge Editing by Agents), a novel method that combines a rewriting agent with IKE (In-Context Knowledge Editing), enabling language models to effectively internalize unstructured factual updates without modifying model parameters. We conduct comprehensive experiments on the WIKIUPDATE subset of the AKEW benchmark, demonstrating that our method significantly improves editing accuracy over baseline IKE and parameter-editing methods. Our method provides a practical, lightweight, and scalable solution to unstructured knowledge editing.}
}
Endnote
%0 Conference Paper
%T EIKEA: Enhancing In-Context Knowledge Editing by Agents
%A Zibo Xu
%A Xin Wang
%B Proceedings of the 17th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Hung-yi Lee
%E Tongliang Liu
%F pmlr-v304-xu25c
%I PMLR
%P 1022--1037
%U https://proceedings.mlr.press/v304/xu25c.html
%V 304
%X Recent knowledge editing methods have predominantly concentrated on modifying structured triplet knowledge within large language models. Compared to triplet-based knowledge, unstructured knowledge contains richer and more interrelated information, which makes editing more difficult. When relying solely on parameter-based editing methods, similar pieces of knowledge may interfere with each other due to their semantic overlap. Although previous studies have shown that directly applying in-context editing to unstructured knowledge yields better results than parameter-based approaches, there is still considerable room for improvement. Previous studies have also found that large language models are highly sensitive to the ordering of information in long texts; even the core content of a text may be masked by positional effects. This indicates that, after unstructured facts are rewritten, large language models (LLMs) are better able to process and utilize the rewritten facts than the originals. Inspired by this observation, we propose EIKEA (Enhancing In-Context Knowledge Editing by Agents), a novel method that combines a rewriting agent with IKE (In-Context Knowledge Editing), enabling language models to effectively internalize unstructured factual updates without modifying model parameters. We conduct comprehensive experiments on the WIKIUPDATE subset of the AKEW benchmark, demonstrating that our method significantly improves editing accuracy over baseline IKE and parameter-editing methods. Our method provides a practical, lightweight, and scalable solution to unstructured knowledge editing.
APA
Xu, Z. & Wang, X. (2025). EIKEA: Enhancing In-Context Knowledge Editing by Agents. Proceedings of the 17th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 304:1022-1037. Available from https://proceedings.mlr.press/v304/xu25c.html.
