Towards Robust and Scalable Knowledge Editing in Text-to-Image Diffusion Models

YiFei Liu, Xin Wang
Proceedings of the 17th Asian Conference on Machine Learning, PMLR 304:990-1005, 2025.

Abstract

Knowledge editing in Text-to-Image (T2I) diffusion models aims to update specific factual associations without disrupting unrelated knowledge. However, existing methods often suffer from unintended collateral effects, where editing a single fact can alter the representation of non-target named entities and degrade generation quality for unrelated prompts. This problem becomes more severe in real-world, dynamic environments that require frequent updates. To address this challenge, we introduce a novel editing framework supporting large-scale T2I knowledge editing. Our framework incorporates our proposed Entity-Aware Text Alignment (EATA) to penalize unintended changes in unaffected entities and employs a principled null-space projection strategy to minimize perturbations to existing knowledge. Experimental results demonstrate that our approach enables precise and robust large-scale T2I knowledge editing, preserves the integrity of unrelated content, and maintains high generation fidelity, while offering scalability for continuous editing scenarios.
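The null-space projection idea mentioned in the abstract can be illustrated with a minimal sketch (an assumption-laden toy example, not the paper's exact method): a raw weight update is projected onto the null space of the "key" activations that encode knowledge to be preserved, so the edited layer produces identical outputs for those keys.

```python
import numpy as np

# Hypothetical illustration of null-space projected editing (not the paper's
# exact algorithm). K's columns are activations whose outputs must be
# preserved; dW is a raw edit update for an m x d weight matrix W. After
# projection, (W + dW_proj) @ k == W @ k for every preserved key k.
rng = np.random.default_rng(0)
d, n, m = 8, 3, 4
K = rng.standard_normal((d, n))   # keys spanning knowledge to preserve
dW = rng.standard_normal((m, d))  # unconstrained edit update

# Orthonormal basis of null(K^T) via SVD: columns of U beyond rank(K)
# are orthogonal to every preserved key.
U, S, _ = np.linalg.svd(K, full_matrices=True)
rank = int(np.sum(S > 1e-10))
N = U[:, rank:]                   # basis of the null space of K^T

P = N @ N.T                       # orthogonal projector onto null(K^T)
dW_proj = dW @ P                  # projected update

# Preserved keys are untouched by the projected edit.
print(np.allclose(dW_proj @ K, 0))  # True
```

Applying `dW_proj` instead of `dW` guarantees zero first-order change on the preserved keys, which is the sense in which such a projection "minimizes perturbations to existing knowledge."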

Cite this Paper


BibTeX
@InProceedings{pmlr-v304-liu25d,
  title     = {Towards Robust and Scalable Knowledge Editing in Text-to-Image Diffusion Models},
  author    = {Liu, YiFei and Wang, Xin},
  booktitle = {Proceedings of the 17th Asian Conference on Machine Learning},
  pages     = {990--1005},
  year      = {2025},
  editor    = {Lee, Hung-yi and Liu, Tongliang},
  volume    = {304},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Dec},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v304/main/assets/liu25d/liu25d.pdf},
  url       = {https://proceedings.mlr.press/v304/liu25d.html},
  abstract  = {Knowledge editing in Text-to-Image (T2I) diffusion models aims to update specific factual associations without disrupting unrelated knowledge. However, existing methods often suffer from unintended collateral effects, where editing a single fact can alter the representation of non-target named entities and degrade generation quality for unrelated prompts. This problem becomes more severe in real-world, dynamic environments that require frequent updates. To address this challenge, we introduce a novel editing framework supporting large-scale T2I knowledge editing. Our framework incorporates our proposed Entity-Aware Text Alignment (EATA) to penalize unintended changes in unaffected entities and employs a principled null-space projection strategy to minimize perturbations to existing knowledge. Experimental results demonstrate that our approach enables precise and robust large-scale T2I knowledge editing, preserves the integrity of unrelated content, and maintains high generation fidelity, while offering scalability for continuous editing scenarios.}
}
Endnote
%0 Conference Paper
%T Towards Robust and Scalable Knowledge Editing in Text-to-Image Diffusion Models
%A YiFei Liu
%A Xin Wang
%B Proceedings of the 17th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Hung-yi Lee
%E Tongliang Liu
%F pmlr-v304-liu25d
%I PMLR
%P 990--1005
%U https://proceedings.mlr.press/v304/liu25d.html
%V 304
%X Knowledge editing in Text-to-Image (T2I) diffusion models aims to update specific factual associations without disrupting unrelated knowledge. However, existing methods often suffer from unintended collateral effects, where editing a single fact can alter the representation of non-target named entities and degrade generation quality for unrelated prompts. This problem becomes more severe in real-world, dynamic environments that require frequent updates. To address this challenge, we introduce a novel editing framework supporting large-scale T2I knowledge editing. Our framework incorporates our proposed Entity-Aware Text Alignment (EATA) to penalize unintended changes in unaffected entities and employs a principled null-space projection strategy to minimize perturbations to existing knowledge. Experimental results demonstrate that our approach enables precise and robust large-scale T2I knowledge editing, preserves the integrity of unrelated content, and maintains high generation fidelity, while offering scalability for continuous editing scenarios.
APA
Liu, Y. &amp; Wang, X. (2025). Towards Robust and Scalable Knowledge Editing in Text-to-Image Diffusion Models. Proceedings of the 17th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 304:990-1005. Available from https://proceedings.mlr.press/v304/liu25d.html.