Memorization Sinks: Isolating Memorization during LLM Training

Gaurav Rohit Ghosal, Pratyush Maini, Aditi Raghunathan
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:19307-19326, 2025.

Abstract

Large language models are susceptible to memorizing repeated sequences, posing privacy and copyright concerns. A popular mitigation strategy is to remove memorized information from specific neurons post-hoc. However, such approaches have shown limited success so far. In a controlled setting, we show that the memorization of natural sequences (those that resemble linguistically plausible text) becomes mechanistically entangled with general language abilities, thereby becoming challenging to remove post-hoc. In this work, we put forward a new paradigm of MemSinks that promotes isolation of memorization by design. We leverage a sequence identifier to activate a unique set of memorization neurons for each sequence across repetitions. By analyzing the dynamics of learning and forgetting, we argue that MemSinks facilitates clean isolation of memorized content, making it easier to remove without compromising general language capabilities. We implement MemSinks at the billion-parameter and billion-token scale, and observe both effective isolation and strong generalization. To our knowledge, this is the first proof-of-concept on real data demonstrating that simultaneous generalization and isolation is achievable. We open-source our code at https://github.com/grghosal/MemSinks.
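The mechanism the abstract describes (a sequence identifier that activates a fixed, sequence-specific subset of "memorization" neurons on every repetition, alongside shared neurons that carry general language ability) can be illustrated with a small sketch. The following PyTorch rendering is a minimal illustration under stated assumptions, not the authors' implementation: the module name MemSinkMLP, the split into n_gen shared neurons and n_mem sink neurons, and the seed-based construction of the mask are all hypothetical choices; the released code at https://github.com/grghosal/MemSinks contains the actual method.

import torch
import torch.nn as nn

class MemSinkMLP(nn.Module):
    # Feed-forward block whose hidden units are split into always-active
    # "generalization" neurons and a pool of "memorization sink" neurons,
    # of which only a sequence-specific subset is gated on.
    def __init__(self, d_model, n_gen, n_mem, k_active):
        super().__init__()
        self.up = nn.Linear(d_model, n_gen + n_mem)
        self.down = nn.Linear(n_gen + n_mem, d_model)
        self.n_gen, self.n_mem, self.k_active = n_gen, n_mem, k_active

    def mem_mask(self, seq_id, device):
        # Deterministically map the sequence identifier to the same subset of
        # sink neurons on every repetition of that sequence.
        g = torch.Generator().manual_seed(int(seq_id))
        idx = torch.randperm(self.n_mem, generator=g)[: self.k_active]
        mask = torch.zeros(self.n_mem)
        mask[idx] = 1.0
        # Shared generalization neurons are always active.
        return torch.cat([torch.ones(self.n_gen), mask]).to(device)

    def forward(self, x, seq_id):
        h = torch.relu(self.up(x))
        h = h * self.mem_mask(seq_id, x.device)  # gate sequence-specific sink neurons
        return self.down(h)

# Illustrative usage (shapes are arbitrary):
# mlp = MemSinkMLP(d_model=512, n_gen=1536, n_mem=512, k_active=32)
# y = mlp(torch.randn(4, 128, 512), seq_id=12345)

Under this sketch, post-hoc removal amounts to zeroing the weights attached to the memorization block of hidden units, leaving the shared generalization neurons untouched; this is the isolation-by-design property the abstract argues for, as opposed to locating memorization neurons after the fact.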

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-ghosal25a,
  title     = {Memorization Sinks: Isolating Memorization during {LLM} Training},
  author    = {Ghosal, Gaurav Rohit and Maini, Pratyush and Raghunathan, Aditi},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {19307--19326},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/ghosal25a/ghosal25a.pdf},
  url       = {https://proceedings.mlr.press/v267/ghosal25a.html},
  abstract  = {Large language models are susceptible to memorizing repeated sequences, posing privacy and copyright concerns. A popular mitigation strategy is to remove memorized information from specific neurons post-hoc. However, such approaches have shown limited success so far. In a controlled setting, we show that the memorization of natural sequences (those that resemble linguistically plausible text) becomes mechanistically entangled with general language abilities, thereby becoming challenging to remove post-hoc. In this work, we put forward a new paradigm of MemSinks that promotes isolation of memorization by design. We leverage a sequence identifier to activate a unique set of memorization neurons for each sequence across repetitions. By analyzing the dynamics of learning and forgetting, we argue that MemSinks facilitates clean isolation of memorized content, making it easier to remove without compromising general language capabilities. We implement MemSinks at the billion-parameter and billion-token scale, and observe both effective isolation and strong generalization. To our knowledge, this is the first proof-of-concept on real data demonstrating that simultaneous generalization and isolation is achievable. We open-source our code at https://github.com/grghosal/MemSinks.}
}
Endnote
%0 Conference Paper
%T Memorization Sinks: Isolating Memorization during LLM Training
%A Gaurav Rohit Ghosal
%A Pratyush Maini
%A Aditi Raghunathan
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-ghosal25a
%I PMLR
%P 19307--19326
%U https://proceedings.mlr.press/v267/ghosal25a.html
%V 267
%X Large language models are susceptible to memorizing repeated sequences, posing privacy and copyright concerns. A popular mitigation strategy is to remove memorized information from specific neurons post-hoc. However, such approaches have shown limited success so far. In a controlled setting, we show that the memorization of natural sequences (those that resemble linguistically plausible text) becomes mechanistically entangled with general language abilities, thereby becoming challenging to remove post-hoc. In this work, we put forward a new paradigm of MemSinks that promotes isolation of memorization by design. We leverage a sequence identifier to activate a unique set of memorization neurons for each sequence across repetitions. By analyzing the dynamics of learning and forgetting, we argue that MemSinks facilitates clean isolation of memorized content, making it easier to remove without compromising general language capabilities. We implement MemSinks at the billion-parameter and billion-token scale, and observe both effective isolation and strong generalization. To our knowledge, this is the first proof-of-concept on real data demonstrating that simultaneous generalization and isolation is achievable. We open-source our code at https://github.com/grghosal/MemSinks.
APA
Ghosal, G.R., Maini, P. & Raghunathan, A. (2025). Memorization Sinks: Isolating Memorization during LLM Training. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:19307-19326. Available from https://proceedings.mlr.press/v267/ghosal25a.html.
