Larimar: Large Language Models with Episodic Memory Control

Payel Das, Subhajit Chaudhury, Elliot Nelson, Igor Melnyk, Sarathkrishna Swaminathan, Sihui Dai, Aurelie Lozano, Georgios Kollias, Vijil Chenthamarakshan, Jiri Navratil, Soham Dan, Pin-Yu Chen
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:10109-10126, 2024.

Abstract

Efficient and accurate updating of knowledge stored in Large Language Models (LLMs) is one of the most pressing research challenges today. This paper presents Larimar, a novel, brain-inspired architecture for enhancing LLMs with a distributed episodic memory. Larimar's memory allows for dynamic, one-shot updates of knowledge without the need for computationally expensive re-training or fine-tuning. Experimental results on multiple fact editing benchmarks demonstrate that Larimar not only attains accuracy comparable to the most competitive baselines, even in the challenging sequential editing setup, but also excels in speed, yielding speed-ups of 8-10x depending on the base LLM, as well as flexibility, since the proposed architecture is simple, LLM-agnostic, and hence general. We further provide mechanisms for selective fact forgetting, information leakage prevention, and input context length generalization with Larimar and show their effectiveness. Our code is available at https://github.com/IBM/larimar.

Cite this Paper
BibTeX
@InProceedings{pmlr-v235-das24a,
  title     = {Larimar: Large Language Models with Episodic Memory Control},
  author    = {Das, Payel and Chaudhury, Subhajit and Nelson, Elliot and Melnyk, Igor and Swaminathan, Sarathkrishna and Dai, Sihui and Lozano, Aurelie and Kollias, Georgios and Chenthamarakshan, Vijil and Navratil, Jiri and Dan, Soham and Chen, Pin-Yu},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {10109--10126},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/das24a/das24a.pdf},
  url       = {https://proceedings.mlr.press/v235/das24a.html},
  abstract  = {Efficient and accurate updating of knowledge stored in Large Language Models (LLMs) is one of the most pressing research challenges today. This paper presents Larimar - a novel, brain-inspired architecture for enhancing LLMs with a distributed episodic memory. Larimar's memory allows for dynamic, one-shot updates of knowledge without the need for computationally expensive re-training or fine-tuning. Experimental results on multiple fact editing benchmarks demonstrate that Larimar attains accuracy comparable to most competitive baselines, even in the challenging sequential editing setup, but also excels in speed - yielding speed-ups of 8-10x depending on the base LLM - as well as flexibility due to the proposed architecture being simple, LLM-agnostic, and hence general. We further provide mechanisms for selective fact forgetting, information leakage prevention, and input context length generalization with Larimar and show their effectiveness. Our code is available at https://github.com/IBM/larimar.}
}
Endnote
%0 Conference Paper
%T Larimar: Large Language Models with Episodic Memory Control
%A Payel Das
%A Subhajit Chaudhury
%A Elliot Nelson
%A Igor Melnyk
%A Sarathkrishna Swaminathan
%A Sihui Dai
%A Aurelie Lozano
%A Georgios Kollias
%A Vijil Chenthamarakshan
%A Jiri Navratil
%A Soham Dan
%A Pin-Yu Chen
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-das24a
%I PMLR
%P 10109--10126
%U https://proceedings.mlr.press/v235/das24a.html
%V 235
%X Efficient and accurate updating of knowledge stored in Large Language Models (LLMs) is one of the most pressing research challenges today. This paper presents Larimar - a novel, brain-inspired architecture for enhancing LLMs with a distributed episodic memory. Larimar's memory allows for dynamic, one-shot updates of knowledge without the need for computationally expensive re-training or fine-tuning. Experimental results on multiple fact editing benchmarks demonstrate that Larimar attains accuracy comparable to most competitive baselines, even in the challenging sequential editing setup, but also excels in speed - yielding speed-ups of 8-10x depending on the base LLM - as well as flexibility due to the proposed architecture being simple, LLM-agnostic, and hence general. We further provide mechanisms for selective fact forgetting, information leakage prevention, and input context length generalization with Larimar and show their effectiveness. Our code is available at https://github.com/IBM/larimar.
APA
Das, P., Chaudhury, S., Nelson, E., Melnyk, I., Swaminathan, S., Dai, S., Lozano, A., Kollias, G., Chenthamarakshan, V., Navratil, J., Dan, S., & Chen, P. (2024). Larimar: Large Language Models with Episodic Memory Control. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:10109-10126. Available from https://proceedings.mlr.press/v235/das24a.html.