Agent-Memory Protocol: A Privacy-Focused Protocol for LLM Agents and User Memory Interaction

Junde Wu, Minhao Hu, Jiayuan Zhu, Jiaye Wang, Yueming Jin
Proceedings of The Second AAAI Bridge Program on AI for Medicine and Healthcare, PMLR 317:293-301, 2026.

Abstract

Large Language Model (LLM) based agents are rapidly migrating from stateless chat systems to persistent, longitudinal assistants. Nowhere is this transition more sensitive than in fields like medicine and finance, where context accumulates across months of interactions containing personally identifiable or confidential information. Existing memory and retrieval architectures of agents focus on efficiency or reasoning quality but remain indifferent to privacy: prompts, retrieval results, and cached traces leak explicit identifiers to third-party infrastructure. We introduce the Agent-Memory Protocol (AMP), a privacy-first protocol for LLM agents and user memory interaction. AMP enforces confidentiality at the boundary where language meets computation. It defines three deterministic operations (redact at rest, pack for purpose, and hydrate on return) that together guarantee that no personal identifier ever leaves the user boundary while maintaining the reasoning utility of long-term memory. We formalize the protocol and illustrate its operation in multi-turn use cases spanning medicine and finance, including radiology follow-up, discharge planning, and portfolio management.
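The three operations named in the abstract can be pictured with a minimal sketch. Note this is purely illustrative: the paper's actual interfaces are not reproduced here, so every function name, placeholder format, and data structure below is an assumption, not AMP's specification.

```python
# Hypothetical sketch of AMP-style operations. All names (redact, pack,
# hydrate, <PID_i> placeholders) are illustrative assumptions.

def redact(text, identifiers):
    """Redact at rest: replace personal identifiers with opaque
    placeholders before anything is stored or sent past the user
    boundary. Returns the redacted text plus a local-only mapping."""
    mapping = {}
    for i, ident in enumerate(identifiers):
        token = f"<PID_{i}>"
        mapping[token] = ident
        text = text.replace(ident, token)
    return text, mapping

def pack(snippets, purpose):
    """Pack for purpose: bundle only the redacted memory snippets
    relevant to the current task into a purpose-scoped payload."""
    return {"purpose": purpose, "context": list(snippets)}

def hydrate(response, mapping):
    """Hydrate on return: re-substitute the real identifiers into the
    model's response locally, inside the user boundary."""
    for token, ident in mapping.items():
        response = response.replace(token, ident)
    return response

# Example round trip: identifiers never leave the user boundary.
redacted, pid_map = redact("Follow up with John Doe about the CT scan.",
                           ["John Doe"])
payload = pack([redacted], purpose="radiology follow-up")
reply = hydrate("Schedule <PID_0> for a follow-up CT in 3 months.", pid_map)
```

Here `redact` and `hydrate` are deterministic inverses over the placeholder vocabulary, which is what lets the remote model reason over stable tokens while the raw identifiers stay local.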

Cite this Paper


BibTeX
@InProceedings{pmlr-v317-wu26a, title = {Agent-Memory Protocol: A Privacy-Focused Protocol for LLM Agents and User Memory Interaction}, author = {Wu, Junde and Hu, Minhao and Zhu, Jiayuan and Wang, Jiaye and Jin, Yueming}, booktitle = {Proceedings of The Second AAAI Bridge Program on AI for Medicine and Healthcare}, pages = {293--301}, year = {2026}, editor = {Wu, Junde and Pan, Jiazhen and Zhu, Jiayuan and Luo, Luyang and Li, Yitong and Xu, Min and Jin, Yueming and Rueckert, Daniel}, volume = {317}, series = {Proceedings of Machine Learning Research}, month = {20--21 Jan}, publisher = {PMLR}, pdf = {https://raw.githubusercontent.com/mlresearch/v317/main/assets/wu26a/wu26a.pdf}, url = {https://proceedings.mlr.press/v317/wu26a.html}, abstract = {Large-Language Model (LLM) based agents are rapidly migrating from stateless chat systems to persistent, longitudinal assistants. Nowhere is this transition more sensitive than in fields like medicine and finance, where context accumulates across months of interactions containing personally identifiable or confidential information. Existing memory and retrieval architectures of agents focus on efficiency or reasoning quality but remain indifferent to privacy: prompts, retrieval results, and cached traces leak explicit identifiers to third-party infrastructure. We introduce the Agent-Memory Protocol (AMP), a privacy-first protocol for LLM Agents and User Memory Interaction. AMP enforces confidentiality at the boundary where language meets computation. It defines three deterministic operations—redact at rest, pack for purpose, and hydrate on return, that together guarantee that no personal identifier ever leaves the user boundary while maintaining the reasoning utility of long-term memory. We formalize its protocol and illustrate its operation in multi-turn use-cases spanning medicine and finance, including radiology follow-up, discharge planning, and portfolio management.} }
Endnote
%0 Conference Paper %T Agent-Memory Protocol: A Privacy-Focused Protocol for LLM Agents and User Memory Interaction %A Junde Wu %A Minhao Hu %A Jiayuan Zhu %A Jiaye Wang %A Yueming Jin %B Proceedings of The Second AAAI Bridge Program on AI for Medicine and Healthcare %C Proceedings of Machine Learning Research %D 2026 %E Junde Wu %E Jiazhen Pan %E Jiayuan Zhu %E Luyang Luo %E Yitong Li %E Min Xu %E Yueming Jin %E Daniel Rueckert %F pmlr-v317-wu26a %I PMLR %P 293--301 %U https://proceedings.mlr.press/v317/wu26a.html %V 317 %X Large-Language Model (LLM) based agents are rapidly migrating from stateless chat systems to persistent, longitudinal assistants. Nowhere is this transition more sensitive than in fields like medicine and finance, where context accumulates across months of interactions containing personally identifiable or confidential information. Existing memory and retrieval architectures of agents focus on efficiency or reasoning quality but remain indifferent to privacy: prompts, retrieval results, and cached traces leak explicit identifiers to third-party infrastructure. We introduce the Agent-Memory Protocol (AMP), a privacy-first protocol for LLM Agents and User Memory Interaction. AMP enforces confidentiality at the boundary where language meets computation. It defines three deterministic operations—redact at rest, pack for purpose, and hydrate on return, that together guarantee that no personal identifier ever leaves the user boundary while maintaining the reasoning utility of long-term memory. We formalize its protocol and illustrate its operation in multi-turn use-cases spanning medicine and finance, including radiology follow-up, discharge planning, and portfolio management.
APA
Wu, J., Hu, M., Zhu, J., Wang, J. & Jin, Y. (2026). Agent-Memory Protocol: A Privacy-Focused Protocol for LLM Agents and User Memory Interaction. Proceedings of The Second AAAI Bridge Program on AI for Medicine and Healthcare, in Proceedings of Machine Learning Research 317:293-301. Available from https://proceedings.mlr.press/v317/wu26a.html.

Related Material