Agent-Memory Protocol: A Privacy-Focused Protocol for LLM Agents and User Memory Interaction
Proceedings of The Second AAAI Bridge Program on AI for Medicine and Healthcare, PMLR 317:293-301, 2026.
Abstract
Large Language Model (LLM)-based agents are rapidly migrating from stateless chat systems to persistent, longitudinal assistants. Nowhere is this transition more sensitive than in fields like medicine and finance, where context accumulates across months of interactions containing personally identifiable or confidential information. Existing agent memory and retrieval architectures focus on efficiency or reasoning quality but remain indifferent to privacy: prompts, retrieval results, and cached traces leak explicit identifiers to third-party infrastructure. We introduce the Agent-Memory Protocol (AMP), a privacy-first protocol for LLM agent and user memory interaction. AMP enforces confidentiality at the boundary where language meets computation. It defines three deterministic operations (redact at rest, pack for purpose, and hydrate on return) that together guarantee that no personal identifier ever leaves the user boundary while preserving the reasoning utility of long-term memory. We formalize the protocol and illustrate its operation in multi-turn use cases spanning medicine and finance, including radiology follow-up, discharge planning, and portfolio management.
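The three operations named in the abstract can be sketched as follows. This is an illustrative assumption, not the paper's implementation: the function names, the placeholder-token scheme, and the keyword-based packing heuristic are all hypothetical stand-ins for AMP's actual mechanisms.

```python
# Hypothetical sketch of AMP's three deterministic operations.
# The placeholder scheme and keyword filter below are illustrative
# assumptions, not the protocol as specified in the paper.

def redact(text, identifiers):
    """Redact at rest: replace personal identifiers with opaque
    placeholders before any memory leaves the user boundary."""
    mapping = {}
    for i, ident in enumerate(identifiers):
        token = f"<PID_{i}>"
        mapping[token] = ident
        text = text.replace(ident, token)
    return text, mapping

def pack(memories, purpose_keywords):
    """Pack for purpose: forward only the memory entries relevant
    to the stated task, not the full longitudinal record."""
    return [m for m in memories if any(k in m for k in purpose_keywords)]

def hydrate(text, mapping):
    """Hydrate on return: restore real identifiers once the model's
    output is back inside the user boundary."""
    for token, ident in mapping.items():
        text = text.replace(token, ident)
    return text

# Example: a clinical note never exposes the patient's name to the model.
note, pid_map = redact("Jane Doe is due for a radiology follow-up.", ["Jane Doe"])
reply = f"Reminder for {note.split()[0]}: schedule the scan."  # model sees <PID_0>
print(hydrate(reply, pid_map))  # -> Reminder for Jane Doe: schedule the scan.
```

Under this sketch, the third-party model only ever observes placeholder tokens; the token-to-identifier mapping stays on the user side, which is the confidentiality boundary the abstract describes.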