An agent that remembers everything you have ever told it is not the dream — it is a creep. The right memory layer forgets on purpose, in ways that match how users expect a relationship to age. Here are the five forgetting strategies and how to combine them.
Why forgetting is a UX problem
Three failure modes when memory never shrinks:
- Stale facts win — last year's address overrides this week's.
- Embarrassing recall — the agent surfaces a long-ago throwaway comment.
- Privacy creep — the user no longer remembers what the agent knows about them.
All three lose users faster than feature gaps.
The five strategies
1. Time decay
Older memories get a multiplicative penalty in retrieval scoring. Old enough memories drop below the retrieval floor.
- Implementation: score = relevance * exp(-age_days / half_life).
- Tune: half-life of 60–180 days for general facts, 7–14 days for context.
- Pick when: general-purpose agents.
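A minimal sketch of the scoring above. One caveat on naming: exp(-age / half_life) treats half_life as an e-folding time; for an exact half-life you would use 0.5 ** (age / half_life). The floor value is illustrative.

```python
import math

RETRIEVAL_FLOOR = 0.05  # illustrative: below this, the memory is not returned at all

def decayed_score(relevance: float, age_days: float, half_life: float = 90.0) -> float:
    """score = relevance * exp(-age_days / half_life), per the formula above."""
    return relevance * math.exp(-age_days / half_life)

def retrievable(relevance: float, age_days: float, half_life: float = 90.0) -> bool:
    """Old enough memories fall below the retrieval floor and drop out."""
    return decayed_score(relevance, age_days, half_life) >= RETRIEVAL_FLOOR
```

With these numbers, a fully relevant memory crosses the 0.05 floor after roughly three half-life windows (e^-3 ≈ 0.0498), so around 270 days at the 90-day default.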
2. Relevance pruning
Memories that have not been retrieved in N sessions get archived (not deleted), then deleted after another N sessions.
- Implementation: track last-retrieved-at; nightly job archives the cold tail.
- Pick when: memory size matters more than perfect recall.
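A sketch of that nightly job, using wall-clock windows in place of session counts for simplicity; the record shape and thresholds are illustrative, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Memory:
    id: str
    last_retrieved_at: datetime
    archived: bool = False

def nightly_prune(memories: list, now: datetime,
                  archive_after: timedelta = timedelta(days=90),
                  delete_after: timedelta = timedelta(days=180)) -> list:
    """Archive the cold tail; hard-delete anything cold for another full window."""
    kept = []
    for m in memories:
        cold_for = now - m.last_retrieved_at
        if cold_for >= delete_after:
            continue                          # hard delete: drop it entirely
        m.archived = cold_for >= archive_after
        kept.append(m)
    return kept
```

Archiving first gives you a grace window: an archived memory can be restored if the user asks about it, a deleted one cannot.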
3. Contradiction collapse
When the user contradicts a stored fact, the old fact does not vanish — it gets a valid_to timestamp, and the new fact takes priority.
- Implementation: see semantic memory.
- Pick when: facts that change over time (address, role, preferences).
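A sketch of the valid_to mechanism: asserting a new fact closes out the old one rather than deleting it. The key/value record shape here is an assumption for illustration.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Fact:
    key: str                            # e.g. "user.address"
    value: str
    valid_from: datetime
    valid_to: Optional[datetime] = None  # None = currently valid

def assert_fact(store: list, new: Fact) -> None:
    """Close out any open fact with the same key; the new fact takes priority."""
    for f in store:
        if f.key == new.key and f.valid_to is None:
            f.valid_to = new.valid_from
    store.append(new)

def current_fact(store: list, key: str) -> Optional[Fact]:
    """Only facts with no valid_to are current."""
    return next((f for f in store if f.key == key and f.valid_to is None), None)
```

The superseded fact stays queryable by timestamp, which is what lets the agent say "you told me Oslo before Berlin" instead of silently losing history.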
4. Consent-driven forgetting
The user can ask "forget what I said about X" or "forget the last hour". The agent honours the request.
- Implementation: a tool the agent exposes to the user; cascading delete by topic or session.
- Pick when: consumer products; trust matters.
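One way to sketch the forget tool: memories carry a topic tag and a session id, and the tool cascades a hard delete over topic, session, or time window. The record schema is an assumption for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class Record:
    id: str
    topic: str
    session_id: str
    created_at: datetime

def forget(store: list, *, topic: Optional[str] = None,
           session_id: Optional[str] = None,
           since: Optional[datetime] = None) -> list:
    """Hard-delete every memory matching the user's request; returns the survivors."""
    def doomed(m: Record) -> bool:
        return ((topic is not None and m.topic == topic)
                or (session_id is not None and m.session_id == session_id)
                or (since is not None and m.created_at >= since))
    return [m for m in store if not doomed(m)]
```

"Forget the last hour" maps to the since parameter; "forget what I said about X" maps to topic.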
5. Importance-weighted retention
On write, score the memory's likely future utility. High scores stay; low scores have a short TTL.
- Implementation: a small classifier on memory content; ttl = base_ttl * importance_score.
- Pick when: chat-heavy agents with lots of throwaway content.
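A sketch of that write path, with a toy keyword heuristic standing in for the real classifier; the marker lists and score values are illustrative only.

```python
THROWAWAY_MARKERS = ("btw", "anyway", "lol", "nevermind")
DURABLE_MARKERS = ("always", "never", "prefer")

def importance_score(text: str) -> float:
    """Toy stand-in for a small classifier: throwaway chatter scores low."""
    t = text.lower()
    if any(marker in t for marker in THROWAWAY_MARKERS):
        return 0.2
    if any(marker in t for marker in DURABLE_MARKERS):
        return 0.9
    return 0.5

def ttl_days(text: str, base_ttl: float = 30.0, floor: float = 1.0) -> float:
    """ttl = base_ttl * importance_score, clamped to a minimum retention."""
    return max(floor, base_ttl * importance_score(text))
```

Scoring at write time is cheap because it runs once per memory, not once per retrieval.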
Combining strategies
Most production memory systems combine three of the five:
- Time decay as the default.
- Importance-weighted at write time.
- Consent-driven as the user override.
Contradiction collapse is mandatory for any memory layer that stores facts at all. Relevance pruning is the cleanup pass.
The retention policy
A sample policy worth writing down:
- General memory: time decay, half-life 90 days, hard delete after 2 years.
- Conversational throwaway: importance-weighted, TTL 7–30 days.
- Decisions and commitments: kept indefinitely, but contradiction-collapsed.
- PII: consent-driven only, no auto-decay (the user must keep or delete).
- Audit log: separate retention; not subject to forgetting.
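Writing the policy down as data lets the forgetting jobs read it instead of hard-coding it. The category names and field shapes below are illustrative, not a standard schema.

```python
RETENTION_POLICY = {
    "general":   {"strategy": "time_decay", "half_life_days": 90, "hard_delete_days": 730},
    "throwaway": {"strategy": "importance_ttl", "ttl_days_min": 7, "ttl_days_max": 30},
    "decision":  {"strategy": "contradiction_collapse", "hard_delete_days": None},
    "pii":       {"strategy": "consent_only", "auto_decay": False},
    "audit_log": {"strategy": "exempt"},
}

def policy_for(category: str) -> dict:
    """Unknown categories fall back to the most conservative bucket: consent-only."""
    return RETENTION_POLICY.get(category, RETENTION_POLICY["pii"])
```

The conservative fallback matters: a new memory category added without a policy decision should default to the strictest handling, not the loosest.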
This becomes part of your privacy policy and your DPIA. See GDPR-compliant agents.
The user-visible memory page
The single highest-leverage UX decision: a page where the user sees what the agent remembers, organised by recency and topic, with delete buttons.
What I remember about you:
• You prefer concise answers (added 30 days ago)
• You manage 3 dogs (added 4 months ago)
• You travel monthly to Berlin (last referenced today)
[Forget all] [Forget by topic] [Export]
Visible memory is a feature; invisible memory is a liability.
What does NOT count as forgetting
- Soft deletion that retrieval ignores but storage keeps — fails GDPR.
- Just deleting from one index — must cascade across vector, semantic, episodic.
- "Pretend to forget" — model still has the context; user notices.
True forgetting is hard delete from every layer the memory touched.
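A sketch of that cascade, with each layer reduced to a dict-like store; real systems would call each index's own delete API, and the verification step exists because a real layer can silently fail where a dict cannot.

```python
class ForgetError(Exception):
    """Raised when any layer still holds the memory after the cascade."""

def hard_delete(memory_id: str, layers: dict) -> None:
    """True forgetting: remove the memory from every layer it touched."""
    for store in layers.values():
        store.pop(memory_id, None)        # idempotent: already-absent is fine
    # Verify the cascade actually took; partial deletion is not forgetting.
    survivors = [name for name, store in layers.items() if memory_id in store]
    if survivors:
        raise ForgetError(f"{memory_id} still present in: {survivors}")
```

The point of the single entry point is that nothing else in the codebase is allowed to delete a memory from just one index.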
Common mistakes
- No decay at all — memory grows linearly, retrieval quality degrades.
- Too aggressive decay — agent feels amnesiac; users complain.
- No user control — they ask to forget; you cannot.
- Collapsing contradictions silently — old fact disappears; user wonders if they imagined saying it.
Where this is heading
Two trends by 2027: standardised "memory dashboard" UI primitives that ship in agent SDKs, and regulatory pressure for explicit retention policies on memory. Build the policy and the dashboard now; swap implementations later.