Conversation Summarization & Decay
Category: Lifecycle
Problem
Raw conversation logs are verbose and repetitive. Storing every message adds noise to recall results and causes unbounded memory growth. You need a strategy that consolidates conversations into concise summaries while letting irrelevant details fade over time.
Architecture
Combine Dakera's autopilot consolidation with memory decay. The autopilot periodically consolidates related memories into summaries, while decay strategies reduce the importance score of memories that are not reinforced, eventually dropping them from recall results.
Flow
- Store conversation messages with low-to-medium importance
- Store key decisions and outcomes with high importance
- Configure decay to fade raw messages over time
- Autopilot consolidates related messages into summaries
- Summaries inherit higher importance and persist long-term
Implementation
from dakera import Dakera

client = Dakera(base_url="http://localhost:3300", api_key="dk-...")

# Store raw conversation exchanges with low importance (will decay)
client.memory.store(
    content="User asked about pricing tiers for the enterprise plan",
    namespace="support-user-123",
    metadata={"type": "conversation", "importance": 0.3},
)
client.memory.store(
    content="Explained that enterprise starts at $499/mo with custom SLA",
    namespace="support-user-123",
    metadata={"type": "conversation", "importance": 0.3},
)

# Store the decision/outcome with high importance (persists)
client.memory.store(
    content="User chose enterprise plan at $499/mo. Contract signed for 12 months.",
    namespace="support-user-123",
    metadata={"type": "decision", "importance": 0.95},
)
# Configure exponential decay for this namespace
# via REST API:
# POST /v1/decay/config
# {
# "namespace": "support-user-123",
# "strategy": "exponential",
# "half_life_days": 30,
# "min_importance": 0.1
# }
# Memories with importance below min_importance after decay are removed from recall.
# High-importance memories (decisions) decay much slower.
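With the exponential config above (half_life_days=30, min_importance=0.1), you can estimate when a memory drops out of recall. A quick sketch in plain Python using the standard half-life formula (Dakera's exact decay curve may differ):

```python
import math

def decayed_importance(initial: float, days: float, half_life_days: float = 30) -> float:
    """Exponential decay: importance halves every half_life_days."""
    return initial * 0.5 ** (days / half_life_days)

def days_until_removed(initial: float, min_importance: float = 0.1,
                       half_life_days: float = 30) -> float:
    """Days until decayed importance crosses the removal threshold."""
    return half_life_days * math.log2(initial / min_importance)

# A raw message stored at importance 0.3 crosses the 0.1 threshold
# after roughly a month and a half
print(round(days_until_removed(0.3), 1))   # ≈ 47.5 days
# A decision stored at 0.95 persists about twice as long
print(round(days_until_removed(0.95), 1))  # ≈ 97.4 days
```

This is why storing decisions at 0.95 matters: under the same half-life, they simply take far longer to reach the removal threshold.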
Decay Strategies
# Available decay strategies:
# 1. exponential — memories decay by factor e^(-lambda*t)
# 2. linear — constant rate decrease over time
# 3. logarithmic — fast initial decay, slows over time
# 4. step — importance drops at fixed intervals
# 5. periodic — cyclical decay (useful for seasonal patterns)
# 6. custom — user-defined decay function
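To compare the shapes of the first three strategies, here are illustrative curves in plain Python. The formulas and parameter values are assumptions chosen to show the qualitative behavior, not Dakera's internal implementation:

```python
import math

def exponential(i0: float, t: float, half_life: float = 30) -> float:
    # Halves every half_life days; never quite reaches zero
    return i0 * 0.5 ** (t / half_life)

def linear(i0: float, t: float, rate: float = 0.002) -> float:
    # Constant decrease per day, floored at zero
    return max(i0 - rate * t, 0.0)

def logarithmic(i0: float, t: float, base_rate: float = 0.05) -> float:
    # Fast initial drop that flattens out over time
    return max(i0 - base_rate * math.log1p(t), 0.0)

# Compare how a 0.3-importance memory fades under each strategy
for t in (0, 7, 30, 90):
    print(t, round(exponential(0.3, t), 3),
          round(linear(0.3, t), 3),
          round(logarithmic(0.3, t), 3))
```

Note how the logarithmic curve loses most of its value in the first week, then flattens, while exponential decay is smoother throughout.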
# Example: configure logarithmic decay
curl -X POST http://localhost:3300/v1/decay/config \
  -H "Authorization: Bearer dk-..." \
  -H "Content-Type: application/json" \
  -d '{
    "namespace": "support-user-123",
    "strategy": "logarithmic",
    "params": {
      "base_rate": 0.1,
      "min_importance": 0.05
    }
  }'
When to Use This Pattern
- Customer support systems with high conversation volume
- Long-running projects where daily details become irrelevant
- Any system where memory growth needs to be bounded
- Applications mimicking human memory (remember outcomes, forget specifics)
Key Considerations
- Always store decisions and outcomes with high importance — they should outlive conversation details
- Choose a decay strategy that matches your use case: exponential suits most applications; logarithmic suits cases where only the most recent context matters, since it fades memories fastest at first
- Autopilot consolidation runs asynchronously and does not block store/recall operations
- Memories recalled frequently get their importance reinforced, resisting decay
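The interplay between the last point and decay can be illustrated with a toy model in plain Python. The boost factor and mechanics here are assumptions for illustration only, not Dakera's actual reinforcement algorithm:

```python
def simulate(initial: float, days: int, recalls_per_week: int,
             half_life: float = 30, boost: float = 1.1,
             cap: float = 1.0) -> float:
    """Toy model: daily exponential decay, multiplicative boost on each recall."""
    importance = initial
    daily_factor = 0.5 ** (1 / half_life)
    for day in range(days):
        importance *= daily_factor
        # Assume recalls are spread across the first days of each week
        if recalls_per_week and day % 7 < recalls_per_week:
            importance = min(importance * boost, cap)
    return importance

# Never recalled: fades well below a 0.1 removal threshold in 90 days
print(round(simulate(0.3, 90, recalls_per_week=0), 3))
# Recalled a few times a week: reinforcement outpaces decay
print(round(simulate(0.3, 90, recalls_per_week=3), 3))
```

The takeaway: even a modest per-recall boost can dominate a 30-day half-life, which is why frequently accessed memories effectively stop decaying.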