The problem
Multi-agent frameworks like CrewAI, AutoGen, and LangGraph orchestrate teams of specialized agents — a researcher, a writer, a reviewer, a planner. Each agent has a defined role and communicates with the others through message passing. The frameworks handle the orchestration well. What they don't handle is memory.
Every time a crew runs, each agent starts with a blank slate. The research agent discovers that a competitor launched new pricing — but on the next run, it discovers the same thing again. The planning agent proposes a strategy that was already tried and rejected last week. The writing agent drafts content without knowing what the research agent found in a previous session.
The result is redundant work, inconsistent outputs, and an inability to accumulate knowledge over time. Agent teams can't learn from their own history.
How Dakera solves it
Dakera provides a shared memory infrastructure that multi-agent teams can read from and write to, with isolation where needed and sharing where useful.
- Each agent gets its own agent_id, giving it a clean namespace for its private context. The research agent's findings are stored under its agent_id; the writing agent's drafts are stored under its own.
- Shared namespaces let agents read each other's memories. A writing agent can query the researcher's memories to find relevant findings. Namespaces are configured per-deployment — you control which agents can read from which namespaces.
- Sessions group related work. A single research sprint — multiple searches, multiple findings — is grouped under one session. When the writing agent recalls research, it can scope the query to a specific session or search across all sessions.
- Cross-agent recall: agent B can query memories from agent A's namespace. The recall API accepts any agent_id, not just the caller's. A reviewer agent can pull the writer's draft and the researcher's sources in a single query.
- Knowledge graph connects entities discovered by different agents. The research agent finds "Competitor X" in a pricing analysis. The writing agent references "Competitor X" in a blog draft. Dakera's entity extraction links both memories through the shared entity, enabling graph traversal across agent boundaries.
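The entity-linking idea above can be sketched with a toy in-memory model (plain Python illustrating the concept, not Dakera's actual extraction or storage): each memory is tagged with the entities it mentions, and a shared entity becomes the bridge for traversal across agent boundaries.

```python
from collections import defaultdict

# Toy model: a flat memory list plus an entity index.
# In Dakera, entity extraction is automatic; here we tag entities by hand.
memories = []                      # list of (agent_id, content, entities)
entity_index = defaultdict(list)   # entity -> indices into `memories`

def store(agent_id, content, entities):
    idx = len(memories)
    memories.append((agent_id, content, entities))
    for entity in entities:
        entity_index[entity].append(idx)

# Two different agents mention the same entity.
store("researcher", "Competitor X launched a $49/mo SMB tier.", ["Competitor X"])
store("writer", "Blog draft comparing our pricing to Competitor X.", ["Competitor X"])

# Graph traversal across agent boundaries: follow the shared entity.
linked = [memories[i] for i in entity_index["Competitor X"]]
agents_involved = {agent_id for agent_id, _, _ in linked}
print(agents_involved)  # memories from both agents are reachable via one entity
```

The design point is that the entity, not the agent, is the join key: no agent needs to know in advance which other agents have touched "Competitor X".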
Implementation
Here's a typical CrewAI-style pattern where multiple agents share memory through Dakera:
from dakera import DakeraClient

client = DakeraClient(base_url="http://localhost:3300", api_key="dk-...")

# Research agent stores a finding
client.store(
    agent_id="researcher",
    content="Found that competitor X launched a new pricing tier at $49/mo targeting SMBs. Source: TechCrunch article from May 2026.",
    importance=0.9,
    tags=["research", "competitor-analysis", "pricing"],
)

# Writing agent recalls research findings
memories = client.recall(
    agent_id="researcher",  # query another agent's memories
    query="competitor pricing changes for SMB market",
    top_k=10,
)

# Writing agent stores its own output
client.store(
    agent_id="writer",
    content="Draft blog post comparing our pricing to competitor X. Key angle: we offer self-hosted at zero recurring cost vs their $49/mo SaaS.",
    importance=0.7,
    tags=["writing", "draft", "pricing-comparison"],
)
Session-scoped collaboration
When agents collaborate on a specific task, sessions keep the context tight:
# Start a session for this research sprint
session_id = client.session_start(agent_id="researcher")

# All memories stored in this session are grouped
client.store(
    agent_id="researcher",
    content="SMB market analysis: 3 competitors increased pricing in Q1 2026.",
    importance=0.85,
    tags=["research", "market-analysis"],
)

# End the session when research is complete
client.session_end(session_id=session_id)
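The session mechanics can be illustrated with a small in-memory sketch (assumed semantics, not Dakera's implementation): a session is just a shared group key attached to memories, and recall can either filter on it or ignore it.

```python
import itertools

# Toy session model: monotonically numbered session IDs and a flat memory list.
_session_counter = itertools.count(1)
_memories = []  # list of (agent_id, session_id, content)

def session_start(agent_id):
    return f"sess-{next(_session_counter)}"

def store(agent_id, session_id, content):
    _memories.append((agent_id, session_id, content))

def recall(agent_id, session_id=None):
    # Scope to one session, or search across all sessions when None.
    return [c for a, s, c in _memories
            if a == agent_id and (session_id is None or s == session_id)]

sprint = session_start("researcher")
store("researcher", sprint, "3 competitors raised prices in Q1 2026.")
store("researcher", "sess-old", "Last month's pricing snapshot.")

print(recall("researcher", session_id=sprint))  # only this sprint's finding
print(len(recall("researcher")))                # across all sessions: 2
```

Scoping recall to a session keeps a collaborating team focused on the current sprint without losing access to older context when the broader search is wanted.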
Multi-agent isolation: Each agent_id is a first-class namespace in Dakera. Agents can only write to their own namespace, but can read from any namespace they have access to. This prevents accidental overwrites while enabling full cross-agent recall.
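That write-own, read-any-permitted rule can be sketched in a few lines (toy code, not Dakera's enforcement layer; the ACL shape is an assumption): writes are checked against the caller's identity, while reads are checked against a per-caller set of readable namespaces.

```python
class MemoryStore:
    """Toy namespace model: agents write only to their own namespace,
    but may read any namespace their ACL grants."""

    def __init__(self, read_acl):
        self._data = {}            # agent_id -> list of memories
        self._read_acl = read_acl  # caller -> set of readable agent_ids

    def store(self, caller, agent_id, content):
        if caller != agent_id:
            raise PermissionError("agents may only write to their own namespace")
        self._data.setdefault(agent_id, []).append(content)

    def recall(self, caller, agent_id):
        if agent_id not in self._read_acl.get(caller, set()):
            raise PermissionError("no read access to that namespace")
        return list(self._data.get(agent_id, []))

# Writer may read the researcher's namespace; the reverse is not granted.
acl = {"writer": {"writer", "researcher"}, "researcher": {"researcher"}}
store = MemoryStore(acl)
store.store("researcher", "researcher", "Competitor X pricing finding")
found = store.recall("writer", "researcher")  # cross-agent read allowed
```

Keeping the write check structural (caller must equal namespace owner) rather than configurable is what rules out accidental overwrites entirely.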
Why this matters at scale
For one-off agent runs, memory doesn't matter much. But production multi-agent systems run repeatedly — daily research workflows, weekly content generation, continuous monitoring. Without persistent memory, every run is independent. With Dakera, each run builds on the last.
A research agent that ran last Tuesday found three relevant articles. This Tuesday, it doesn't need to find them again — they're already in memory. It can focus on finding new information and updating what changed. The cumulative effect is that agent teams get more efficient over time, not just more capable per-run.
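One way to get that incremental behavior is a recall-before-store check. The sketch below uses exact normalized matching for simplicity; a real agent would instead query memory for semantically similar findings and compare scores against a threshold (both the helper and the threshold idea are illustrative assumptions).

```python
def normalize(text):
    """Crude canonical form: lowercase, collapse whitespace."""
    return " ".join(text.lower().split())

known = set()  # normalized contents already persisted in earlier runs

def store_if_new(content):
    """Persist only findings we haven't seen; return True if stored."""
    key = normalize(content)
    if key in known:
        return False  # already known from a previous run: skip the work
    known.add(key)
    return True

first = store_if_new("Competitor X launched a $49/mo SMB tier.")
second = store_if_new("competitor x launched a  $49/mo SMB tier.")  # duplicate
```

With this gate in place, repeated runs spend their budget on what changed since last time instead of rediscovering what's already in memory.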
Deploy persistent memory for your agents
Self-hosted, no external API dependencies, production-ready. Add shared memory to your agent team in under 10 minutes.