
Dakera vs Letta (formerly MemGPT)

Letta (MemGPT) and Dakera approach agent memory from fundamentally different angles. Letta uses LLMs to actively manage memory — the model decides what to remember and forget. Dakera is a dedicated retrieval engine with deterministic memory operations. One uses AI to manage memory; the other is infrastructure that AI agents call into.

Feature Comparison

| Feature | Dakera | Letta (MemGPT) |
| --- | --- | --- |
| Category | Memory Retrieval Engine | Agent Framework with Memory |
| Memory Management | Deterministic (algorithmic decay, scoring) | LLM-powered (model decides what to store/forget) |
| Language | Rust (single binary) | Python |
| Retrieval | Hybrid HNSW + BM25, RRF, cross-encoder reranking | LLM-directed search over archival memory |
| Memory Tiers | Flat store with decay + importance scoring | Core memory (system prompt) + archival (vector) + recall (conversation) |
| Context Management | Not applicable (stores/retrieves memories) | Virtual context management (OS-like paging) |
| Agent Framework | No (memory infrastructure only) | Yes (full agent with tools, personas, memory) |
| LLM Dependency | None for core ops (embeddings are local ONNX) | Requires an LLM for all memory operations |
| Knowledge Graph | Entity extraction (GLiNER), BFS traversal | Not built-in |
| Memory Decay | 6 configurable strategies | LLM-decided (non-deterministic) |
| MCP Tools | 83 tools | Not available (own tool system) |
| SDKs | Python, TypeScript, Go, Rust | Python |
| Cost per Query | ~$0 (local inference only) | LLM API cost per memory operation |
| License | MIT SDKs, proprietary server | Apache 2.0 |
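
The retrieval row above mentions Reciprocal Rank Fusion (RRF), the standard technique for merging a lexical (BM25) ranking with a vector (HNSW) ranking. Here is a minimal sketch of RRF itself, not of Dakera's internals; the doc IDs are made up, and k=60 is the conventional default from the original RRF paper:

```python
# Reciprocal Rank Fusion: merge multiple rankings into one by summing
# 1/(k + rank) for each document across all rankings.
def rrf(rankings: list[list[str]], k: int = 60) -> list[tuple[str, float]]:
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

bm25_hits = ["m42", "m7", "m19"]      # lexical ranking
vector_hits = ["m7", "m42", "m3"]     # HNSW ranking
print(rrf([bm25_hits, vector_hits]))  # m7 and m42 surface first
```

Memories that rank well in both lists accumulate the highest fused score, which is why hybrid retrieval is robust when either ranker alone misses a match.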

Architecture Differences

Dakera

A memory storage and retrieval engine. Your agent stores memories via API, and retrieves them via hybrid search with reranking. Dakera does not make decisions about what to remember — your agent does. Memory decay is algorithmic and deterministic: configurable strategies (time-based, access-count, importance scoring) manage memory lifecycle predictably. The engine runs entirely on local inference (ONNX) with no LLM calls.
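
To make "algorithmic and deterministic" concrete, here is a sketch of what a time-based decay strategy with importance weighting can look like. The function name and half-life parameter are hypothetical illustrations, not Dakera's actual configuration surface:

```python
import time

# Hypothetical time-based decay with importance weighting: the score is
# computed from stored fields alone, so the same inputs always yield the
# same result (no LLM involved).
def retention_score(importance: float, last_access_ts: float,
                    half_life_days: float = 30.0) -> float:
    age_days = (time.time() - last_access_ts) / 86400
    decay = 0.5 ** (age_days / half_life_days)  # halves every half_life_days
    return importance * decay

# A memory last touched 60 days ago, with a 30-day half-life, keeps 25%
# of its original weight: 0.8 * 0.25 = 0.2.
score = retention_score(importance=0.8, last_access_ts=time.time() - 60 * 86400)
print(round(score, 3))  # 0.2
```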

Letta (MemGPT)

An agent framework that treats memory like an operating system. Inspired by virtual memory in OS design, Letta uses an LLM to actively manage what goes into "core memory" (the system prompt), "archival memory" (long-term vector storage), and "recall memory" (recent conversation history). The LLM decides when to page information in and out of context. This creates a self-managing memory system, but every memory operation costs an LLM API call and is inherently non-deterministic.
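
Schematically, the MemGPT pattern looks like the loop below. This is a hypothetical sketch of the pattern, not Letta's actual API; the point is that each paging decision is itself an LLM call, which is where the per-operation cost, latency, and non-determinism come from:

```python
# Hypothetical sketch of an LLM-directed memory loop (NOT Letta's real API).
# The model itself emits memory-management actions, so every page-in or
# page-out adds an extra LLM round-trip.
def agent_step(llm, core_memory, archival, recall, user_msg):
    while True:
        prompt = f"{core_memory}\n{recall.recent()}\nUser: {user_msg}"
        action = llm.complete(prompt)               # model picks the next action
        if action.tool == "archival_search":        # page info INTO context
            recall.append(f"[archival] {archival.search(action.query)}")
        elif action.tool == "core_memory_replace":  # rewrite the system prompt
            core_memory = core_memory.replace(action.old, action.new)
        else:
            return action.text                      # plain assistant reply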

Deployment Model

| Aspect | Dakera | Letta |
| --- | --- | --- |
| Setup | Docker pull + run (single binary) | pip install + LLM API key |
| Runtime Dependencies | None (self-contained ONNX) | LLM API (OpenAI, Anthropic, etc.) |
| Latency | ~5-50ms per query (local inference) | ~500-2000ms per memory op (LLM round-trip) |
| Cost Model | Fixed (your infra only) | Variable (LLM tokens per operation) |
| Determinism | Deterministic (same query = same results) | Non-deterministic (LLM output may vary) |
| Scale | Handles millions of memories per namespace | Limited by LLM context and API throughput |

Pricing Comparison

| Aspect | Dakera | Letta |
| --- | --- | --- |
| Software | Free (self-hosted) | Free (Apache 2.0) |
| Per Memory Operation | ~$0 (local ONNX inference) | ~$0.001-0.01 (LLM API call per operation) |
| 1M Memory Ops/month | ~$10-30 (VPS cost only) | ~$1,000-10,000 (LLM API costs) |
| Cloud/Enterprise | Coming soon | Letta Cloud (managed platform) |

The cost difference is significant at scale. Every memory operation in Letta requires an LLM inference call, while Dakera's operations use only local ONNX models (embedding + reranking). For high-volume agent memory workloads, the gap spans orders of magnitude.
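
Using the table's own per-operation figures, the gap is straightforward to check (rough arithmetic only; real costs depend on model pricing and operation mix):

```python
# Back-of-the-envelope check using the pricing table's own numbers.
ops_per_month = 1_000_000
letta_low, letta_high = ops_per_month * 0.001, ops_per_month * 0.01
dakera_low, dakera_high = 10, 30  # fixed VPS cost, independent of op count

print(f"Letta:  ${letta_low:,.0f}-${letta_high:,.0f}/month")  # $1,000-$10,000
print(f"Dakera: ${dakera_low}-${dakera_high}/month")          # $10-$30
# Roughly 30x at the cheap end to 1,000x at the expensive end.
```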

When to Choose

Choose Letta if:

- You want a full agent framework (tools, personas, memory) rather than memory infrastructure alone
- You want the LLM to actively reason about what to store and forget
- You can accommodate an LLM API call's cost and latency on every memory operation

Choose Dakera if:

- You need deterministic, low-latency retrieval (~5-50ms) where the same query returns the same results
- You want zero LLM API costs for memory operations (local ONNX inference only)
- You need to scale to millions of memories per namespace
- You want memory infrastructure your existing agent stack calls into, with SDKs in Python, TypeScript, Go, or Rust

Verdict

Dakera provides deterministic memory infrastructure: hybrid BM25 + HNSW vector search with cross-encoder reranking at 5-50ms latency, 6 memory decay strategies, knowledge graphs, and 83 MCP tools, all in a self-hosted 44 MB Rust binary that scores 87.6% on LoCoMo with zero LLM API costs for memory operations. Letta takes an innovative approach in which the LLM itself manages memory, enabling creative, context-aware memory decisions that adapt to conversation flow; this is genuinely powerful for use cases that benefit from reasoning about what to remember. Choose Dakera when you need fast, deterministic, cost-effective memory retrieval as infrastructure for your agent stack. Choose Letta when you want the LLM to actively reason about memory management and can accommodate the additional API costs and latency.

Try Dakera Free

Deterministic memory retrieval at 5-50ms latency. No LLM API costs for memory operations.

Get Started