Agent Memory Is Not Just Vectors
Vector databases give you similarity search. Dakera gives you hybrid retrieval, knowledge graphs, memory decay, sessions, and temporal reasoning — everything an AI agent actually needs to remember.
What Vector Databases Miss
Vector databases were built for similarity search over static document embeddings. Agent memory has fundamentally different requirements:
| Requirement | Vector DB | Dakera |
|---|---|---|
| Semantic search | Yes | Yes (HNSW) |
| Keyword/exact match | No | Yes (BM25) |
| Result reranking | No | Yes (cross-encoder) |
| Temporal reasoning | No | Yes (time-aware recall) |
| Knowledge graphs | No | Yes (GLiNER + BFS) |
| Memory decay | No | Yes (6 strategies) |
| Session management | No | Yes (start/end/handoff) |
| Deduplication | No | Yes (semantic dedup) |
| Multi-agent isolation | Manual | Yes (namespaces + scoped keys) |
| On-device embeddings | Rarely | Yes (ONNX: MiniLM, BGE, E5) |
Hybrid Retrieval: Why It Matters
Pure vector search fails on exact names, dates, and technical terms. Pure keyword search fails on paraphrased concepts. Dakera combines both:
- HNSW vector search — captures semantic meaning ("the user's preferred IDE" matches "they like VS Code")
- BM25 full-text search — catches exact terms ("meeting on 2026-03-15" matches "March 15th meeting")
- Reciprocal Rank Fusion — merges results from both retrieval paths
- Cross-encoder reranking — final precision pass using a trained reranker model
This pipeline achieves 87.6% on the LoCoMo benchmark (50 sessions, 1540 questions) — measuring real conversational memory recall including temporal, multi-hop, and preference questions.
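The fusion step in the middle of this pipeline can be sketched in a few lines. This is an illustrative Reciprocal Rank Fusion implementation, not Dakera's internals; the memory IDs and the `k=60` constant are assumptions (60 is the value commonly used in the RRF literature):

```python
# Minimal Reciprocal Rank Fusion (RRF) sketch -- illustrative only,
# not Dakera's actual implementation. Each retriever returns a
# ranked list of memory IDs; RRF scores each ID by 1 / (k + rank)
# and sums across retrievers, so items found by BOTH paths rise.

def rrf_fuse(ranked_lists, k=60):
    """Merge ranked result lists with Reciprocal Rank Fusion."""
    scores = {}
    for ranked in ranked_lists:
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["m3", "m1", "m7"]   # from HNSW semantic search
keyword_hits = ["m1", "m9", "m3"]  # from BM25 full-text search
fused = rrf_fuse([vector_hits, keyword_hits])
print(fused)  # ['m1', 'm3', 'm9', 'm7'] -- m1 and m3 found by both
```

In a production pipeline the top fused candidates would then go to the cross-encoder for the final reranking pass.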
Code Example: Hybrid Search
```python
from dakera import Dakera

client = Dakera(base_url="http://localhost:3300", api_key="dk-...")

# Store memories over time
client.memory.store(content="User switched from PyCharm to Cursor in January 2026")
client.memory.store(content="User prefers dark mode in all editors")
client.memory.store(content="User's project uses Python 3.12 with FastAPI")

# Hybrid recall combines semantic + keyword matching
results = client.memory.recall(
    query="What IDE does the user currently use?",
    top_k=5,
)

# Returns: "User switched from PyCharm to Cursor in January 2026"
# Vector search alone might return all editor-related memories
# without temporal ranking
```
Beyond Retrieval: Agent Memory Features
Knowledge Graphs
Dakera automatically extracts entities using GLiNER and links them with typed edges. Traverse relationships with BFS to answer multi-hop questions like "Who works at the company that sponsors the user's open source project?"
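A multi-hop traversal over typed edges can be sketched as a plain BFS. The entities, relations, and project name below are invented for illustration, and this is a generic graph walk rather than Dakera's actual schema or traversal code:

```python
from collections import deque

# Hypothetical extracted graph: (subject, relation, object) triples.
edges = [
    ("Alice", "works_at", "Acme Corp"),
    ("Acme Corp", "sponsors", "fastkv"),
    ("User", "maintains", "fastkv"),
]

def neighbors(node):
    """Adjacent entities, ignoring edge direction."""
    for subj, _rel, obj in edges:
        if subj == node:
            yield obj
        elif obj == node:
            yield subj

def bfs_path(start, goal):
    """Shortest entity path from start to goal via BFS."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# "Who works at the company that sponsors the user's project?"
print(bfs_path("User", "Alice"))
# ['User', 'fastkv', 'Acme Corp', 'Alice']
```

The answer falls out of the path: the user maintains fastkv, Acme Corp sponsors fastkv, and Alice works at Acme Corp.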
Memory Decay
Not all memories should live forever. Configure 6 decay strategies (exponential, linear, logarithmic, step, periodic, custom) to let old, unreinforced memories fade — mimicking how human memory works.
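Taking the exponential strategy as one example, decay can be modeled as `score = importance * e^(-λ * age)`, where reinforcement resets the age. This is a sketch of the general technique; the 30-day half-life is an invented parameter, not a Dakera default:

```python
import math

def decayed_score(importance, age_days, half_life_days=30.0):
    """Exponential decay: the score halves every half_life_days.
    Reinforcing a memory would reset age_days to 0."""
    decay_rate = math.log(2) / half_life_days  # lambda
    return importance * math.exp(-decay_rate * age_days)

print(decayed_score(1.0, age_days=0))   # 1.0   -- fresh memory
print(decayed_score(1.0, age_days=30))  # 0.5   -- one half-life old
print(decayed_score(1.0, age_days=90))  # 0.125 -- three half-lives old
```

The other strategies (linear, logarithmic, step, periodic, custom) swap in a different function of `age_days` but follow the same shape: importance scaled down by elapsed time since last reinforcement.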
Sessions and Namespaces
Isolate memories by conversation session, user namespace, or agent identity. Transfer context between sessions with session handoff.
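The isolation model amounts to scoping every memory under a composite key. The class below is a generic sketch of that idea, not Dakera's storage layout or API; all names are illustrative:

```python
# Generic sketch of namespace + session scoping -- not Dakera's
# actual storage layout. Keys are (namespace, session_id) tuples,
# so one agent's memories never leak into another agent's recall.

class ScopedMemory:
    def __init__(self):
        self._store = {}

    def store(self, namespace, session_id, content):
        self._store.setdefault((namespace, session_id), []).append(content)

    def recall(self, namespace, session_id):
        return list(self._store.get((namespace, session_id), []))

    def handoff(self, namespace, old_session, new_session):
        """Carry context from one session into the next."""
        carried = self.recall(namespace, old_session)
        for content in carried:
            self.store(namespace, new_session, content)
        return carried

mem = ScopedMemory()
mem.store("agent-a", "s1", "User prefers dark mode")
mem.store("agent-b", "s1", "Unrelated agent context")
mem.handoff("agent-a", "s1", "s2")
print(mem.recall("agent-a", "s2"))  # ['User prefers dark mode']
print(mem.recall("agent-b", "s2"))  # [] -- isolation holds
```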
Frequently Asked Questions
Why isn't a vector database enough for agent memory?
Vector databases only do similarity search. Agent memory requires temporal reasoning, relationship tracking, importance scoring, memory decay, session management, and deduplication. Dakera provides all of these on top of hybrid retrieval (vectors + full-text + reranking).
How does Dakera's hybrid retrieval work?
Dakera combines HNSW vector search with BM25 full-text search using Reciprocal Rank Fusion (RRF), then applies cross-encoder reranking for precision. This catches both semantic matches and exact keyword matches that pure vector search misses.
Can I migrate from an existing vector database?
Yes. Dakera's memory import API accepts bulk data. You can export your vectors from any database and import them into Dakera, gaining knowledge graphs, decay, sessions, and hybrid retrieval on top of your existing data.
How does Dakera perform on memory benchmarks?
Dakera scores 87.6% on the LoCoMo benchmark (50 sessions, 1540 questions), which tests long-context conversational memory recall including temporal, multi-hop, and preference questions.
Replace Your Vector Database with Real Agent Memory
Hybrid retrieval, knowledge graphs, decay, sessions — everything agents need.
Get Started