Agent Memory Is Not Just Vectors

Vector databases give you similarity search. Dakera gives you hybrid retrieval, knowledge graphs, memory decay, sessions, and temporal reasoning — everything an AI agent actually needs to remember.

What Vector Databases Miss

Vector databases were built for similarity search over static document embeddings. Agent memory has fundamentally different requirements:

Requirement             | Vector DB | Dakera
------------------------|-----------|------------------------------
Semantic search         | Yes       | Yes (HNSW)
Keyword/exact match     | No        | Yes (BM25)
Result reranking        | No        | Yes (cross-encoder)
Temporal reasoning      | No        | Yes (time-aware recall)
Knowledge graphs        | No        | Yes (GLiNER + BFS)
Memory decay            | No        | Yes (6 strategies)
Session management      | No        | Yes (start/end/handoff)
Deduplication           | No        | Yes (semantic dedup)
Multi-agent isolation   | Manual    | Yes (namespaces + scoped keys)
On-device embeddings    | Rarely    | Yes (ONNX: MiniLM, BGE, E5)

Hybrid Retrieval: Why It Matters

Pure vector search fails on exact names, dates, and technical terms. Pure keyword search fails on paraphrased concepts. Dakera combines both: HNSW vector search and BM25 full-text search are fused with Reciprocal Rank Fusion (RRF), then the merged results are reranked with a cross-encoder.

This pipeline achieves 87.6% on the LoCoMo benchmark (50 sessions, 1540 questions) — measuring real conversational memory recall including temporal, multi-hop, and preference questions.

Code Example: Hybrid Search

from dakera import Dakera

client = Dakera(base_url="http://localhost:3300", api_key="dk-...")

# Store memories over time
client.memory.store(content="User switched from PyCharm to Cursor in January 2026")
client.memory.store(content="User prefers dark mode in all editors")
client.memory.store(content="User's project uses Python 3.12 with FastAPI")

# Hybrid recall combines semantic + keyword matching
results = client.memory.recall(
    query="What IDE does the user currently use?",
    top_k=5
)
# Returns: "User switched from PyCharm to Cursor in January 2026"
# Vector search alone might return all editor-related memories without temporal ranking

Beyond Retrieval: Agent Memory Features

Knowledge Graphs

Dakera automatically extracts entities using GLiNER and links them with typed edges. Traverse relationships with BFS to answer multi-hop questions like "Who works at the company that sponsors the user's open source project?"
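The traversal side of this can be illustrated with a minimal sketch (not Dakera's implementation — the toy graph, entity names, and relation labels below are invented for illustration; in Dakera the entities and typed edges come from GLiNER extraction):

```python
from collections import deque

# Toy typed-edge graph: entity -> list of (relation, entity) pairs.
graph = {
    "user":        [("maintains", "oss_project")],
    "oss_project": [("sponsored_by", "AcmeCorp")],
    "AcmeCorp":    [("employs", "Dana")],
}

def bfs_path(graph, start, goal):
    """Breadth-first search returning the chain of typed edges from start to goal."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for relation, neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, path + [(node, relation, neighbor)]))
    return None  # no connecting path

# Multi-hop question: who works at the company sponsoring the user's project?
print(bfs_path(graph, "user", "Dana"))
```

BFS finds the shortest relationship chain, so the answer comes back with its supporting hops (user maintains oss_project, which is sponsored_by AcmeCorp, which employs Dana) rather than as an unexplained match.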

Memory Decay

Not all memories should live forever. Choose among six decay strategies (exponential, linear, logarithmic, step, periodic, custom) to let old, unreinforced memories fade — mimicking how human memory works.

Sessions and Namespaces

Isolate memories by conversation session, user namespace, or agent identity. Transfer context between sessions with session handoff.
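The isolation model can be sketched with scoped keys (a hypothetical illustration, not Dakera's API or storage layout — the class and method names below are invented):

```python
# Each memory is keyed by (namespace, session, key), so an agent's
# recall only ever sees its own namespace/session slice.
class ScopedStore:
    def __init__(self):
        self._data = {}

    def store(self, namespace, session, key, value):
        self._data[(namespace, session, key)] = value

    def recall(self, namespace, session):
        """Return only the memories scoped to this namespace and session."""
        return {k[2]: v for k, v in self._data.items()
                if k[0] == namespace and k[1] == session}

    def handoff(self, namespace, old_session, new_session):
        """Session handoff: copy one session's memories into a new session."""
        for (ns, sess, key), value in list(self._data.items()):
            if ns == namespace and sess == old_session:
                self._data[(ns, new_session, key)] = value

store = ScopedStore()
store.store("alice", "s1", "ide", "Cursor")
store.handoff("alice", "s1", "s2")   # new session s2 inherits s1's context
```

Because scoping lives in the key, one agent's memories never leak into another agent's recall by construction, and handoff is an explicit copy rather than shared mutable state.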

Frequently Asked Questions

Why is a vector database not enough for agent memory?

Vector databases only do similarity search. Agent memory requires temporal reasoning, relationship tracking, importance scoring, memory decay, session management, and deduplication. Dakera provides all of these on top of hybrid retrieval (vectors + full-text + reranking).

How does Dakera's hybrid retrieval work?

Dakera combines HNSW vector search with BM25 full-text search using Reciprocal Rank Fusion (RRF), then applies cross-encoder reranking for precision. This catches both semantic matches and exact keyword matches that pure vector search misses.
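The RRF step is simple enough to sketch in a few lines (a generic illustration of the standard algorithm, not Dakera's code; the memory IDs and the k=60 constant are illustrative):

```python
from collections import defaultdict

def rrf_fuse(rankings, k=60):
    """Fuse ranked result lists with Reciprocal Rank Fusion.

    Each ranking is an ordered list of document IDs (best first);
    score(d) = sum over lists of 1 / (k + rank(d)).
    """
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# The two retrievers disagree on order; RRF rewards memories
# that both retrievers rank highly.
vector_hits = ["m2", "m1", "m3"]   # HNSW semantic ranking
bm25_hits   = ["m1", "m4", "m2"]   # BM25 keyword ranking
print(rrf_fuse([vector_hits, bm25_hits]))  # ['m1', 'm2', 'm4', 'm3']
```

Note how m1, ranked near the top by both lists, beats m2, which only one retriever put first; the fused list is then what the cross-encoder reranks.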

Can I migrate from Pinecone, Weaviate, or ChromaDB?

Yes. Dakera's memory import API accepts bulk data. You can export your vectors from any database and import them into Dakera, gaining knowledge graphs, decay, sessions, and hybrid retrieval on top of your existing data.

What benchmark score does Dakera achieve?

Dakera scores 87.6% on the LoCoMo benchmark (50 sessions, 1540 questions), which tests long-context conversational memory recall including temporal, multi-hop, and preference questions.

Replace Your Vector Database with Real Agent Memory

Hybrid retrieval, knowledge graphs, decay, sessions — everything agents need.

Get Started