LangChain Persistent Memory with Dakera

Replace ephemeral ConversationBufferMemory with production-grade persistent memory. Hybrid retrieval, knowledge graphs, and cross-session recall for your LangChain agents.

The Problem with LangChain's Built-in Memory

LangChain's memory classes (ConversationBufferMemory, ConversationSummaryMemory, VectorStoreRetrieverMemory) have fundamental limitations: they live in process memory and are lost on restart, they don't carry context across sessions, and they offer at best basic vector retrieval with no temporal reasoning, knowledge graphs, or multi-user isolation.

How It Works

Install the SDK

pip install dakera langchain

Store Memories After Each Interaction

After your LangChain chain produces a response, persist the exchange to Dakera.

Recall Context Before Generating

Before invoking the LLM, query Dakera for relevant memories and inject them as context.

Code Example: LangChain + Dakera

from dakera import Dakera
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Initialize the Dakera client
client = Dakera(base_url="http://localhost:3300", api_key="dk-...")
llm = ChatOpenAI(model="gpt-4o")

def chat_with_memory(user_input: str, user_id: str) -> str:
    # 1. Recall relevant memories
    memories = client.memory.recall(
        query=user_input,
        namespace=user_id,
        top_k=5
    )

    context = "\n".join(m["content"] for m in memories["results"])

    # 2. Build prompt with memory context
    prompt = ChatPromptTemplate.from_messages([
        ("system", "You are a helpful assistant. Use this memory context:\n{context}"),
        ("human", "{input}")
    ])

    chain = prompt | llm | StrOutputParser()
    response = chain.invoke({"context": context, "input": user_input})

    # 3. Store the interaction as a new memory
    client.memory.store(
        content=f"User said: {user_input}\nAssistant responded: {response}",
        namespace=user_id,
        metadata={"source": "langchain", "type": "conversation"}
    )

    return response
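To see the recall → generate → store loop end to end without a running Dakera server or LLM, here is a minimal in-memory stand-in. The DakeraStub class and its keyword-overlap ranking are illustrative assumptions for demonstration only — the real service uses hybrid retrieval — but the store/recall parameters mirror the calls shown above.

```python
class DakeraStub:
    """Hypothetical in-memory stand-in for the Dakera SDK, for demo only."""

    def __init__(self):
        self._store = {}  # namespace -> list of memory dicts

    def store(self, content, namespace, metadata=None):
        # Append a memory record under the given namespace.
        self._store.setdefault(namespace, []).append(
            {"content": content, "metadata": metadata or {}}
        )

    def recall(self, query, namespace, top_k=5):
        # Naive keyword-overlap ranking; stands in for hybrid retrieval.
        words = set(query.lower().split())
        ranked = sorted(
            self._store.get(namespace, []),
            key=lambda m: len(words & set(m["content"].lower().split())),
            reverse=True,
        )
        return {"results": ranked[:top_k]}

stub = DakeraStub()
stub.store("User said: I prefer dark mode", namespace="alice",
           metadata={"source": "langchain", "type": "conversation"})
stub.store("User said: my favorite language is Python", namespace="alice",
           metadata={"source": "langchain", "type": "conversation"})

memories = stub.recall("which mode does the user prefer", namespace="alice")
context = "\n".join(m["content"] for m in memories["results"])
print(context.splitlines()[0])  # → User said: I prefer dark mode
```

The `context` string built here is exactly what `chat_with_memory` injects into the system prompt, so you can prototype the memory flow before wiring in the real backend.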

What You Gain Over Built-in Memory

| Feature | LangChain Built-in | LangChain + Dakera |
|---|---|---|
| Persistence | In-process only | Disk-backed, survives restarts |
| Cross-session | No | Yes, across all sessions |
| Retrieval | Buffer or basic vector | Hybrid (HNSW + BM25 + reranking) |
| Knowledge graphs | No | GLiNER entity extraction + BFS |
| Memory decay | No | 6 configurable strategies |
| Multi-user | Manual | Namespaces with scoped API keys |
| Encryption | No | AES-256-GCM at rest |
| Benchmark | N/A | 87.6% LoCoMo |

Works with LangGraph Too

from langgraph.graph import StateGraph
from dakera import Dakera

client = Dakera(base_url="http://localhost:3300", api_key="dk-...")

def recall_node(state):
    """Inject relevant memories into agent state."""
    memories = client.memory.recall(
        query=state["input"],
        namespace=state["user_id"],
        top_k=5
    )
    state["context"] = memories["results"]
    return state

def store_node(state):
    """Persist the agent's output as a memory."""
    client.memory.store(
        content=state["output"],
        namespace=state["user_id"],
        metadata={"agent": state["agent_name"]}
    )
    return state

# Add recall_node before your LLM node, store_node after
graph = StateGraph(...)
graph.add_node("recall", recall_node)
graph.add_node("store", store_node)
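The two nodes above read and write specific state keys. A minimal sketch of that state shape — the AgentState name and the TypedDict approach are assumptions, chosen to match the keys the node functions use:

```python
from typing import List, TypedDict

class AgentState(TypedDict, total=False):
    input: str          # user message; read by recall_node
    user_id: str        # Dakera namespace for store/recall
    context: List[dict] # memories injected by recall_node
    output: str         # LLM answer; persisted by store_node
    agent_name: str     # recorded as metadata by store_node

# Initial state before the graph runs; recall_node fills in "context",
# and the LLM node is expected to fill in "output".
state: AgentState = {
    "input": "What did I say about dark mode?",
    "user_id": "alice",
    "agent_name": "support-bot",
}
```

Using `total=False` lets nodes add keys like `context` and `output` as the graph progresses, which matches how the recall and store nodes mutate state in place.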

Frequently Asked Questions

How do I add persistent memory to LangChain?

Install the Dakera Python SDK (pip install dakera), then use it as a memory backend in your LangChain chain. Store memories after each interaction and recall relevant context before generating responses. Dakera persists memories across sessions with hybrid retrieval.

What's wrong with LangChain's built-in memory?

LangChain's built-in memory (ConversationBufferMemory, ConversationSummaryMemory) is ephemeral — it lives in process memory and is lost when the process restarts. It also lacks semantic search, temporal reasoning, and multi-session support. Dakera provides all of these with persistence.

Does Dakera work with LangChain Expression Language (LCEL)?

Yes. Dakera's Python SDK works alongside any LangChain chain structure. You call client.memory.recall() to inject context and client.memory.store() to persist new memories — it integrates into LCEL chains as a standard function call.

Can I use Dakera with LangGraph?

Yes. Dakera works as a persistent memory backend for LangGraph agents. Each node in your graph can store and recall memories via the Dakera SDK, providing cross-session continuity for multi-step agent workflows.

Give Your LangChain Agents Persistent Memory

pip install dakera — cross-session recall in under 5 minutes.

Get Started