# LangChain.js Integration

Persistent semantic memory and server-side vector search for LangChain.js. Full TypeScript types; no local embedding model required.
Package: `@dakera-ai/langchain` · GitHub →

| Class | Description |
|---|---|
| `DakeraMemory` | Drop-in `BaseMemory` for LangChain.js conversation chains |
| `DakeraVectorStore` | `VectorStore` backed by Dakera's server-side embedding engine |
## Quick Start

### 1. Run Dakera

```bash
docker run -d \
  --name dakera \
  -p 3300:3300 \
  -e DAKERA_ROOT_API_KEY=dk-mykey \
  ghcr.io/dakera-ai/dakera:latest

curl http://localhost:3300/health   # → {"status":"ok"}
```
### 2. Install

```bash
npm install @dakera-ai/langchain @dakera-ai/dakera @langchain/core
```

Requirements: Node.js ≥ 20 and a running Dakera server.
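The `DakeraMemory` and `DakeraVectorStore` examples below read the API key from `process.env.DAKERA_API_KEY`; one way to provide it is `export DAKERA_API_KEY=dk-mykey` in your shell, matching the `DAKERA_ROOT_API_KEY` set in step 1.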
### 3. Use it

```ts
import { DakeraMemory } from "@dakera-ai/langchain";
import { ConversationChain } from "langchain/chains";
import { ChatOpenAI } from "@langchain/openai";

const memory = new DakeraMemory({
  apiUrl: "http://localhost:3300",
  apiKey: "dk-mykey",
  agentId: "my-agent",
});

const chain = new ConversationChain({
  llm: new ChatOpenAI({ model: "gpt-4o" }),
  memory,
});

// Memory persists across sessions and restarts
const response = await chain.call({ input: "My project is called NeuralBridge." });
console.log(response.response);
```
## DakeraMemory

Persistent conversation memory for LangChain.js chains. Stores and recalls conversation history using Dakera's hybrid search (BM25 + vector).
```ts
import { DakeraMemory } from "@dakera-ai/langchain";
import { ConversationChain } from "langchain/chains";
import { ChatOpenAI } from "@langchain/openai";

const memory = new DakeraMemory({
  apiUrl: "http://localhost:3300",
  apiKey: process.env.DAKERA_API_KEY!,
  agentId: "my-agent",
  recallK: 5,       // how many past memories to surface per turn
  importance: 0.7,  // importance score for stored memories
});

const chain = new ConversationChain({
  llm: new ChatOpenAI({ model: "gpt-4o" }),
  memory,
});

// First session
await chain.call({ input: "My name is Alice and I'm building a chatbot." });

// Later session: memory persists across restarts
const { response } = await chain.call({ input: "What was I building?" });
console.log(response); // "You mentioned you were building a chatbot."
```
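Because `DakeraMemory` is a drop-in `BaseMemory`, it can also be driven directly through the standard `saveContext`/`loadMemoryVariables` contract, outside any chain. A minimal sketch reusing the `memory` instance above; the `history` key shown in the output is an assumption, not a documented name:

```ts
// Store one conversation turn manually (standard BaseMemory contract).
await memory.saveContext(
  { input: "My name is Alice." },
  { output: "Nice to meet you, Alice!" },
);

// Recall memories relevant to the next input.
const vars = await memory.loadMemoryVariables({ input: "What is my name?" });
console.log(vars); // e.g. { history: "...recalled memories..." } — key name assumed
```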
### DakeraMemory options

| Option | Type | Default | Description |
|---|---|---|---|
| `apiUrl` | `string` | — | Dakera server URL (e.g. `http://localhost:3300`) |
| `apiKey` | `string` | `""` | Dakera API key |
| `agentId` | `string` | — | Agent identifier for memory namespacing |
| `recallK` | `number` | `5` | How many past memories to surface per turn |
| `importance` | `number` | `0.7` | Importance score for stored memories |
| `minImportance` | `number` | `0.0` | Minimum importance threshold for recall |
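Because `agentId` namespaces memory, a common pattern is one memory instance per user so conversations never bleed into each other. A minimal sketch; the `userId` value is illustrative and would come from your own session handling:

```ts
import { DakeraMemory } from "@dakera-ai/langchain";

// One DakeraMemory per user: the agentId keeps each user's memories isolated.
function memoryForUser(userId: string): DakeraMemory {
  return new DakeraMemory({
    apiUrl: "http://localhost:3300",
    apiKey: process.env.DAKERA_API_KEY!,
    agentId: `assistant-${userId}`,
    minImportance: 0.3, // skip low-importance memories at recall time
  });
}
```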
## DakeraVectorStore

A vector store for RAG with server-side embeddings. Compatible with `VectorStore` from `@langchain/core`; Dakera handles all embeddings, so no OpenAI embeddings API is needed.
```ts
import { DakeraVectorStore } from "@dakera-ai/langchain";

const vectorStore = new DakeraVectorStore({
  apiUrl: "http://localhost:3300",
  apiKey: process.env.DAKERA_API_KEY!,
  namespace: "my-docs",
});

// Index documents (server handles embedding)
await vectorStore.addDocuments([
  { pageContent: "Dakera is a self-hosted memory server.", metadata: {} },
  { pageContent: "It scores 87.6% on the LoCoMo benchmark.", metadata: {} },
]);

// Similarity search
const results = await vectorStore.similaritySearch("benchmark score", 3);
console.log(results);
```
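If `DakeraVectorStore` inherits the `@langchain/core` base-class helpers (an assumption; the package may override them), `similaritySearchWithScore` returns each document paired with its relevance score. A sketch continuing from the example above:

```ts
// Assumes the VectorStore base-class helper is available on DakeraVectorStore.
const scored = await vectorStore.similaritySearchWithScore("benchmark score", 3);
for (const [doc, score] of scored) {
  console.log(score.toFixed(3), doc.pageContent);
}
```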
### RAG chain with retrieval

```ts
import { DakeraVectorStore } from "@dakera-ai/langchain";
import { RetrievalQAChain } from "langchain/chains";
import { ChatOpenAI } from "@langchain/openai";

const vectorStore = new DakeraVectorStore({
  apiUrl: "http://localhost:3300",
  apiKey: process.env.DAKERA_API_KEY!,
  namespace: "product-docs",
});

const chain = RetrievalQAChain.fromLLM(
  new ChatOpenAI({ model: "gpt-4o" }),
  vectorStore.asRetriever({ k: 4 }),
);

const { text } = await chain.call({ query: "How does memory decay work?" });
console.log(text);
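```

`RetrievalQAChain` is LangChain's legacy interface; newer LangChain.js code composes the same pipeline with the LCEL retrieval helpers. A sketch of the equivalent chain, reusing the `vectorStore` above (the prompt wording is illustrative):

```ts
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
import { createRetrievalChain } from "langchain/chains/retrieval";

// The prompt must expose {context} (retrieved docs) and {input} (the question).
const prompt = ChatPromptTemplate.fromTemplate(
  "Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {input}",
);

const combineDocsChain = await createStuffDocumentsChain({
  llm: new ChatOpenAI({ model: "gpt-4o" }),
  prompt,
});

const ragChain = await createRetrievalChain({
  retriever: vectorStore.asRetriever({ k: 4 }),
  combineDocsChain,
});

const { answer } = await ragChain.invoke({ input: "How does memory decay work?" });
console.log(answer);
```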
### DakeraVectorStore options

| Option | Type | Default | Description |
|---|---|---|---|
| `apiUrl` | `string` | — | Dakera server URL |
| `apiKey` | `string` | `""` | Dakera API key |
| `namespace` | `string` | — | Vector namespace to read/write |
| `embeddingModel` | `string` | namespace default | Server-side embedding model override |
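If your server exposes more than one embedding model, `embeddingModel` overrides the namespace default per store instance. A minimal sketch; the model name is a hypothetical placeholder, not a documented value:

```ts
import { DakeraVectorStore } from "@dakera-ai/langchain";

// "bge-small-en" is a hypothetical model name used for illustration only.
const legalDocs = new DakeraVectorStore({
  apiUrl: "http://localhost:3300",
  apiKey: process.env.DAKERA_API_KEY!,
  namespace: "legal-docs",
  embeddingModel: "bge-small-en",
});
```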