The problem
AI coding assistants start fresh every session. They don't remember your project conventions, past debugging sessions, or architecture decisions. You end up re-explaining the same context every time you open a new chat.
"We use PostgreSQL, not MongoDB." "The auth service lives in /services/auth." "That SSL cert issue was caused by an expired CA bundle — we fixed it by updating certifi." Every developer has repeated these kinds of statements to their AI assistant dozens of times. The assistant doesn't retain any of it.
Some tools offer markdown-based memory files (CLAUDE.md, .cursorrules), but these are manually maintained, don't scale, and can't capture the nuanced context that emerges from real development sessions — debugging findings, architecture trade-off discussions, or learned patterns about your specific codebase.
How Dakera solves it
Dakera runs as an MCP (Model Context Protocol) server that plugs directly into Claude Code, Cursor, and Windsurf. Once connected, your AI coding tool gains 83 memory tools — store, recall, search, session management, knowledge graph traversal, and more.
- MCP server via stdio transport — the same container works across Claude Code, Cursor, and Windsurf, with no custom integration code needed.
- Stores conversations, decisions, and learned patterns as persistent memories. Architecture decisions, debugging resolutions, code review feedback, project conventions — all stored with semantic embeddings and importance scores.
- Semantic recall surfaces relevant context without explicit re-prompting. When you ask about the auth service, Dakera automatically recalls past decisions, debugging sessions, and conventions related to auth — without you having to reference them.
- Session management groups related coding tasks. A multi-hour debugging effort becomes a single session, and its findings stay available in future sessions, scoped and organized.
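The recall step can be pictured as a nearest-neighbour search over embedded memories. The sketch below is illustrative only — the function and field names are assumptions, not Dakera's API — but it shows the core idea: embed the query, rank stored memories by semantic similarity, and weight the result by each memory's importance score.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def recall(query_vec, memories, top_k=3):
    """Rank memories by similarity * importance (illustrative scoring only)."""
    scored = [
        (cosine(query_vec, m["embedding"]) * m["importance"], m["text"])
        for m in memories
    ]
    scored.sort(reverse=True)
    return [text for _, text in scored[:top_k]]

# Toy 3-dim embeddings for illustration (Dakera's are 1024-dim bge-large).
memories = [
    {"text": "auth service lives in /services/auth",
     "embedding": [0.9, 0.1, 0.0], "importance": 0.8},
    {"text": "we use PostgreSQL, not MongoDB",
     "embedding": [0.1, 0.9, 0.0], "importance": 0.9},
]
print(recall([0.95, 0.05, 0.0], memories, top_k=1))
# → ['auth service lives in /services/auth']
```

The point of the weighting is that a marginally relevant but high-importance memory (an architecture decision) can outrank a closely matching but trivial one.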
Setup
Add Dakera to your MCP configuration. For Claude Code, add this to your .mcp.json:
{
  "mcpServers": {
    "dakera": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "-e", "DAKERA_API_URL=http://localhost:3300",
        "-e", "DAKERA_API_KEY=dk-...",
        "-e", "DAKERA_AGENT_ID=claude-code",
        "ghcr.io/dakera-ai/dakera-mcp:latest"
      ]
    }
  }
}
That's it. Once the MCP server is running, your AI coding tool can store and recall memories automatically. Note that the Dakera server itself must be reachable at the URL in DAKERA_API_URL (port 3300 by default), and authentication uses the DAKERA_API_KEY environment variable.
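If you want to sanity-check the container before wiring it into your editor, you can speak the MCP handshake to it by hand over stdio. The snippet below builds a standard MCP `initialize` request — the JSON-RPC shape comes from the MCP specification, not from Dakera's own docs — which you can pipe into the docker command from the config above. Treat it as a diagnostic sketch, not documented tooling.

```python
import json

def mcp_initialize(client_name="mcp-probe", version="0.1"):
    """Build a standard MCP initialize request (JSON-RPC 2.0 over stdio)."""
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            "protocolVersion": "2024-11-05",
            "capabilities": {},
            "clientInfo": {"name": client_name, "version": version},
        },
    }

# One request per line on stdin, e.g.:
#   echo "$REQUEST" | docker run -i --rm ... ghcr.io/dakera-ai/dakera-mcp:latest
print(json.dumps(mcp_initialize()))
```

A well-behaved MCP server answers with its own capabilities and the list of tools it exposes, which is a quick way to confirm the container and API key are set up correctly.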
What your AI assistant remembers
Once configured, Dakera gives your coding assistant persistent memory across three categories:
Architecture decisions
Your assistant remembers why you chose specific technologies, patterns, and approaches. "We chose PostgreSQL over MongoDB for this project because of ACID compliance needs." "The API uses JWT tokens with 15-minute expiry and refresh tokens stored in HttpOnly cookies." These decisions are stored with high importance and persist across sessions.
Debugging history
Past debugging sessions become searchable knowledge. "The SSL cert issue was caused by expired CA bundle — fixed by updating certifi." "The intermittent 502 on the API gateway was caused by connection pool exhaustion — fixed by setting max_connections=100." When a similar issue comes up, the assistant recalls the resolution without you having to search through old chat logs.
Project conventions
Coding standards, naming conventions, file organization patterns, testing approaches — all stored and recalled automatically. "We use snake_case for Python, camelCase for TypeScript." "Integration tests go in /tests/integration/ and require a running database." "Always use structured logging with the structlog library."
83 MCP tools: Beyond basic store/recall, the MCP server exposes tools for knowledge graph traversal, session management, hybrid search, entity extraction, batch operations, namespace management, and more. Your AI assistant can use all of them through natural language.
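Under the hood, every one of those tools is invoked with the same MCP `tools/call` request shape. The example below shows that generic shape; the tool name `memory_store` and its arguments are hypothetical placeholders, since an MCP client discovers the actual tool names and schemas at runtime via `tools/list`.

```python
import json

def tools_call(tool_name, arguments, request_id=2):
    """Build a generic MCP tools/call request (JSON-RPC 2.0)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical store call -- real tool names come from a tools/list response.
req = tools_call("memory_store", {
    "content": "We use PostgreSQL, not MongoDB",
    "importance": 0.9,
})
print(json.dumps(req, indent=2))
```

Your assistant never constructs these requests itself in practice; the editor's MCP client translates natural-language intent into the appropriate tool call.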
Works with your existing workflow
Dakera's MCP server works alongside your existing tools, not instead of them. Your CLAUDE.md files, .cursorrules, and other configuration files continue to work. Dakera adds a dynamic memory layer on top — one that grows with your project rather than requiring manual maintenance.
The MCP server runs as a Docker container with zero dependencies beyond the Dakera server itself. Built-in embeddings (bge-large, 1024-dim) mean no external API calls — your code context never leaves your infrastructure.
Deploy persistent memory for your agents
Self-hosted, no external API dependencies, production-ready. Give your AI coding tools persistent memory in under 5 minutes.