
Dakera MCP Memory Server: Setup Guide for Claude, Cursor, and Windsurf

By default, every Claude session, every Cursor chat, and every Windsurf task starts fresh. The assistant has no memory of what you told it yesterday, which libraries you prefer, or the context of your ongoing project. This is the agent amnesia problem.

Dakera solves it with a self-hosted memory server that connects to your AI tools via the Model Context Protocol (MCP). Once configured, your assistant can store what it learns and recall it in future sessions — without any code changes to your workflow.

This guide walks you through the complete setup: server installation, MCP binary, and client configuration for Claude Desktop, Claude Code, Cursor, and Windsurf.

In this guide

  1. Prerequisites
  2. Install the Dakera server
  3. Install the MCP binary
  4. Configure your AI client
  5. Test the setup
  6. What to store in memory
  7. Production setup with Docker Compose

Prerequisites

You'll need Docker installed and running, curl, and at least one supported AI client (Claude Desktop, Claude Code, Cursor, or Windsurf).

What you'll build

A self-hosted memory server running on localhost:3300 that stores memories persistently to disk and exposes them to your AI clients via 83 MCP tools. All data stays on your machine.


Step 1: Install the Dakera Server

Start the server with Docker

The simplest install runs Dakera as a single container. For production, see the full Docker Compose setup below.

docker run -d \
  --name dakera \
  --restart unless-stopped \
  -p 3300:3300 \
  -v dakera-data:/data \
  -e DAKERA_PORT=3300 \
  -e DAKERA_ROOT_API_KEY=my-dev-key \
  ghcr.io/dakera-ai/dakera:latest

This starts Dakera on port 3300 with persistent storage in a named Docker volume.

Verify it's running

curl http://localhost:3300/health
# {"service":"dakera","status":"healthy","version":"0.11.54"}

You should see "status":"healthy". If you see a connection error, check that Docker is running and port 3300 isn't in use.
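If you want to script this check (for CI or a startup probe), you can parse the health response with jq. A minimal sketch, shown here against the sample response above; in a live check you would pipe `curl -s http://localhost:3300/health` into jq instead:

```shell
# Sample response from the health endpoint; extract the status field.
RESPONSE='{"service":"dakera","status":"healthy","version":"0.11.54"}'
STATUS=$(echo "$RESPONSE" | jq -r '.status')
echo "$STATUS"   # healthy
```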


Step 2: Install the MCP Binary

The dakera-mcp binary is a small process that bridges your AI client to the Dakera server. It speaks the Model Context Protocol and exposes all 83 Dakera tools to your assistant.

Download and install dakera-mcp

Choose your platform (writing to /usr/local/bin usually requires sudo):

Linux (x64):

curl -fsSL https://github.com/dakera-ai/dakera-mcp/releases/latest/download/dakera-mcp-linux-x64.tar.gz \
  | sudo tar -xz -C /usr/local/bin
sudo chmod +x /usr/local/bin/dakera-mcp

macOS (Apple Silicon):

curl -fsSL https://github.com/dakera-ai/dakera-mcp/releases/latest/download/dakera-mcp-darwin-arm64.tar.gz \
  | sudo tar -xz -C /usr/local/bin
sudo chmod +x /usr/local/bin/dakera-mcp

macOS (Intel):

curl -fsSL https://github.com/dakera-ai/dakera-mcp/releases/latest/download/dakera-mcp-darwin-x64.tar.gz \
  | sudo tar -xz -C /usr/local/bin
sudo chmod +x /usr/local/bin/dakera-mcp

Windows (PowerShell):

Invoke-WebRequest -Uri "https://github.com/dakera-ai/dakera-mcp/releases/latest/download/dakera-mcp-windows-x64.zip" -OutFile "dakera-mcp.zip"
Expand-Archive dakera-mcp.zip -DestinationPath "$env:LOCALAPPDATA\dakera-mcp"
# Add $env:LOCALAPPDATA\dakera-mcp to your PATH
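If you script installs across several machines, a small helper can map the machine to the artifact names used above. A sketch covering only the platforms listed in this guide:

```shell
# Map an OS-arch pair (as reported by `uname -s`-`uname -m`) to the
# release artifact name used in the download commands above.
artifact_for() {
  case "$1" in
    Linux-x86_64)     echo dakera-mcp-linux-x64.tar.gz ;;
    Darwin-arm64)     echo dakera-mcp-darwin-arm64.tar.gz ;;
    Darwin-x86_64)    echo dakera-mcp-darwin-x64.tar.gz ;;
    MINGW*|Windows-*) echo dakera-mcp-windows-x64.zip ;;
    *) echo "unsupported platform: $1" >&2; return 1 ;;
  esac
}

# Example: the Linux x64 artifact.
artifact_for Linux-x86_64   # dakera-mcp-linux-x64.tar.gz
```

On the machine itself you would call `artifact_for "$(uname -s)-$(uname -m)"`.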

Verify the binary

dakera-mcp --version
# dakera-mcp 0.9.8

Step 3: Configure Your AI Client

Choose your client below:

Claude Desktop

Open your Claude Desktop config file (macOS: ~/Library/Application Support/Claude/claude_desktop_config.json; Windows: %APPDATA%\Claude\claude_desktop_config.json):

Add the Dakera MCP server:

{
  "mcpServers": {
    "dakera": {
      "command": "dakera-mcp",
      "env": {
        "DAKERA_API_URL": "http://localhost:3300",
        "DAKERA_API_KEY": "my-dev-key"
      }
    }
  }
}

Restart Claude Desktop. You should see "dakera" appear in the MCP servers list.
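A malformed config file is the most common reason a server fails to appear, so it's worth validating the JSON before restarting. A sketch that writes the snippet above to a scratch file and checks it; substitute your real config path for the /tmp one:

```shell
# Write the config to a scratch file, then confirm it parses and the
# dakera entry has a command. Use your actual config file path here.
cat > /tmp/claude_desktop_config.json <<'EOF'
{
  "mcpServers": {
    "dakera": {
      "command": "dakera-mcp",
      "env": {
        "DAKERA_API_URL": "http://localhost:3300",
        "DAKERA_API_KEY": "my-dev-key"
      }
    }
  }
}
EOF
jq -r '.mcpServers.dakera.command' /tmp/claude_desktop_config.json   # dakera-mcp
```

If jq reports a parse error, fix the JSON before relaunching the client.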

Claude Code (CLI)

In your project directory, create or edit .mcp.json:

{
  "mcpServers": {
    "dakera": {
      "command": "dakera-mcp",
      "env": {
        "DAKERA_API_URL": "http://localhost:3300",
        "DAKERA_API_KEY": "my-dev-key"
      }
    }
  }
}

Or add globally in ~/.claude/claude.json to have memory across all projects.

Cursor

Open Cursor Settings → MCP Servers → Add new server:

{
  "dakera": {
    "command": "dakera-mcp",
    "env": {
      "DAKERA_API_URL": "http://localhost:3300",
      "DAKERA_API_KEY": "my-dev-key"
    }
  }
}

Windsurf

Create ~/.windsurf/mcp_config.json:

{
  "servers": {
    "dakera": {
      "command": "dakera-mcp",
      "env": {
        "DAKERA_API_URL": "http://localhost:3300",
        "DAKERA_API_KEY": "my-dev-key"
      }
    }
  }
}

Step 4: Test the Setup

Store a test memory

In your AI client, type:

Store a memory that I prefer TypeScript over JavaScript for new projects,
importance 0.8, for agent "me".

The assistant should call dakera_store and confirm the memory was saved.

Verify recall works

In a new session, ask:

What programming language do I prefer?

The assistant should call dakera_recall and return the memory you stored. This confirms the full round-trip works: store → persist → recall across sessions.

Check the API directly (optional)

curl -s -X POST http://localhost:3300/v1/recall \
  -H "Authorization: Bearer my-dev-key" \
  -H "Content-Type: application/json" \
  -d '{"agent_id":"me","query":"programming language preference","top_k":3}' \
  | jq '.memories[].content'
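You can exercise the store side from the shell as well. Building the request body with jq avoids quoting mistakes; note that the /v1/store path in the comment below is an assumption mirroring the /v1/recall endpoint above, so check your server's API reference before relying on it:

```shell
# Build the request body with jq so strings and numbers stay valid JSON.
PAYLOAD=$(jq -n \
  --arg agent "me" \
  --arg content "Prefers TypeScript over JavaScript for new projects" \
  --argjson importance 0.8 \
  '{agent_id: $agent, content: $content, importance: $importance}')
echo "$PAYLOAD"

# Then POST it (endpoint name assumed, mirroring /v1/recall):
# curl -s -X POST http://localhost:3300/v1/store \
#   -H "Authorization: Bearer my-dev-key" \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"
```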

What to Store in Memory

Once the plumbing is working, the question becomes: what should you actually tell your assistant to remember? Here are patterns that work well:

Developer preferences (importance: 0.7–0.8)

"Remember that I prefer functional React components over class components,
use Zod for validation, and always write JSDoc for public functions."

Project context (importance: 0.8–0.9)

"Remember that this project uses PostgreSQL 16 with Drizzle ORM,
the API is deployed on Railway, and the frontend is at vercel.com/my-project."

Decisions and their rationale (importance: 0.9)

"Remember that we chose Tailwind over CSS Modules because the team is
distributed and Tailwind's utility classes make async code reviews easier."

Instructions (importance: 0.8)

"Remember: never use console.log in this codebase — use the Logger utility
at src/lib/logger.ts instead."

Pro tip

Tell your assistant: "At the start of each session, recall my top 5 memories for this project." This creates a consistent warm-up ritual that surfaces relevant context before you've even asked a question.


Production Setup with Docker Compose

The single-container setup lacks persistent object storage for large memory volumes. For teams or production use, the recommended stack runs Dakera alongside MinIO:

git clone https://github.com/dakera-ai/dakera-deploy
cd dakera-deploy
cp .env.example .env

Edit .env:

DAKERA_ROOT_API_KEY=change-me-to-a-strong-random-key
MINIO_ROOT_USER=minio-admin
MINIO_ROOT_PASSWORD=change-me-too
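Rather than inventing secrets by hand, generate them. A quick sketch, assuming openssl is on your PATH; paste the output into .env:

```shell
# Print strong random values for the two secrets in .env.
echo "DAKERA_ROOT_API_KEY=$(openssl rand -hex 32)"
echo "MINIO_ROOT_PASSWORD=$(openssl rand -base64 24)"
```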

Start the full stack:

docker compose up -d

This brings up the Dakera server on port 3300 and a MinIO instance that provides persistent object storage.

Update your MCP config to use your server's IP or hostname instead of localhost if running remotely:

{
  "mcpServers": {
    "dakera": {
      "command": "dakera-mcp",
      "env": {
        "DAKERA_API_URL": "http://your-server-ip:3300",
        "DAKERA_API_KEY": "your-api-key"
      }
    }
  }
}

The 83 MCP Tools

Once connected, your assistant has access to Dakera's full API surface through MCP tools. The two you'll use most often are dakera_store (save a memory) and dakera_recall (retrieve memories by query), both of which you exercised in Step 4.

The assistant can discover all 83 tools via MCP's tool-listing capability. You don't need to specify which tools to use; the assistant calls them based on your request.


Troubleshooting

The MCP server doesn't appear in my client

Restart your AI client after editing the config. The MCP server list is read at startup. Verify the path to dakera-mcp is correct with which dakera-mcp.

Recall returns no results

Confirm memories were actually stored by calling the recall API directly (see "Check the API directly" in Step 4) with the same agent_id you stored under. If that returns results, have your assistant retry with a broader query or a higher top_k.

The server exits immediately

Check Docker logs: docker logs dakera. Common causes: port 3300 already in use, or the Docker volume has permission issues.
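To find out whether another process already holds the port, a quick check (lsof ships with most Linux and macOS systems; if it's missing, this will report the port as free, so fall back to ss or netstat):

```shell
# Report whether anything is listening on TCP port 3300.
if lsof -nP -iTCP:3300 -sTCP:LISTEN >/dev/null 2>&1; then
  echo "port 3300 is in use"
else
  echo "port 3300 is free"
fi
```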

Memories don't persist after restart

The single-container setup uses a named Docker volume (dakera-data) for local persistence. Without MinIO, very large memory stores may be truncated. Use the full Docker Compose stack for production.


Next Steps