
What's Open in Dakera's Open Core — SDKs, CLI, and MCP Are MIT. The Engine Is Not.

We keep hearing "Dakera is open source" used in ways that aren't accurate. This post is the definitive breakdown: what's MIT-licensed, what's proprietary, and why we made that call.


The phrase "open core" gets used loosely. So does "open source." We want to be precise about what Dakera actually is, because building on memory infrastructure means you need to understand exactly what you're depending on.

The short version: Dakera is open at the edges, closed at the core. The integration layer — SDKs, CLI, MCP server — is MIT-licensed and on GitHub. The memory engine and dashboard are proprietary. Here's what that means in practice.

The accurate breakdown

Open — MIT Licensed

  • Python SDK (dakera-py) — pip install dakera · Apache 2.0 / MIT compatible
  • TypeScript SDK (dakera-js) — npm install dakera
  • Go SDK (dakera-go) — go get github.com/dakera-ai/dakera-go
  • Rust SDK (dakera-rs) — add dakera to Cargo.toml
  • CLI (dakera-cli) — shell-scriptable admin and query interface
  • MCP Server (dakera-mcp) — 83 tools for Claude, Cursor, Windsurf

Closed — Proprietary

  • Memory Engine (dakera) — the Rust server: HNSW+BM25 retrieval, importance decay, knowledge graphs, AES-256, Raft. Provided as a binary and Docker image. Source is not public.
  • Dashboard (dakera-dashboard) — the web UI for monitoring agents, sessions, and memory health. Proprietary.

What "open at the edges" actually means

Everything a developer needs to integrate Dakera into their application is MIT-licensed. You can take any of the SDKs, fork them, embed them in a commercial product, redistribute them — no royalties, no copyleft obligations, nothing beyond the standard MIT notice. The same goes for the CLI and MCP server.

This is a deliberate choice. We want zero friction at the integration layer. If you're building an agent that calls dakera.store(), you should be able to ship that agent with confidence that the dependency you're taking won't change licensing on you or require a call home.
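To make the call pattern above concrete, here is a minimal sketch of that integration layer. The real client ships via pip install dakera; since only store() appears in this post, the stub below stands in, and its other names (StubDakeraClient, search, the importance parameter) are illustrative assumptions, not the SDK's actual API:

```python
# Hypothetical sketch — a minimal in-memory stand-in for the MIT-licensed
# Python SDK. Method and parameter names beyond store() are assumptions.
from dataclasses import dataclass, field

@dataclass
class StubDakeraClient:
    """Mimics the shape of an agent-memory client; not the real SDK."""
    _memories: list = field(default_factory=list)

    def store(self, content: str, importance: float = 0.5) -> int:
        """Persist one memory; returns its index as a stand-in for an ID."""
        self._memories.append({"content": content, "importance": importance})
        return len(self._memories) - 1

    def search(self, query: str, limit: int = 5) -> list:
        """The real engine does hybrid HNSW+BM25 scoring with importance
        decay; this stub just substring-matches and ranks by importance."""
        hits = [m for m in self._memories
                if query.lower() in m["content"].lower()]
        return sorted(hits, key=lambda m: m["importance"], reverse=True)[:limit]

client = StubDakeraClient()
client.store("User prefers dark mode", importance=0.9)
client.store("Session started at 09:00", importance=0.2)
top_hits = client.search("dark mode")
```

The point of the sketch is the dependency shape: an agent only ever talks to a small, MIT-licensed client object, so swapping, forking, or vendoring that layer never touches the engine.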

What "closed at the core" means — and why

The memory engine — the Rust server that does the actual retrieval, decay, and graph work — is proprietary. That's where our performance advantage comes from. The hybrid HNSW+BM25 scoring, the temporal-inference prototypes, the importance decay algorithm — these are the result of months of benchmark-driven iteration. We keep the source closed because it's the core of what we're building as a business.

What isn't closed: your ability to run it. Dakera ships as a single binary and a Docker image that you deploy on your own infrastructure. You don't need to call our servers. Your data doesn't leave your infrastructure. What's closed is the source code, not the binary.

You can self-host Dakera with no dependency on our infrastructure: pull the Docker image, set DAKERA_API_KEY, and you have a fully operational memory engine. The binary is yours to run; only the source stays with us.
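A self-host invocation might look like the following. This is a deployment sketch, not the documented command: only the Docker image and the DAKERA_API_KEY variable come from this post — the image name, port, and volume path are assumptions you would replace with the values from the install docs:

```shell
# Hypothetical deployment sketch — image name, port, and volume path
# are assumptions; DAKERA_API_KEY is the variable named in this post.
docker run -d \
  --name dakera \
  -e DAKERA_API_KEY="your-key-here" \
  -p 8080:8080 \
  -v dakera-data:/var/lib/dakera \
  dakera-ai/dakera:latest
```

Everything here runs locally: the container binds a local port and persists to a local volume, so no traffic leaves your infrastructure.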

What this model is not

A few patterns we deliberately avoided:

  • Not "open veneer." We're not publishing a thin client library as "open source" while keeping the server that does the real work locked behind a paid API. The engine itself is a self-hosted binary you can run in isolation.
  • Not fully open source. The engine source is not public. If that's a hard requirement for your use case, Dakera may not be the right choice.
  • Not license-bait. The SDK/CLI/MCP licenses won't change. MIT means MIT, indefinitely.

The one-liner

Open where you integrate. Closed where performance lives.

The SDKs, CLI, and MCP server are the integration layer — they're yours. The engine is the product — it's ours. You deploy it and you run it, but the source that makes it fast stays with us.

Deploy Dakera on your infrastructure

Self-host the memory engine in under a minute. No external dependencies.