"Open core" has become one of those terms that means whatever the company using it wants it to mean. Some projects slap an open-source license on a read-only mirror and call it a day. Others open-source everything except the billing page. We want to be precise about what Dakera opens, what it keeps closed, and why.
This post is the definitive reference. If you're evaluating Dakera and need to know exactly what you can inspect, fork, and redistribute — this is it.
The Exact Breakdown
The short version: Everything you touch as a developer — SDKs, CLI, MCP server, deployment configs, benchmarks — is MIT-licensed and open source. The engine binary that runs on your infrastructure is proprietary but free to self-host forever.
Why SDKs and Tooling Are Open
Client libraries are the interface between your code and Dakera. There are three reasons these must be open:
Trust through inspection. When you install dakera-py or dakera-js into your application, you're adding a dependency that handles your data. You should be able to read every line. You should be able to verify that the SDK doesn't phone home, doesn't collect telemetry, doesn't buffer data anywhere unexpected. The MIT license guarantees you can always do this — not just today, but permanently.
Contributions and ecosystem velocity. SDK bugs affect every user. When someone hits an edge case in the Go SDK's connection pooling or finds a race condition in the Rust client, they can open a PR and fix it the same day. We've merged community fixes within hours of being reported. A proprietary SDK would bottleneck every fix through our team's sprint cycle.
Forkability as insurance. If Dakera disappears tomorrow, every SDK continues to work. You can fork dakera-py, point it at a compatible API, and keep running. This isn't hypothetical — it's the safety net that lets teams adopt infrastructure dependencies with confidence.
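What "point it at a compatible API" means in practice is that the client's endpoint is just configuration. A minimal sketch — the real dakera-py surface may differ, and every name here is illustrative, not the actual SDK:

```python
# Hypothetical sketch, NOT the real dakera-py API. The point: an MIT-licensed
# client can be forked and re-pointed at any compatible endpoint by changing
# a single constructor argument.
from urllib.parse import urljoin

class MemoryClient:
    """Minimal client shape with a swappable base URL."""

    def __init__(self, base_url="http://localhost:3300"):
        # Normalize so relative paths always resolve under base_url.
        self.base_url = base_url.rstrip("/") + "/"

    def url_for(self, path):
        # Every request resolves against base_url, so a fork only needs to
        # change the default above to target a compatible server.
        return urljoin(self.base_url, path.lstrip("/"))

client = MemoryClient("https://my-fork.internal:3300")
client.url_for("/v1/memories")  # resolves against the fork's endpoint
```

Because the whole client is readable source, verifying that this is the *only* place requests are routed takes minutes, not a support ticket.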
The MCP Server Specifically
The MCP (Model Context Protocol) server deserves special mention. MCP is the protocol that AI tools — Claude, Cursor, Windsurf, and others — use to interact with external services. When an AI agent connects to dakera-mcp, it's executing code with access to your memory store. That code must be auditable. Full stop.
We open-sourced the MCP server under MIT not because it was easy (it required separating protocol handling from engine internals) but because the alternative — asking developers to trust a black box sitting between their AI tools and their data — is fundamentally incompatible with how security-conscious teams operate.
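Concretely, wiring dakera-mcp into an AI client usually means a few lines of config. The `mcpServers` shape below follows common MCP client configs (Claude Desktop uses this layout); the exact command and arguments for launching dakera-mcp are assumptions — check the repo's README for the real invocation:

```json
{
  "mcpServers": {
    "dakera": {
      "command": "npx",
      "args": ["-y", "dakera-mcp"]
    }
  }
}
```

Because the server is MIT-licensed, you can read exactly what that command launches before you ever grant it access to your memory store.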
Deployment Configs and Benchmarks
The Helm charts (dakera-helm) and Docker Compose files (dakera-deploy) are MIT because infrastructure-as-code should be modifiable. If you need to add a sidecar, change resource limits, or integrate with your existing monitoring stack, you shouldn't need to reverse-engineer our deployment assumptions.
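Modifiability here means standard Helm values overrides rather than patching chart files. A hedged sketch — the key names are assumptions, so check dakera-helm's values.yaml for the real ones:

```yaml
# values.override.yaml -- key names are illustrative; see dakera-helm's
# values.yaml for the actual schema.
engine:
  resources:
    limits:
      cpu: "2"
      memory: 4Gi
  extraContainers:
    # Example of adding your own sidecar, e.g. a log forwarder
    - name: log-forwarder
      image: fluent/fluent-bit:3.0
```

Applied the usual way: `helm install dakera ./dakera-helm -f values.override.yaml`.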
The benchmark suite (dakera-bench) is MIT because claims without reproducible methodology are marketing, not engineering. Anyone can run our benchmarks, verify our numbers, or extend them with their own test cases. When we publish benchmark results, you can check our work.
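"Check our work" is possible because scoring is ordinary, auditable code. An illustrative scorer of the kind dakera-bench could contain — the field names and category labels here are assumptions, not the real schema:

```python
# Illustrative only: field names and categories are assumptions, not the
# real dakera-bench schema. The point is that scoring logic is plain code
# anyone can read, rerun, and extend with their own test cases.
from collections import defaultdict

def score(results):
    """Per-category accuracy for rows of {'category', 'expected', 'actual'}."""
    totals, correct = defaultdict(int), defaultdict(int)
    for row in results:
        totals[row["category"]] += 1
        correct[row["category"]] += row["actual"] == row["expected"]
    return {cat: correct[cat] / totals[cat] for cat in totals}

sample = [
    {"category": "simple", "expected": "2019", "actual": "2019"},
    {"category": "simple", "expected": "Berlin", "actual": "Munich"},
    {"category": "temporal", "expected": "before", "actual": "before"},
]
rates = score(sample)  # {'simple': 0.5, 'temporal': 1.0}
```

Extending the suite is just appending rows with your own categories; the scorer doesn't need to change.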
Why the Engine Is Closed
The memory engine — the binary that handles storage, retrieval, indexing, ranking, temporal inference, knowledge graph traversal, and decay — is proprietary. Here's the honest reasoning:
Quality control. Memory systems are deceptively complex. A subtle bug in temporal ranking or entity extraction doesn't crash the system — it returns slightly wrong results that compound over time. We need complete control over the engine's release cadence, testing requirements, and quality gates. Our internal CI runs 1,540 benchmark questions across three difficulty categories before every release. That level of rigor is easier to maintain with a single team owning the codebase.
Competitive moat. The engine is years of work on hybrid retrieval, BM25 ranking, HNSW indexing, cross-encoder reranking, ML classification, and temporal inference. Open-sourcing it would let well-funded competitors fork the engine, strip attribution, and offer it as a managed service. We're a small team. We need the engine to remain our differentiation.
Sustainable business model. Open core works when the open parts drive adoption and the closed parts drive revenue. If everything is open, the only business model is support contracts or managed hosting with no differentiation. By keeping the engine proprietary, we can offer a future Dakera Cloud that provides genuine value (managed operations, automatic upgrades, global replication) without competing against forks of our own code.
Important distinction: Proprietary does not mean paid. The engine binary is free to download, free to self-host, and free to use in production. There are no usage limits, no seat restrictions, no feature gates between "free" and "enterprise" tiers. You get the full engine.
What "Free to Self-Host" Actually Means
We use this phrase deliberately and want to be precise about what it includes:
- No usage limits. No cap on memories stored, queries per second, namespaces created, or agents connected. The engine runs at whatever scale your hardware supports.
- No telemetry. The engine binary does not phone home. It does not report usage statistics. It does not check license validity against a remote server. It runs fully air-gapped if you want.
- No time bombs. There is no expiry date, no "trial period," no degradation after N days. The binary you download today works forever.
- No feature gating. Every feature in the engine — hybrid search, knowledge graphs, temporal inference, ML classification, entity extraction, decay systems — is available in the self-hosted binary. We do not hold back capabilities for a paid tier.
- Updates are optional. You can run any version indefinitely. We publish new releases regularly, but upgrading is always your choice on your timeline.
The deployment is straightforward:
```shell
# Pull and run — that's it
docker run -d -p 3300:3300 ghcr.io/dakera-ai/dakera:latest

# Or use our Docker Compose for production setups
git clone https://github.com/dakera-ai/dakera-deploy
cd dakera-deploy
docker compose up -d
```
No license key to paste. No registration form. No "contact sales for production use." Pull the image, run it, use it.
How This Compares to Alternatives
The AI memory space has several players, each with a different licensing approach. Here's how we see the landscape:
Mem0 started fully open source (Apache 2.0 for the core), which built rapid adoption. Their business model relies on their managed cloud platform for revenue. The downside: anyone can fork and compete on hosting, which creates pressure to add proprietary features over time. Credit to them for the open approach — it worked for community building. The risk is long-term sustainability without a clear moat.
Zep initially had open-source components but has moved increasingly toward a proprietary, cloud-first model. Their Community Edition became more limited over time while the paid Cloud product gained features. This is the pattern that erodes developer trust — start open, gradually close. We've chosen to be explicit from day one about what's open and what isn't, so there's never a "rug pull" moment.
Letta (formerly MemGPT) takes a research-oriented open approach: its core framework is open source. That's a different trade-off — great for experimentation and academic use, but production hardening and enterprise features often require the commercial offering.
Our position is deliberately between these extremes:
- We don't pretend everything is open (and risk closing things later)
- We don't hide the client-facing code (and ask for blind trust)
- We don't gate features behind payment (and force "contact sales" conversations)
- We're transparent from day one about exactly where the line is drawn
The Commitment
Everything currently MIT-licensed will remain MIT-licensed. We will not relicense open repositories to a more restrictive license. We will not move features from open repos into proprietary ones. We will not introduce telemetry or phone-home behavior into the engine binary.
If we ever launch Dakera Cloud (managed hosting), it will be a convenience layer — not a capability gate. Self-hosted Dakera will always have feature parity with the cloud engine. The cloud product's value will be operational: managed upgrades, automated backups, global replication, uptime SLAs. Not withheld features.
This is the model we believe is honest: open at the edges where developers need trust and control, closed at the core where we need sustainability and quality. No games, no gradual enclosure, no bait-and-switch. The line is drawn here, publicly, and it stays.
Questions about licensing? Every open-source repository includes its LICENSE file (MIT). The engine's usage terms are in the self-hosting documentation. If anything is unclear, open an issue on any of our repos or reach us at [email protected].