Self-hosted. Free forever.
Dakera is open core and self-hosted. Run the full memory server on your own infrastructure — no usage fees, no API keys to manage, no vendor lock-in.
Community
Everything you need to add persistent memory to your AI agents. Self-hosted with full control.
Custom
For teams that need dedicated support, custom deployments, or enterprise-grade SLAs.
Frequently asked questions
Is Dakera really free?
Yes. The memory engine is proprietary but free to self-host — no usage fees, no time limits. SDKs, CLI, and MCP server are MIT-licensed and open source on GitHub.
What does "open core" mean?
The integration layer — SDKs, CLI, and MCP server — is MIT-licensed and open source. The memory engine is proprietary but free to self-host. Enterprise features (like advanced auth, multi-tenant isolation, and managed deployment) may be offered as paid additions in the future.
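For instance, the MIT-licensed MCP server can be registered with any MCP-compatible client through a standard `mcpServers` entry. The command name `dakera-mcp` and the arguments below are illustrative assumptions, not a documented invocation — check the GitHub repository for the actual binary name and flags:

```json
{
  "mcpServers": {
    "dakera": {
      "command": "dakera-mcp",
      "args": ["--server-url", "http://localhost:8080"]
    }
  }
}
```

Point the server URL at wherever your self-hosted Dakera instance is running.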
Do I need to pay for embeddings?
No. Dakera ships with a built-in, ONNX-based embedding model, so you never need an external embedding API and never pay for embedding tokens.
Can I use Dakera in production?
Absolutely. Dakera is designed for production workloads: it scores 87.6% on the LoCoMo benchmark, ships as a single Rust binary with zero external dependencies, and is battle-tested in multi-agent systems.
Where does my data live?
On your servers. Dakera is self-hosted, so your data never leaves your infrastructure. Storage uses local files (RocksDB plus HNSW indexes); no external database is required.
How do I get support?
Community users get support via GitHub Issues and Discussions. Enterprise customers get a private support channel with guaranteed response times.