# CrewAI Integration
Persistent, semantically-recalled memory for CrewAI agents. Your crews remember everything — across sessions, across restarts. Dakera handles embedding, storage, and retrieval server-side.
Package: `crewai-dakera` · GitHub →

## Quick Start
### 1. Run Dakera
Dakera is a self-hosted memory server. Spin it up with Docker:
```shell
docker run -d \
  --name dakera \
  -p 3300:3300 \
  -e DAKERA_ROOT_API_KEY=dk-mykey \
  ghcr.io/dakera-ai/dakera:latest

# Verify it's running
curl http://localhost:3300/health
```
### 2. Install
```shell
# Core + integration
pip install crewai-dakera

# With CrewAI (if not already installed)
pip install "crewai-dakera[crewai]"
```
Requirements: Python ≥ 3.10, a running Dakera server.
### 3. Add memory to your crew
```python
from crewai import Crew, Agent, Task
from crewai.memory import LongTermMemory
from crewai_dakera import DakeraStorage

storage = DakeraStorage(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    agent_id="my-crew",
)

crew = Crew(
    agents=[...],
    tasks=[...],
    memory=True,
    long_term_memory=LongTermMemory(storage=storage),
)

result = crew.kickoff(inputs={"topic": "AI trends"})
```
Your crew now persists everything it learns across runs.
## Configuration
| Parameter | Type | Default | Description |
|---|---|---|---|
| `api_url` | `str` | — | Dakera server URL (e.g. `http://localhost:3300`) |
| `api_key` | `str` | `""` | API key set via `DAKERA_ROOT_API_KEY` |
| `agent_id` | `str` | — | Logical identifier for this crew's memory namespace |
| `min_importance` | `float` | `0.0` | Minimum importance score for recalled memories |
| `top_k` | `int` | `5` | Number of memories to surface per turn |
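Recall behavior is tuned at construction time. A minimal sketch, assuming the constructor accepts the keyword parameters listed in the table above (the values shown are illustrative, not recommended defaults):

```python
from crewai_dakera import DakeraStorage

# Filter out low-importance recalls and surface more memories per turn.
storage = DakeraStorage(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    agent_id="my-crew",
    min_importance=0.3,  # only recall memories scored >= 0.3
    top_k=10,            # surface up to 10 memories per turn
)
```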
### Using environment variables
```python
import os

from crewai_dakera import DakeraStorage

storage = DakeraStorage(
    api_url=os.environ["DAKERA_URL"],
    api_key=os.environ["DAKERA_API_KEY"],
    agent_id="research-crew",
)
```
## Full example: research crew
```python
from crewai import Agent, Task, Crew, Process
from crewai.memory import LongTermMemory
from crewai_dakera import DakeraStorage

dakera = DakeraStorage(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    agent_id="research-crew",
)

researcher = Agent(
    role="Senior Researcher",
    goal="Uncover groundbreaking insights in {topic}",
    backstory="An expert researcher with decades of experience.",
    verbose=True,
)

writer = Agent(
    role="Content Writer",
    goal="Craft compelling reports based on research findings",
    backstory="A skilled writer who turns complex ideas into clear prose.",
    verbose=True,
)

research_task = Task(
    description="Research the latest developments in {topic}",
    expected_output="A detailed research report",
    agent=researcher,
)

write_task = Task(
    description="Write a blog post based on the research",
    expected_output="A polished 500-word article",
    agent=writer,
)

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    process=Process.sequential,
    memory=True,
    long_term_memory=LongTermMemory(storage=dakera),
    verbose=True,
)

# First run — learns and stores findings
result = crew.kickoff(inputs={"topic": "quantum computing"})
print(result.raw)

# Second run — recalls prior research automatically
result = crew.kickoff(inputs={"topic": "quantum computing advances"})
print(result.raw)
```
## How it works
- After each task, CrewAI calls `DakeraStorage.save()` with the result.
- Dakera embeds the content server-side and stores it with a semantic vector.
- Before the next task, CrewAI calls `DakeraStorage.search()`; Dakera performs hybrid search (vector + BM25) and returns the most relevant past memories.
- Memories decay gracefully over time based on access patterns: frequently accessed memories stay prominent.
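The hybrid-search step can be pictured as blending two rankings of the same candidates. The toy sketch below is purely illustrative and is not Dakera's actual scoring code: `hybrid_rank` is a hypothetical helper that min-max normalizes a vector-similarity score and a BM25 score per candidate, then combines them with a fixed weight.

```python
def hybrid_rank(candidates, alpha=0.5):
    """Rank memories by a weighted blend of vector and BM25 scores.

    candidates: list of (memory_id, vector_score, bm25_score) tuples.
    alpha: weight on the vector score; (1 - alpha) goes to BM25.
    Illustrative only -- not Dakera's real scoring formula.
    """
    def normalize(scores):
        # Min-max normalize so both score families share a 0..1 scale.
        lo, hi = min(scores), max(scores)
        if hi == lo:
            return [1.0] * len(scores)
        return [(s - lo) / (hi - lo) for s in scores]

    vec = normalize([c[1] for c in candidates])
    bm25 = normalize([c[2] for c in candidates])
    blended = [
        (mid, alpha * v + (1 - alpha) * b)
        for (mid, _, _), v, b in zip(candidates, vec, bm25)
    ]
    # Highest blended score first.
    return sorted(blended, key=lambda x: x[1], reverse=True)

ranked = hybrid_rank([
    ("m1", 0.92, 3.1),   # strong vector match
    ("m2", 0.40, 9.8),   # strong keyword (BM25) match
    ("m3", 0.15, 0.5),   # weak on both
])
print([mid for mid, _ in ranked])  # → ['m2', 'm1', 'm3']
```

The point of blending is visible in the output: `m2` wins on keyword relevance even though `m1` is the closest vector match, so a memory can surface through either signal.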