AutoGen Integration
Persistent, semantically recalled memory for AutoGen agents. Your agents retain what they learn across sessions and restarts. Dakera handles embedding, storage, and retrieval server-side.
Package: autogen-dakera (GitHub)

Quick Start
1. Run Dakera
docker run -d \
--name dakera \
-p 3300:3300 \
-e DAKERA_ROOT_API_KEY=dk-mykey \
ghcr.io/dakera-ai/dakera:latest
curl http://localhost:3300/health
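The same health check can be done from Python with the standard library alone; this is a minimal sketch, with the `/health` endpoint and port taken from the docker command above:

```python
import urllib.request

DAKERA_URL = "http://localhost:3300"

def check_health(base_url: str) -> bool:
    """Return True if the Dakera server responds on /health."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=5) as resp:
            return resp.status == 200
    except OSError:
        # Connection refused, DNS failure, or timeout
        return False

if check_health(DAKERA_URL):
    print("Dakera is up")
else:
    print("Dakera is not reachable; check the container logs")
```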
2. Install
# Core + integration
pip install autogen-dakera
# With AutoGen (if not already installed)
pip install "autogen-dakera[autogen]"
Requirements: Python ≥ 3.10, a running Dakera server.
3. Add memory to your agent
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_dakera import DakeraMemory

memory = DakeraMemory(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    agent_id="my-agent",
)

model_client = OpenAIChatCompletionClient(model="gpt-4o")

agent = AssistantAgent(
    name="assistant",
    model_client=model_client,
    memory=[memory],
)

# Agent now persists what it learns across sessions
Configuration
| Parameter | Type | Default | Description |
|---|---|---|---|
| api_url | str | — | Dakera server URL (e.g. http://localhost:3300) |
| api_key | str | "" | API key, matching DAKERA_ROOT_API_KEY on the server |
| agent_id | str | — | Logical identifier for this agent's memory |
| min_importance | float | 0.0 | Minimum importance score for recalled memories |
| top_k | int | 5 | Number of memories to surface per query |
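Putting the table together, a configuration tuned for stricter recall might look like the following sketch (the agent_id and threshold values here are illustrative, not defaults):

```python
# Illustrative settings; parameter names follow the table above
memory_kwargs = dict(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    agent_id="support-bot",   # hypothetical agent name
    min_importance=0.5,       # exclude low-importance memories from recall
    top_k=10,                 # surface more context per query
)

# memory = DakeraMemory(**memory_kwargs)
print(memory_kwargs["top_k"])
```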
Multi-agent team with shared memory
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.conditions import MaxMessageTermination
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_dakera import DakeraMemory

async def main():
    shared_memory = DakeraMemory(
        api_url="http://localhost:3300",
        api_key="dk-mykey",
        agent_id="research-team",
        top_k=8,
    )

    model_client = OpenAIChatCompletionClient(model="gpt-4o")

    researcher = AssistantAgent(
        name="researcher",
        model_client=model_client,
        memory=[shared_memory],
        system_message="You are a research expert. Remember key findings.",
    )
    analyst = AssistantAgent(
        name="analyst",
        model_client=model_client,
        memory=[shared_memory],
        system_message="You are a data analyst. Build on what the researcher found.",
    )

    team = RoundRobinGroupChat(
        [researcher, analyst],
        termination_condition=MaxMessageTermination(max_messages=6),
    )

    # First session — agents learn and store
    result = await team.run(task="Research AI memory architectures")
    print(result.messages[-1].content)

    # Later session — agents recall prior research
    result = await team.run(task="What do we know about transformer memory?")
    print(result.messages[-1].content)

asyncio.run(main())
How it works
- During conversation, AutoGen calls DakeraMemory.add() with new messages
- Dakera embeds the content server-side and stores it with a semantic vector
- Before each agent response, AutoGen calls DakeraMemory.query(); Dakera performs hybrid search and returns the most relevant past memories
- Memories are injected into the agent's context automatically
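The add/query cycle above can be illustrated with a toy in-memory stand-in. The keyword-overlap ranking below is only a conceptual sketch of the protocol sequence, not Dakera's actual server-side hybrid search:

```python
from dataclasses import dataclass, field

@dataclass
class ToyMemory:
    """Trivial stand-in for the store Dakera maintains server-side."""
    entries: list[str] = field(default_factory=list)

    def add(self, content: str) -> None:
        # AutoGen calls this with new messages during conversation
        self.entries.append(content)

    def query(self, text: str, top_k: int = 5) -> list[str]:
        # Stand-in for hybrid search: rank by naive keyword overlap
        words = set(text.lower().split())
        scored = sorted(
            self.entries,
            key=lambda e: len(words & set(e.lower().split())),
            reverse=True,
        )
        return scored[:top_k]

mem = ToyMemory()
mem.add("Transformer memory uses attention over past tokens")
mem.add("User prefers concise answers")
print(mem.query("transformer memory", top_k=1))
```

The real integration performs these two calls for you: AutoGen invokes query() before each response and injects the results into the agent's context.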