Onboarding: AutoGen
Add persistent memory to your AutoGen agents. The adapter captures multi-agent conversations and injects past context through AutoGen's reply hooks.
Best path: Gateway (for LLM calls) or Python SDK adapter (for agent-level capture)
Time: 2–5 minutes
Reliability: ~99% (Gateway) / ~80% (adapter)
Option A: Gateway (Recommended for LLM calls)
If your AutoGen agents use OpenAI or another provider for LLM calls, the Gateway is the simplest integration:
from openai import OpenAI
client = OpenAI(
base_url="https://api.hippocortex.dev/v1",
api_key="hx_live_...",
default_headers={"X-LLM-API-Key": "sk-..."},
)
Every LLM call gets memory automatically. ~99% reliability with graceful fallback.
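If your agents make LLM calls through AutoGen's own llm_config rather than a raw OpenAI client, the same Gateway routing can be expressed there. A sketch, assuming your AutoGen version forwards base_url and default_headers from the config list to the underlying OpenAI client (all key values are placeholders, not real credentials):

```python
# Gateway-routed LLM config for AutoGen agents.
# base_url points at the Gateway; the provider key travels in the
# X-LLM-API-Key header, as in the OpenAI client example above.
gateway_llm_config = {
    "config_list": [
        {
            "model": "gpt-4o",                             # any Gateway-supported model
            "base_url": "https://api.hippocortex.dev/v1",  # Gateway endpoint
            "api_key": "hx_live_...",                      # HippoCortex key (placeholder)
            "default_headers": {"X-LLM-API-Key": "sk-..."},
        }
    ]
}

# Pass it to an agent as usual:
# assistant = AssistantAgent("assistant", llm_config=gateway_llm_config)
```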
Option B: SDK Adapter (For agent-level capture)
Step 1: Install
pip install hippocortex[autogen]
Step 2: Set your API key
export HIPPOCORTEX_API_KEY=hx_live_...
Get a key at dashboard.hippocortex.dev if you do not have one.
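A quick way to confirm the key is actually visible to your Python process before wiring up agents (require_api_key is a helper written for this check, not part of the SDK):

```python
import os

def require_api_key(env_var: str = "HIPPOCORTEX_API_KEY") -> str:
    """Return the HippoCortex key from the environment, failing fast if unset."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(
            f"{env_var} is not set; get a key at dashboard.hippocortex.dev"
        )
    return key
```

Failing fast here beats discovering a missing key on the first capture attempt mid-conversation.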
Step 3: Wrap your agents
from autogen import AssistantAgent, UserProxyAgent
from hippocortex.adapters import autogen as hx_autogen
# Create your agents normally
assistant = AssistantAgent("assistant", llm_config={...})
user_proxy = UserProxyAgent("user")
# Add memory with one line
assistant = hx_autogen.wrap(assistant, api_key="hx_live_...")
# Use exactly as before
user_proxy.initiate_chat(assistant, message="Deploy to staging")
Step 4: Verify
Check dashboard.hippocortex.dev to see captured conversations between agents.
What gets captured
- Inter-agent messages
- Function calls and results
- Conversation turns
- Final outcomes
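As a rough mental model of what one captured item looks like, the categories above map to records roughly like the following. The field names are illustrative only, not the SDK's actual schema; check the dashboard for real captures:

```python
# Hypothetical shape of one captured conversation turn.
# Field names are illustrative; the real schema may differ.
capture = {
    "kind": "conversation_turn",   # also: inter-agent message, function call, outcome
    "sender": "user",
    "recipient": "assistant",
    "content": "Deploy to staging",
    "function_call": None,         # populated for function calls and results
}
```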
Note on multi-agent setups
If you have multiple agents, wrap only the primary assistant to avoid duplicate captures. The reply hook captures both sides of each conversation turn.
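To see why wrapping both agents duplicates captures: a reply hook on one agent already observes both the incoming message and the generated reply. A toy sketch of that mechanism (plain Python, not HippoCortex or AutoGen internals):

```python
captured = []

class Agent:
    """Minimal stand-in for an AutoGen agent with reply hooks."""
    def __init__(self, name):
        self.name = name
        self.reply_hooks = []

    def generate_reply(self, message, sender):
        reply = f"{self.name} ack: {message}"
        for hook in self.reply_hooks:
            hook(sender.name, self.name, message, reply)
        return reply

def memory_hook(sender, recipient, message, reply):
    # A single hook records BOTH sides of the turn...
    captured.append((sender, recipient, message))
    captured.append((recipient, sender, reply))

assistant = Agent("assistant")
user = Agent("user")
assistant.reply_hooks.append(memory_hook)  # wrap only the primary assistant

assistant.generate_reply("Deploy to staging", sender=user)
# ...so hooking the user agent as well would record each turn twice.
```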
Auto-compile
Knowledge compilation runs automatically after every 10 captures. You do not need to call learn() manually.
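The likely shape of this behavior is a simple counter-based trigger, sketched below; the real threshold handling and compilation step live inside the SDK:

```python
COMPILE_EVERY = 10  # compilation threshold stated above

class CaptureBuffer:
    """Toy model of capture counting with automatic compilation."""
    def __init__(self):
        self.count = 0
        self.compiles = 0

    def capture(self, record):
        self.count += 1
        # Auto-compile fires on every 10th capture; no manual learn() needed.
        if self.count % COMPILE_EVERY == 0:
            self.compiles += 1  # stand-in for the real knowledge-compilation step

buf = CaptureBuffer()
for i in range(25):
    buf.capture({"turn": i})
# 25 captures trigger compilation twice (at captures 10 and 20).
```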