Onboarding: LangGraph
Add persistent memory to your LangGraph agent. The adapter wraps your compiled graph to capture node transitions and inject past context automatically.
Best path: Gateway (for LLM calls) or Python SDK adapter (for node-level capture)
Time: 2–5 minutes
Reliability: ~99% (Gateway) / ~80% (adapter)
Option A: Gateway (Recommended for LLM calls)
If your LangGraph nodes use the OpenAI SDK for LLM calls, the Gateway is the simplest integration. Change the base URL in your LLM client configuration:
from openai import OpenAI

client = OpenAI(
    base_url="https://api.hippocortex.dev/v1",    # route requests through the Gateway
    api_key="hx_live_...",                        # your HippoCortex key
    default_headers={"X-LLM-API-Key": "sk-..."},  # your upstream LLM provider key
)
Every LLM call gets memory automatically. ~99% reliability with graceful fallback.
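"Graceful fallback" means a request should still complete even when the Gateway cannot be reached; memory injection is skipped, but the call goes through. A minimal sketch of that pattern (the function names and stub clients here are illustrative, not the Gateway's actual internals):

```python
def call_with_fallback(prompt, gateway_call, direct_call):
    """Try the memory-augmented gateway first; fall back to a
    direct provider call if the gateway is unreachable."""
    try:
        return gateway_call(prompt)
    except ConnectionError:
        # Memory injection is skipped, but the request still succeeds.
        return direct_call(prompt)

# Stubs standing in for real clients:
def gateway_call(prompt):
    raise ConnectionError("gateway unreachable")

def direct_call(prompt):
    return f"direct: {prompt}"

print(call_with_fallback("Deploy the API", gateway_call, direct_call))
# → direct: Deploy the API
```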
Option B: SDK Adapter (For node-level capture)
Step 1: Install
pip install hippocortex[langgraph]
Step 2: Set your API key
export HIPPOCORTEX_API_KEY=hx_live_...
Get a key at dashboard.hippocortex.dev if you do not have one.
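If you prefer not to hard-code the key, a common pattern is to resolve it from the environment variable above, preferring an explicitly passed key when one is given. A sketch of that resolution logic (the helper name and error message are illustrative, not SDK API):

```python
import os

def resolve_api_key(explicit_key=None):
    """Prefer an explicitly passed key, then HIPPOCORTEX_API_KEY."""
    key = explicit_key or os.environ.get("HIPPOCORTEX_API_KEY")
    if not key:
        raise RuntimeError(
            "No HippoCortex API key found. Set HIPPOCORTEX_API_KEY "
            "or pass api_key= explicitly."
        )
    return key

# Example: an explicit key wins over the environment.
print(resolve_api_key("hx_live_override"))  # → hx_live_override
```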
Step 3: Wrap your graph
from hippocortex.adapters import langgraph as hx_langgraph
# Build your graph normally
graph = builder.compile()
# Add memory with one line
graph = hx_langgraph.wrap(graph, api_key="hx_live_...")
# Use exactly as before
result = await graph.ainvoke({
    "messages": [{"role": "user", "content": "Deploy the API"}]
})
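Conceptually, `wrap()` behaves like a thin delegating proxy: it forwards calls such as `ainvoke` to the compiled graph while recording what passes through. A simplified sketch of that idea (the class, its fields, and the stand-in graph are hypothetical, not the adapter's real implementation):

```python
import asyncio

class MemoryWrappedGraph:
    """Delegates to a compiled graph, capturing inputs and outputs."""

    def __init__(self, graph, api_key):
        self._graph = graph
        self._api_key = api_key
        self.captured = []  # events that would be sent to HippoCortex

    async def ainvoke(self, state, **kwargs):
        self.captured.append(("input", state))
        result = await self._graph.ainvoke(state, **kwargs)
        self.captured.append(("output", result))
        return result

# Stand-in for a compiled LangGraph graph:
class FakeGraph:
    async def ainvoke(self, state, **kwargs):
        return {"messages": state["messages"]
                + [{"role": "assistant", "content": "done"}]}

async def main():
    graph = MemoryWrappedGraph(FakeGraph(), api_key="hx_live_...")
    result = await graph.ainvoke(
        {"messages": [{"role": "user", "content": "Deploy the API"}]}
    )
    return graph.captured, result

captured, result = asyncio.run(main())
print(len(captured))  # → 2
```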
Step 4: Verify
Check dashboard.hippocortex.dev to see captured events. You should see node transitions, tool calls, and final outputs.
What gets captured
- User messages (extracted from state)
- Node transitions (during astream)
- Final assistant responses
- Tool invocations within nodes
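Extracting user messages "from state" typically means scanning the graph's message list for user-role entries. A rough sketch of what that extraction could look like (the helper name is illustrative, not the adapter's API):

```python
def extract_user_messages(state):
    """Pull user-role messages out of a LangGraph-style state dict."""
    return [
        m["content"]
        for m in state.get("messages", [])
        if m.get("role") == "user"
    ]

state = {"messages": [
    {"role": "user", "content": "Deploy the API"},
    {"role": "assistant", "content": "Deploying now."},
]}
print(extract_user_messages(state))  # → ['Deploy the API']
```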
Auto-compile
Knowledge compilation runs automatically after every 10 captures. You do not need to call learn() manually.
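The every-10-captures cadence can be pictured as a counter that fires a compile callback on each tenth event. A toy model of that behavior (not the SDK's actual scheduling code):

```python
class CaptureBuffer:
    """Triggers a compile callback after every `threshold` captures."""

    def __init__(self, on_compile, threshold=10):
        self._on_compile = on_compile
        self._threshold = threshold
        self._count = 0

    def capture(self, event):
        self._count += 1
        if self._count % self._threshold == 0:
            self._on_compile()

compiles = []
buf = CaptureBuffer(on_compile=lambda: compiles.append("compiled"))
for i in range(25):
    buf.capture({"event": i})
print(len(compiles))  # → 2 (after the 10th and 20th captures)
```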