Onboarding: Anthropic
Add persistent memory to your Anthropic Claude-powered application. Works with the Anthropic Python and TypeScript SDKs.
Best path: Gateway (change one URL) or SDK (one import)
Time: 2 minutes
Reliability: ~99% (Gateway) / ~95% (SDK)
Option A: Gateway (Recommended)
The Gateway speaks the OpenAI chat completions protocol. To use it with Anthropic models, you point the Gateway at Anthropic's API via two forwarding headers, as shown below.
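Concretely, a request to the Gateway carries a standard OpenAI-style chat completions body; a minimal payload (matching the Python example below) looks like:

```json
{
  "model": "claude-sonnet-4-20250514",
  "messages": [{"role": "user", "content": "Hello!"}]
}
```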
Step 1: Get an API key
Sign up at dashboard.hippocortex.dev. Copy your API key (hx_live_...).
Step 2: Use the OpenAI SDK pointed at Anthropic
from openai import OpenAI

client = OpenAI(
    base_url="https://api.hippocortex.dev/v1",  # Hippocortex Gateway
    api_key="hx_live_...",                      # your Hippocortex key
    default_headers={
        "X-LLM-API-Key": "sk-ant-...",          # your Anthropic key
        "X-LLM-Base-URL": "https://api.anthropic.com",
    },
)

response = client.chat.completions.create(
    model="claude-sonnet-4-20250514",
    messages=[{"role": "user", "content": "Hello!"}],
)
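To keep keys out of source control, the forwarding headers can be factored into a small helper — a sketch using the header names from Step 2; the helper name is ours, and it assumes your Anthropic key is in `ANTHROPIC_API_KEY` (the Anthropic SDK's conventional variable):

```python
import os

def gateway_headers():
    """Forwarding headers that route Gateway requests to Anthropic's API."""
    return {
        "X-LLM-API-Key": os.environ["ANTHROPIC_API_KEY"],
        "X-LLM-Base-URL": "https://api.anthropic.com",
    }
```

Pass `default_headers=gateway_headers()` and `api_key=os.environ["HIPPOCORTEX_API_KEY"]` when constructing the client instead of hardcoding either key.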
Step 3: Verify
Send a request, then check dashboard.hippocortex.dev to confirm the captured event appears.
Option B: SDK
The Hippocortex SDK supports the native Anthropic SDK through auto-instrumentation.
Step 1: Install
pip install hippocortex==1.2.1
# or
npm install @hippocortex/sdk@1.2.1
Step 2: Set your API key
export HIPPOCORTEX_API_KEY=hx_live_...
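Option B also needs your Anthropic key: the native Anthropic SDK reads `ANTHROPIC_API_KEY` from the environment. A quick preflight sketch (the helper name is ours) that flags unset variables before you run:

```python
import os

def missing_keys(required=("HIPPOCORTEX_API_KEY", "ANTHROPIC_API_KEY")):
    """Return the names of required environment variables that are unset."""
    return [name for name in required if not os.environ.get(name)]
```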
Step 3: Add one import
Python:
import hippocortex.auto
from anthropic import Anthropic
client = Anthropic()
# Every call now has memory.
TypeScript:
import '@hippocortex/sdk/auto'
import Anthropic from '@anthropic-ai/sdk'
const anthropic = new Anthropic()
// Every call now has memory.
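Why does a single side-effect import suffice? Auto-instrumentation of this kind typically patches the SDK's request method at import time. A runnable sketch of the general mechanism — not Hippocortex's actual internals, and `FakeClient` is a stand-in, not the real Anthropic class:

```python
captured = []  # stands in for Hippocortex's event store

class FakeClient:
    """Stand-in for an SDK client class (not the real Anthropic client)."""
    def create(self, **kwargs):
        return {"echo": kwargs}

# What an auto-instrumentation import typically does behind the scenes:
# wrap the method so every call is recorded, then forwarded unchanged.
_original_create = FakeClient.create

def _instrumented_create(self, **kwargs):
    captured.append(kwargs)                  # record the call for memory
    return _original_create(self, **kwargs)  # forward to the original method

FakeClient.create = _instrumented_create

client = FakeClient()
client.create(model="claude-sonnet-4-20250514")
```

Because patching of this kind happens when the auto module is imported, keep the `hippocortex.auto` import before the Anthropic client is constructed, as in the snippets above.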
What to expect
After a few interactions, your agent starts receiving synthesized context from past sessions. The compilation pipeline runs automatically in the background.