Hippocortex - Quickstart Guide
Get your AI agent learning from experience in under 5 minutes.
1. Get Your API Key
- Go to dashboard.hippocortex.dev
- Sign up (free tier, no credit card required)
- Copy your API key (hx_live_...)
2. Add Memory to Your Agent
Choose the method that fits your workflow. All methods run the full pipeline: capture, synthesize (semantic search, graph retrieval, collective brain, behavioral context), learn, and vault.
Gateway (Recommended) · Reliability: ~99%
Change your base URL. No SDK needed. Works with any OpenAI-compatible provider. If the memory layer is temporarily unavailable, requests fall through to your LLM provider directly.
Python:
from openai import OpenAI
client = OpenAI(
    base_url="https://api.hippocortex.dev/v1",
    api_key="hx_live_...",
    default_headers={
        "X-LLM-API-Key": "sk-...",  # Your provider's API key
    },
)
# Use exactly as before. Memory is automatic.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Deploy the payment service to staging"}]
)
TypeScript:
import OpenAI from 'openai'
const client = new OpenAI({
  baseURL: 'https://api.hippocortex.dev/v1',
  apiKey: 'hx_live_...',
  defaultHeaders: {
    'X-LLM-API-Key': 'sk-...',
  },
})
const response = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Deploy the payment service to staging' }],
})
Works with OpenAI, Anthropic, Google Gemini, Groq, Together, Mistral, Fireworks, Ollama, and any OpenAI-compatible endpoint. See the Gateway guide for all provider examples.
Auto-Instrumentation · Reliability: ~95%
One import. Every OpenAI/Anthropic call gets memory. Zero config.
npm install @hippocortex/sdk@1.2.1 # or: pip install hippocortex==1.2.1
export HIPPOCORTEX_API_KEY=hx_live_...
TypeScript:
import '@hippocortex/sdk/auto'
import OpenAI from 'openai'
const openai = new OpenAI()
// This call now has persistent memory:
// - Past context is synthesized and injected
// - The conversation is captured for future learning
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Deploy the payment service to staging' }]
})
Python:
import hippocortex.auto
from openai import OpenAI
client = OpenAI()
# Every call now has memory, automatically
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Deploy the payment service to staging"}]
)
wrap() · Reliability: ~95%
Explicit, typed control over exactly which clients get memory.
TypeScript:
import { wrap } from '@hippocortex/sdk'
import OpenAI from 'openai'
const openai = wrap(new OpenAI())
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Deploy the payment service to staging' }]
})
// Memory is injected and captured transparently
Python:
from hippocortex import wrap
from openai import OpenAI
client = wrap(OpenAI())
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Deploy the payment service to staging"}]
)
Manual Client · Reliability: ~95%
Use the Hippocortex client directly for custom agent loops.
TypeScript:
import { Hippocortex } from '@hippocortex/sdk';
const hx = new Hippocortex({ apiKey: process.env.HIPPOCORTEX_API_KEY! });
await hx.capture({
  type: 'message',
  sessionId: 'session-001',
  payload: { role: 'user', content: 'Deploy the payment service to staging' }
});
await hx.capture({
  type: 'tool_call',
  sessionId: 'session-001',
  payload: { toolName: 'kubectl', arguments: { action: 'apply', file: 'staging.yaml' } }
});
await hx.capture({
  type: 'tool_result',
  sessionId: 'session-001',
  payload: { toolName: 'kubectl', output: 'deployment.apps/payments created', success: true }
});
Python:
from hippocortex import Hippocortex, CaptureEvent
import os
hx = Hippocortex(api_key=os.environ["HIPPOCORTEX_API_KEY"])
await hx.capture(CaptureEvent(
    type="message",
    session_id="session-001",
    payload={"role": "user", "content": "Deploy the payment service to staging"}
))
await hx.capture(CaptureEvent(
    type="tool_call",
    session_id="session-001",
    payload={"toolName": "kubectl", "arguments": {"action": "apply", "file": "staging.yaml"}}
))
await hx.capture(CaptureEvent(
    type="tool_result",
    session_id="session-001",
    payload={"toolName": "kubectl", "output": "deployment.apps/payments created", "success": True}
))
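With the manual client you also close the loop yourself: synthesize context before calling your model, then capture the reply. Here is a minimal sketch, combining the synthesize() call from step 4 with the standard OpenAI async client; capturing the assistant reply as a "message" event is an assumption that mirrors the user-message capture above.
Python:
from hippocortex import Hippocortex, CaptureEvent
from openai import AsyncOpenAI
import os
hx = Hippocortex(api_key=os.environ["HIPPOCORTEX_API_KEY"])
llm = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment
async def respond(session_id: str, user_message: str) -> str:
    # Record the incoming message
    await hx.capture(CaptureEvent(
        type="message",
        session_id=session_id,
        payload={"role": "user", "content": user_message},
    ))
    # Pull compressed context from memory (see step 4)
    context = await hx.synthesize(user_message, max_tokens=2000)
    knowledge = "\n".join(f"[{e.section}] {e.content}" for e in context.entries)
    # Call the model with that knowledge in the system prompt
    completion = await llm.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": f"Relevant knowledge from past experience:\n{knowledge}"},
            {"role": "user", "content": user_message},
        ],
    )
    reply = completion.choices[0].message.content or ""
    # Capture the reply so future compilation can learn from it (assumed event shape)
    await hx.capture(CaptureEvent(
        type="message",
        session_id=session_id,
        payload={"role": "assistant", "content": reply},
    ))
    return reply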
3. Knowledge Compilation (Automatic)
The compilation pipeline runs automatically after every 10 captured events, with a 5-minute sweep to catch stragglers.
The compiler extracts:
- Task schemas: recurring procedures ("how to deploy")
- Failure playbooks: error patterns and resolutions
- Causal patterns: cause-effect relationships
- Decision policies: preferences and rules
To trigger compilation manually (for testing):
TypeScript:
const result = await hx.learn();
console.log(`Found ${result.stats.patternsFound} patterns`);
console.log(`Created ${result.artifacts.created} artifacts`);
Python:
result = await hx.learn()
print(f"Found {result.stats.patterns_found} patterns")
print(f"Created {result.artifacts.created} artifacts")
4. Synthesize Context
When your agent needs to make a decision, synthesize compressed context from all memory layers:
TypeScript:
const context = await hx.synthesize('deploy payment service to staging', {
  maxTokens: 2000
});
const systemPrompt = `You are a deployment assistant.
Relevant knowledge from past experience:
${context.entries.map(e => `[${e.section}] ${e.content}`).join('\n')}
Respond to the user's request.`;
Python:
context = await hx.synthesize("deploy payment service to staging", max_tokens=2000)
knowledge = "\n".join(
    f"[{e.section}] {e.content}" for e in context.entries
)
system_prompt = f"""You are a deployment assistant.
Relevant knowledge from past experience:
{knowledge}
Respond to the user's request."""
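Then pass the synthesized prompt to your model as usual. For example, with the standard OpenAI client (the async client here, to match the awaited synthesize() call above):
Python:
from openai import AsyncOpenAI
llm = AsyncOpenAI()
response = await llm.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Deploy the payment service to staging"},
    ],
)
print(response.choices[0].message.content)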
5. That's It!
Your agent now has memory that learns from experience. Every interaction gets captured, patterns get compiled, and context gets synthesized for future decisions.
What happens over time:
- Session 1: Agent deploys the service. Steps are captured.
- Session 2: Agent deploys again. Variations are captured; learn() finds the "deploy" pattern.
- Session 3: Agent is asked to deploy. synthesize() returns:
  "Procedure: 1) Check CI 2) Run migrations 3) Deploy 4) Smoke test"
  "Known issue: skipping migrations leads to 500 errors"
The more experience you capture, the better the synthesized context becomes.
Examples
Working code for common frameworks:
- OpenAI Agents — TypeScript deployment assistant with tool calls
- LangGraph — Python customer support agent with state graph
- CrewAI — Python code review crew with role-based agents
- Agent Demo — Internal architecture demo (full memory lifecycle)
Next Steps
- Gateway Guide — Recommended zero-code integration
- API Reference — Full endpoint documentation
- Integration Guide — Framework-specific patterns
- Examples — Working code for OpenAI, LangGraph, CrewAI