SDK Reference
Complete reference for the Hippocortex SDK. Available for TypeScript/JavaScript and Python.
Installation
TypeScript / JavaScript
```shell
npm install @hippocortex/sdk
# or
yarn add @hippocortex/sdk
# or
pnpm add @hippocortex/sdk
```

Python
```shell
pip install hippocortex
# or
poetry add hippocortex
```

Configuration
The SDK client is configured through the HippocortexConfig object (TypeScript) or constructor arguments (Python).
HippocortexConfig
- apiKey (string, required): Your API key. Starts with hx_live_ (production) or hx_test_ (development). Get one at dashboard.hippocortex.dev.
- baseUrl (string): API base URL. Defaults to https://api.hippocortex.dev/v1. Override for self-hosted deployments.
- timeoutMs (number): Request timeout in milliseconds. Defaults to 30000 (30 seconds).

```typescript
import { Hippocortex } from '@hippocortex/sdk';

const hx = new Hippocortex({
  apiKey: 'hx_live_abc123...',
  baseUrl: 'https://api.hippocortex.dev/v1', // optional
  timeoutMs: 30000, // optional
});
```

```python
from hippocortex import Hippocortex

hx = Hippocortex(
    api_key="hx_live_abc123...",
    base_url="https://api.hippocortex.dev/v1",  # optional
    timeout=30.0,  # optional, seconds
)
```

Hippocortex Class
The main client class. All methods are async. In Python, a synchronous wrapper is available via SyncHippocortex.
capture(event)
Capture a single agent event into Hippocortex memory. Events are queued asynchronously for processing. The API returns a 202 immediately after accepting the event.
If an idempotencyKey is provided in the event metadata, duplicate submissions with the same key will be detected and skipped.
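For example, a caller can attach a stable, caller-chosen key so that retried submissions of the same logical event are deduplicated. A minimal sketch (only the metadata.idempotencyKey field comes from the description above; the helper and key format are illustrative):

```typescript
// Sketch: attach an idempotency key so retried submissions of the same
// logical event are detected and skipped by the API.
interface CaptureEventLike {
  type: string;
  sessionId: string;
  payload: Record<string, unknown>;
  metadata?: Record<string, unknown>;
}

function withIdempotencyKey(event: CaptureEventLike, key: string): CaptureEventLike {
  return { ...event, metadata: { ...event.metadata, idempotencyKey: key } };
}

const event = withIdempotencyKey(
  { type: 'message', sessionId: 'sess-42', payload: { role: 'user', content: 'Deploy to staging' } },
  'sess-42:msg-001', // any stable, caller-chosen key
);
// => metadata: { idempotencyKey: 'sess-42:msg-001' }
```

The resulting object can be passed to hx.capture() as usual; a retry with the same key yields status 'duplicate'.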
CaptureEvent
- type (CaptureEventType, required): The event type. One of: message, tool_call, tool_result, file_edit, test_run, command_exec, browser_action, api_result.
- sessionId (string, required): Session identifier grouping related events. Use a consistent ID for events that belong to the same agent conversation or task.
- payload (Record<string, unknown>, required): Event-specific data. Structure depends on the event type (see event types below).
- metadata (Record<string, unknown>): Optional metadata such as agentId, source, environment, or custom fields.

Event Types
- message: User or assistant messages. Payload: { role, content }
- tool_call: Agent tool invocations. Payload: { tool_name, input }
- tool_result: Tool execution results. Payload: { tool_name, output }
- file_edit: File modifications. Payload: { path, diff, action }
- test_run: Test execution results. Payload: { suite, passed, failed }
- command_exec: Shell command executions. Payload: { command, exitCode, output }
- browser_action: Browser automation actions. Payload: { url, action, selector }
- api_result: External API call results. Payload: { url, method, status, body }

CaptureResult
- eventId (string): Unique identifier for the captured event.
- status ('ingested' | 'duplicate'): Whether the event was newly ingested or detected as a duplicate.
- salienceScore (number): Optional relevance score (0 to 1) assigned during ingestion.
- traceId (string): Optional trace identifier for request tracking.
- reason (string): Optional explanation when the event is flagged or deduplicated.

```typescript
const result = await hx.capture({
  type: 'message',
  sessionId: 'sess-42',
  payload: { role: 'user', content: 'Deploy to staging' },
  metadata: { agentId: 'deploy-bot' },
});
// => { eventId: 'evt_...', status: 'ingested' }
```

captureBatch(events)
Capture multiple events in a single HTTP request. Accepts an array of CaptureEvent objects. The maximum batch size is 1000 events. Events are validated individually; valid events are queued even if others in the batch fail validation.
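Because a batch is capped at 1000 events, larger arrays must be split across multiple calls. A minimal chunking sketch (the helper name is illustrative; the 1000 cap comes from the limit above):

```typescript
// Split an event array into batches no larger than the API's 1000-event cap.
function chunkEvents<T>(events: T[], size = 1000): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < events.length; i += size) {
    batches.push(events.slice(i, i + size));
  }
  return batches;
}

// e.g. 2500 events -> batches of 1000, 1000, 500
const batches = chunkEvents(Array.from({ length: 2500 }, (_, i) => ({ id: i })));
// batches.map(b => b.length) => [1000, 1000, 500]
```

Each chunk can then be passed to hx.captureBatch(chunk), sequentially or with bounded concurrency.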
BatchCaptureResult
- results (CaptureResult[]): Individual result for each successfully queued event.
- summary.total (number): Total number of events in the batch.
- summary.ingested (number): Number of events successfully ingested.
- summary.duplicates (number): Number of duplicate events detected.
- summary.errors (number): Number of events that failed validation.

```typescript
const result = await hx.captureBatch([
  { type: 'message', sessionId: 's1', payload: { role: 'user', content: 'Hello' } },
  { type: 'tool_call', sessionId: 's1', payload: { tool_name: 'search', input: 'query' } },
  { type: 'tool_result', sessionId: 's1', payload: { tool_name: 'search', output: 'results...' } },
]);
// => { results: [...], summary: { total: 3, ingested: 3, duplicates: 0, errors: 0 } }
```

learn(options?)
Trigger the Memory Compiler to process accumulated events into semantic memories and compile them into reusable knowledge artifacts. Returns a 202 with a job reference; compilation runs asynchronously.
LearnOptions
- scope ('full' | 'incremental'): Compilation scope. 'incremental' processes only events since the last run. 'full' recompiles everything from scratch. Defaults to 'incremental'.
- minPatternStrength (number): Minimum pattern strength threshold (0 to 1). Patterns below this confidence are discarded. Useful for filtering noise.
- artifactTypes (ArtifactType[]): Which artifact types to extract. Options: task_schema, failure_playbook, causal_pattern, decision_policy. Defaults to all types.

Artifact Types
- task_schema: Learned procedures and step sequences for recurring tasks.
- failure_playbook: Error patterns and recovery strategies compiled from past failures.
- causal_pattern: Cause-and-effect relationships identified across sessions.
- decision_policy: Decision criteria and preferences extracted from agent behavior.

LearnResult
- runId (string): Unique identifier for this compilation run.
- status ('completed' | 'partial' | 'failed'): Outcome of the compilation job.
- artifacts.created (number): Number of new artifacts created.
- artifacts.updated (number): Number of existing artifacts updated with new evidence.
- artifacts.unchanged (number): Number of artifacts that remained unchanged.
- artifacts.byType (Record<string, number>): Artifact counts grouped by type.
- stats.memoriesProcessed (number): Total memories processed during compilation.
- stats.patternsFound (number): Number of patterns identified.
- stats.compilationMs (number): Total compilation time in milliseconds.

```typescript
// Incremental compilation (default)
const result = await hx.learn();

// Full recompilation with filters
const fullResult = await hx.learn({
  scope: 'full',
  minPatternStrength: 0.7,
  artifactTypes: ['task_schema', 'failure_playbook'],
});
```

synthesize(query, options?)
Synthesize compressed context from all memory layers for a given query. Returns relevant memories and artifacts packed within a token budget. This is the primary retrieval method for enriching agent prompts with experiential knowledge.
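Conceptually, packing works like the greedy sketch below: sort candidate entries by relevance and include each one that still fits in the token budget. This is illustrative only; the service's actual ranking and token counting happen server-side.

```typescript
interface Candidate { content: string; relevance: number; tokens: number; }

// Greedy budget packing: highest-relevance entries first, skip what doesn't fit.
function packEntries(candidates: Candidate[], maxTokens: number): Candidate[] {
  const sorted = [...candidates].sort((a, b) => b.relevance - a.relevance);
  const packed: Candidate[] = [];
  let used = 0;
  for (const c of sorted) {
    if (used + c.tokens <= maxTokens) {
      packed.push(c);
      used += c.tokens;
    }
  }
  return packed;
}

const packed = packEntries(
  [
    { content: 'deploy steps', relevance: 0.9, tokens: 1200 },
    { content: 'old incident', relevance: 0.4, tokens: 3000 },
    { content: 'rollback playbook', relevance: 0.7, tokens: 2500 },
  ],
  4000,
);
// => 'deploy steps' and 'rollback playbook' fit (3700 tokens); 'old incident' is dropped
```

Entries dropped this way are reported in budget.entriesDropped on the result.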
SynthesizeOptions
- maxTokens (number): Token budget for the output. Defaults to 4000. Entries are prioritized by relevance and packed until the budget is exhausted.
- sections (ReasoningSection[]): Which reasoning sections to include. Options: procedures, failures, decisions, facts, causal, context. Defaults to all.
- minConfidence (number): Minimum confidence threshold (0 to 1). Entries below this score are excluded. Defaults to 0.3.
- includeProvenance (boolean): Whether to attach source references to each entry. Defaults to true.

SynthesizeResult
- packId (string): Unique identifier for this context pack.
- entries (SynthesisEntry[]): Array of context entries, sorted by relevance.
- budget.limit (number): Token budget that was requested.
- budget.used (number): Tokens actually consumed.
- budget.compressionRatio (number): Ratio of original content to compressed output.
- budget.entriesIncluded (number): Number of entries included in the pack.
- budget.entriesDropped (number): Number of entries dropped due to budget constraints.

SynthesisEntry
- section (ReasoningSection): Which reasoning section this entry belongs to.
- content (string): The synthesized content text.
- confidence (number): Confidence score from 0.0 to 1.0.
- provenance (ProvenanceRef[]): Optional array of source references for traceability.

ProvenanceRef
- sourceType (string): Type of the source (e.g. 'memory', 'artifact').
- sourceId (string): ID of the source entity.
- artifactType (string): If the source is an artifact, which type it is.
- evidenceCount (number): Number of evidence items supporting this reference.

```typescript
const ctx = await hx.synthesize('deploy payment service', {
  maxTokens: 4000,
  sections: ['procedures', 'failures'],
  minConfidence: 0.5,
  includeProvenance: true,
});

for (const entry of ctx.entries) {
  console.log(`[${entry.section}] (confidence: ${entry.confidence})`);
  console.log(entry.content);
}
```

listArtifacts(options?)
List compiled knowledge artifacts with filtering, sorting, and cursor-based pagination.
ArtifactListOptions
- type (ArtifactType): Filter by artifact type: task_schema, failure_playbook, causal_pattern, or decision_policy.
- status (ArtifactStatus): Filter by status: active, deprecated, or superseded.
- sort (ArtifactSortField): Sort field: createdAt, updatedAt, confidence, or evidenceCount.
- order ('asc' | 'desc'): Sort order. Defaults to descending.
- limit (number): Maximum number of artifacts to return (max 100). Defaults to 50.
- cursor (string): Pagination cursor from a previous response for fetching the next page.

ArtifactListResult
- artifacts (Artifact[]): Array of artifact objects.
- pagination.hasMore (boolean): Whether more results are available.
- pagination.cursor (string): Cursor to pass for the next page of results.
- pagination.total (number): Total number of matching artifacts.

```typescript
// List all active failure playbooks
const result = await hx.listArtifacts({
  type: 'failure_playbook',
  status: 'active',
  sort: 'confidence',
  order: 'desc',
  limit: 10,
});

// Paginate through results
let cursor: string | undefined;
do {
  const page = await hx.listArtifacts({ cursor });
  for (const artifact of page.artifacts) {
    console.log(artifact.title);
  }
  cursor = page.pagination.cursor;
} while (cursor);
```

getArtifact(id)
Retrieve a single compiled artifact by its ID. Returns the full artifact with content, metadata, and source references.
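When consuming fetched artifacts, a common pattern is to keep only active, sufficiently confident ones before injecting them into a prompt. A small illustrative filter over the status and confidence fields documented below (the helper and the 0.6 threshold are arbitrary choices, not part of the SDK):

```typescript
interface ArtifactLike { id: string; status: string; confidence: number; }

// Keep only active artifacts above a confidence threshold.
function usableArtifacts<A extends ArtifactLike>(artifacts: A[], minConfidence = 0.6): A[] {
  return artifacts.filter(a => a.status === 'active' && a.confidence >= minConfidence);
}

const usable = usableArtifacts([
  { id: 'a1', status: 'active', confidence: 0.8 },
  { id: 'a2', status: 'deprecated', confidence: 0.9 },
  { id: 'a3', status: 'active', confidence: 0.4 },
]);
// => only 'a1' survives
```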
Artifact
- id (string): Unique artifact identifier.
- type (ArtifactType): Artifact type: task_schema, failure_playbook, causal_pattern, or decision_policy.
- status (ArtifactStatus): Current status: active, deprecated, or superseded.
- title (string): Human-readable title describing the artifact.
- content (Record<string, unknown>): The compiled knowledge content. Structure varies by artifact type.
- confidence (number): Overall confidence score (0 to 1) based on evidence strength.
- evidenceCount (number): Number of source events supporting this artifact.
- createdAt (string): ISO 8601 creation timestamp.
- updatedAt (string): ISO 8601 last-updated timestamp.

getMetrics(options?)
Retrieve usage and performance metrics for your account. Useful for monitoring event volumes, compilation activity, synthesis usage, and quota consumption.
MetricsOptions
- period ('1h' | '24h' | '7d' | '30d'): Time period to aggregate metrics over.
- granularity ('minute' | 'hour' | 'day'): Granularity of the time series data.

MetricsResult
- period.start (string): ISO 8601 start of the metrics period.
- period.end (string): ISO 8601 end of the metrics period.
- usage.events (object): Event ingestion stats: total, ingested, duplicates, errors, byType.
- usage.compilations (object): Compilation stats: total, artifactsCreated, artifactsUpdated.
- usage.syntheses (object): Synthesis stats: total, avgTokensUsed, avgCompressionRatio.
- quota.plan (string): Current plan name.
- quota.eventsLimit (number): Maximum events allowed in the billing period.
- quota.eventsUsed (number): Events consumed so far.
- quota.eventsRemaining (number): Events remaining in the current period.
- quota.resetDate (string): ISO 8601 date when the quota resets.

```typescript
const metrics = await hx.getMetrics({ period: '24h' });
console.log(`Events today: ${metrics.usage.events.total}`);
console.log(`Quota remaining: ${metrics.quota.eventsRemaining}`);
```

Error Handling
All API errors throw a HippocortexError (TypeScript) or raise a HippocortexError exception (Python).
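For retryable codes such as rate_limited and internal_error, an exponential backoff schedule can be computed like this (a sketch; the base delay and cap are illustrative, and a Retry-After header, when present, should take precedence):

```typescript
// Exponential backoff: baseMs * 2^attempt, capped at maxMs.
function backoffDelayMs(attempt: number, baseMs = 500, maxMs = 30_000): number {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

const delays = [0, 1, 2, 3, 4].map(a => backoffDelayMs(a));
// => [500, 1000, 2000, 4000, 8000]
```

A retry loop would sleep for backoffDelayMs(attempt) between attempts and give up after a fixed number of tries.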
HippocortexError
- code (string): Machine-readable error code (e.g. validation_error, unauthorized, rate_limited, not_found).
- message (string): Human-readable error message.
- statusCode (number): HTTP status code of the response.
- details (unknown[]): Optional array of additional error details, e.g. field-level validation errors.

```typescript
import { Hippocortex, HippocortexError } from '@hippocortex/sdk';

try {
  await hx.capture(event);
} catch (err) {
  if (err instanceof HippocortexError) {
    console.error(`API error: ${err.code} (HTTP ${err.statusCode})`);
    console.error(err.message);
    if (err.code === 'rate_limited') {
      // Back off and retry
    }
    if (err.code === 'validation_error') {
      console.error('Details:', err.details);
    }
  }
}
```

Common Error Codes
- validation_error (422): Invalid request body or missing required fields.
- unauthorized (401): Invalid or missing API key.
- rate_limited (429): Too many requests. Check the Retry-After header.
- not_found (404): Requested resource does not exist.
- conflict (409): Resource already exists (e.g. duplicate account).
- internal_error (500): Server-side error. Retry with exponential backoff.

Framework Adapters
The SDK includes pre-built adapters that simplify integration with agent frameworks. Adapters handle event capture and context injection automatically with a fire-and-forget design: they never block the agent, and all errors are caught and logged as warnings rather than rethrown.
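The fire-and-forget behavior can be sketched as a wrapper that never rethrows and only logs (illustrative; the adapters' internals may differ):

```typescript
// Run a capture task without blocking the caller; failures become warnings.
function fireAndForget(
  task: () => Promise<unknown>,
  warn: (msg: string) => void = console.warn,
): void {
  void task().catch(err => warn(`hippocortex: capture failed: ${err}`));
}
```

The agent loop continues immediately after calling fireAndForget; a failed capture surfaces only as a warning.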
OpenClaw Adapter (TypeScript)
The OpenClaw adapter provides a message-oriented API with automatic memory injection.
```typescript
import { autoMemory } from '@hippocortex/sdk/adapters';

const memory = autoMemory({
  apiKey: process.env.HIPPOCORTEX_API_KEY!,
  injectMemory: true,     // auto-synthesize context on user messages
  captureMessages: true,  // capture message events
  captureTools: true,     // capture tool call/result events
  sessionId: 'custom-id', // optional custom session ID
  timeoutMs: 10000,       // adapter uses a lower timeout (10s default)
});

// Check if the adapter has a valid API key
if (memory.enabled) {
  console.log(`Session: ${memory.sessionId}`);
}

// Process user message: captures + returns context
const context = await memory.onMessage('Deploy to staging', 'user');

// Process assistant response: captures it
await memory.onResponse('Deploying to staging now...');

// Capture tool interactions
await memory.onToolCall('kubectl', { command: 'apply -f deploy.yaml' });
await memory.onToolResult('kubectl', 'deployment.apps/payments created');

// Inject context into a messages array (prepends system message)
const enrichedMessages = await memory.injectIntoMessages(messages, query);
```

Python Adapters
Python adapters are available for popular agent frameworks:
- LangGraph: Integration with LangGraph agent workflows.
- CrewAI: Memory adapter for CrewAI multi-agent systems.
- AutoGen: Memory integration for Microsoft AutoGen agents.

```python
from hippocortex.adapters import LangGraphMemory

memory = LangGraphMemory(api_key="hx_live_...")
# Use as a LangGraph tool or callback
# See adapter documentation for framework-specific integration
```

Python SDK
The Python SDK mirrors the TypeScript SDK with Pythonic naming conventions. It is async-first with a synchronous wrapper available.
```python
from hippocortex import Hippocortex, CaptureEvent

async with Hippocortex(api_key="hx_live_...") as hx:
    # Capture an event
    result = await hx.capture(CaptureEvent(
        type="message",
        session_id="sess-1",
        payload={"role": "user", "content": "Hello"},
    ))

    # Trigger learning
    learn_result = await hx.learn()

    # Synthesize context
    ctx = await hx.synthesize("deployment procedures")

    # List artifacts
    artifacts = await hx.list_artifacts(type="task_schema")

    # Get metrics
    metrics = await hx.get_metrics(period="24h")
```

```python
from hippocortex import SyncHippocortex, CaptureEvent

with SyncHippocortex(api_key="hx_live_...") as hx:
    result = hx.capture(CaptureEvent(
        type="message",
        session_id="sess-1",
        payload={"role": "user", "content": "Hello"},
    ))
    ctx = hx.synthesize("deployment procedures")
```