TypeScript SDK Reference

Complete API reference for @hippocortex/sdk.


Installation

npm install @hippocortex/sdk
# or
pnpm add @hippocortex/sdk
# or
yarn add @hippocortex/sdk

Requires Node.js 18+ (uses native fetch).


Integration Methods

The SDK provides three ways to add memory, from simplest to most granular:

Auto-Instrumentation (Easiest, 1 Line)

Import once at your app entry point. Every OpenAI and Anthropic SDK call then automatically gets memory context injection and conversation capture, via Sentry-style monkey-patching.

import '@hippocortex/sdk/auto'

// That's it. All OpenAI/Anthropic calls now have memory.
import OpenAI from 'openai'
const openai = new OpenAI()

const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Deploy payments to staging' }]
})
// Memory context was injected, conversation was captured

How it works: On import, the module patches Completions.prototype.create (OpenAI) and Messages.prototype.create (Anthropic). Each call synthesizes relevant context, prepends it as a system message, calls the original method, then captures the conversation.
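
The patching mechanism can be illustrated with a simplified sketch. This is not the SDK's actual code: a fake Completions class stands in for the vendor SDK, and the system-message content is a placeholder.

```typescript
type Message = { role: string; content: string };

// Fake stand-in for the vendor SDK's Completions class.
class Completions {
  async create(params: { messages: Message[] }): Promise<string> {
    return `answered ${params.messages.length} messages`;
  }
}

// Patch the prototype: synthesize context, prepend it as a system message,
// call the original method, then (in the real SDK) capture the conversation.
const original = Completions.prototype.create;
Completions.prototype.create = async function (
  this: Completions,
  params: { messages: Message[] }
): Promise<string> {
  const context: Message = { role: 'system', content: '[memory context]' };
  const patched = { ...params, messages: [context, ...params.messages] };
  return original.call(this, patched);
};
```

After patching, every call through the class sees the extra system message without any change at the call site.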

Configuration: Resolves credentials from environment variables (HIPPOCORTEX_API_KEY, HIPPOCORTEX_BASE_URL) or a .hippocortex.json file. Set HIPPOCORTEX_SILENT=1 to suppress console output.

Streaming support: Auto-instrumentation wraps streaming responses to collect chunks transparently. The stream passes through unchanged to your code.
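
The chunk-collection behavior can be sketched as a tee over an async iterable. This is illustrative only; the actual wrapper is internal to the SDK.

```typescript
// Pass a stream through unchanged while collecting chunks on the side,
// roughly what the auto-instrumentation does for streaming responses.
async function* tee<T>(stream: AsyncIterable<T>, collected: T[]): AsyncGenerator<T> {
  for await (const chunk of stream) {
    collected.push(chunk); // record for later capture
    yield chunk;           // pass through to the caller unchanged
  }
  // (real SDK: submit the collected chunks to capture here)
}
```

The consumer iterates the teed stream exactly as it would the original.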

wrap() (Recommended)

Transparently wrap an OpenAI or Anthropic client instance for explicit, typed, per-client control.

import { wrap } from '@hippocortex/sdk'
import OpenAI from 'openai'

const openai = wrap(new OpenAI())

// Use exactly as before. Memory is transparent.
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Deploy payments to staging' }]
})

Options:

wrap(client, {
  apiKey?: string,     // Falls back to env var or .hippocortex.json
  baseUrl?: string,    // Hippocortex API URL
  sessionId?: string,  // Explicit session ID (auto-generated if omitted)
})

Works with both SDKs:

import Anthropic from '@anthropic-ai/sdk'
const anthropic = wrap(new Anthropic())

const response = await anthropic.messages.create({
  model: 'claude-sonnet-4-20250514',
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'Deploy payments to staging' }]
})

Fault tolerance: If Hippocortex is unreachable, all calls pass through to the original client unchanged. Your application never breaks because of memory.
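
The degradation behavior can be sketched as follows (a simplified illustration of the guarantee, not the SDK's internal code; the function names are hypothetical):

```typescript
// If context synthesis fails (e.g. Hippocortex is unreachable), the LLM
// call proceeds without memory rather than failing.
async function callWithMemory<T>(
  synthesizeContext: () => Promise<string>,
  call: (context?: string) => Promise<T>,
): Promise<T> {
  let context: string | undefined;
  try {
    context = await synthesizeContext(); // may throw if the service is down
  } catch {
    context = undefined;                 // degrade gracefully: no memory
  }
  return call(context);
}
```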

Manual Client (Advanced)

Full control over capture, learn, and synthesize. Best for custom agent loops or non-OpenAI/Anthropic LLMs.

import { Hippocortex } from '@hippocortex/sdk';

const hx = new Hippocortex({
  apiKey: 'hx_live_your_key_here',
  baseUrl: 'https://api.hippocortex.dev/v1', // optional
  timeoutMs: 30000,                           // optional
});

Configuration

interface HippocortexConfig {
  /** API key (hx_live_... or hx_test_...) */
  apiKey: string;
  /** Base URL (default: https://api.hippocortex.dev/v1) */
  baseUrl?: string;
  /** Request timeout in milliseconds (default: 30000) */
  timeoutMs?: number;
}

Zero-Config

Both auto and wrap() resolve configuration automatically using the following priority:

  1. Explicit options passed to wrap() or new Hippocortex()
  2. Environment variables: HIPPOCORTEX_API_KEY, HIPPOCORTEX_BASE_URL
  3. .hippocortex.json file (searched from cwd upward to filesystem root)
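
The priority order above amounts to a simple merge; a sketch (mirroring the documented resolution order, not the SDK's internal code):

```typescript
interface ResolvedConfig { apiKey?: string; baseUrl?: string }

// Explicit options win, then environment variables,
// then values read from .hippocortex.json.
function resolveConfig(
  explicit: ResolvedConfig,
  env: Record<string, string | undefined>,
  fileConfig: ResolvedConfig,
): ResolvedConfig {
  return {
    apiKey: explicit.apiKey ?? env.HIPPOCORTEX_API_KEY ?? fileConfig.apiKey,
    baseUrl: explicit.baseUrl ?? env.HIPPOCORTEX_BASE_URL ?? fileConfig.baseUrl,
  };
}
```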

.hippocortex.json

{
  "apiKey": "hx_live_your_key_here",
  "baseUrl": "https://api.hippocortex.dev/v1"
}

Create this file manually in your project root, and add .hippocortex.json to your .gitignore since it contains your API key.


Core Methods

capture(event)

Capture a single agent event into Hippocortex memory.

Signature:

async capture(event: CaptureEvent): Promise<CaptureResult>

Parameters:

Field      Type                     Required  Description
type       CaptureEventType         Yes       Event type (see below)
sessionId  string                   Yes       Session identifier
payload    Record<string, unknown>  Yes       Event data
metadata   Record<string, unknown>  No        Additional context

Event Types: "message" | "tool_call" | "tool_result" | "file_edit" | "test_run" | "command_exec" | "browser_action" | "api_result"

Returns:

interface CaptureResult {
  eventId: string;            // Unique event ID
  status: "ingested" | "duplicate";
  salienceScore?: number;     // 0.0 to 1.0
  traceId?: string;           // Processing trace ID
  reason?: string;            // Duplicate reason (if duplicate)
}

Example:

const result = await hx.capture({
  type: 'tool_call',
  sessionId: 'sess-42',
  payload: {
    tool: 'deploy',
    args: { service: 'api', environment: 'staging' }
  },
  metadata: {
    agentId: 'agent-1',
    correlationId: 'corr-123'
  }
});

console.log(`Event ${result.eventId}: ${result.status}`);
console.log(`Salience: ${result.salienceScore}`);

captureBatch(events)

Capture multiple events in a single request (up to 100 events).

Signature:

async captureBatch(events: CaptureEvent[]): Promise<BatchCaptureResult>

Returns:

interface BatchCaptureResult {
  results: CaptureResult[];
  summary: {
    total: number;
    ingested: number;
    duplicates: number;
    errors: number;
  };
}

Example:

const result = await hx.captureBatch([
  {
    type: 'message',
    sessionId: 'sess-42',
    payload: { role: 'user', content: 'Deploy to staging' }
  },
  {
    type: 'tool_call',
    sessionId: 'sess-42',
    payload: { tool: 'deploy', args: { env: 'staging' } }
  },
  {
    type: 'tool_result',
    sessionId: 'sess-42',
    payload: { tool: 'deploy', result: { success: true } }
  }
]);

console.log(`Ingested: ${result.summary.ingested}/${result.summary.total}`);

learn(options?)

Trigger the Memory Compiler to learn from accumulated experience.

Signature:

async learn(options?: LearnOptions): Promise<LearnResult>

Parameters:

interface LearnOptions {
  /** "full" recompilation or "incremental" delta (default: "incremental") */
  scope?: "full" | "incremental";
  /** Minimum pattern strength threshold (0-1) */
  minPatternStrength?: number;
  /** Which artifact types to extract */
  artifactTypes?: ArtifactType[];
}

type ArtifactType = "task_schema" | "failure_playbook" | "causal_pattern" | "decision_policy";

Returns:

interface LearnResult {
  runId: string;
  status: "completed" | "partial" | "failed";
  artifacts: {
    created: number;
    updated: number;
    unchanged: number;
    byType: Record<string, number>;
  };
  stats: {
    memoriesProcessed: number;
    patternsFound: number;
    compilationMs: number;
  };
}

Example:

// Incremental compilation (default)
const result = await hx.learn();
console.log(`Created ${result.artifacts.created} new artifacts`);
console.log(`Processed ${result.stats.memoriesProcessed} memories in ${result.stats.compilationMs}ms`);

// Full recompilation with filters
const fullResult = await hx.learn({
  scope: 'full',
  minPatternStrength: 0.7,
  artifactTypes: ['task_schema', 'failure_playbook']
});

synthesize(query, options?)

Synthesize compressed context from all memory layers for a query.

Signature:

async synthesize(query: string, options?: SynthesizeOptions): Promise<SynthesizeResult>

Parameters:

interface SynthesizeOptions {
  /** Token budget for output (default: 4000) */
  maxTokens?: number;
  /** Which reasoning sections to include */
  sections?: ReasoningSection[];
  /** Minimum confidence threshold (default: 0.3) */
  minConfidence?: number;
  /** Attach source references (default: true) */
  includeProvenance?: boolean;
}

type ReasoningSection = "procedures" | "failures" | "decisions" | "facts" | "causal" | "context";

Returns:

interface SynthesizeResult {
  packId: string;
  entries: SynthesisEntry[];
  budget: {
    limit: number;
    used: number;
    compressionRatio: number;
    entriesIncluded: number;
    entriesDropped: number;
  };
}

interface SynthesisEntry {
  section: ReasoningSection;
  content: string;
  confidence: number;
  provenance?: ProvenanceRef[];
}

interface ProvenanceRef {
  sourceType: string;
  sourceId: string;
  artifactType?: string;
  evidenceCount?: number;
}

Example:

const context = await hx.synthesize('deploy payment service to production', {
  maxTokens: 8000,
  sections: ['procedures', 'failures', 'decisions'],
  minConfidence: 0.5,
  includeProvenance: true
});

console.log(`Pack ${context.packId}: ${context.entries.length} entries`);
console.log(`Token budget: ${context.budget.used}/${context.budget.limit}`);

// Use entries in your LLM prompt
for (const entry of context.entries) {
  console.log(`[${entry.section}] (${entry.confidence.toFixed(2)}): ${entry.content}`);
}
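
A common pattern is to flatten the entries into a single system prompt. A sketch, using a locally defined entry shape; the exact formatting here is an assumption, not a prescribed layout:

```typescript
interface Entry { section: string; content: string; confidence: number }

// Group synthesis entries by section and render a system-prompt block.
function entriesToSystemPrompt(entries: Entry[]): string {
  const bySection = new Map<string, string[]>();
  for (const e of entries) {
    const bucket = bySection.get(e.section) ?? [];
    bucket.push(`- ${e.content} (confidence ${e.confidence.toFixed(2)})`);
    bySection.set(e.section, bucket);
  }
  return [...bySection.entries()]
    .map(([section, lines]) => `## ${section}\n${lines.join('\n')}`)
    .join('\n\n');
}
```

The resulting string can be passed as a system message ahead of the user's request.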

listArtifacts(options?)

List compiled knowledge artifacts with filtering and pagination.

Signature:

async listArtifacts(options?: ArtifactListOptions): Promise<ArtifactListResult>

Parameters:

interface ArtifactListOptions {
  type?: ArtifactType;
  status?: ArtifactStatus;          // "active" | "deprecated" | "superseded"
  sort?: ArtifactSortField;         // "createdAt" | "updatedAt" | "confidence" | "evidenceCount"
  order?: "asc" | "desc";
  limit?: number;                   // max results (default: 20)
  cursor?: string;                  // pagination cursor
}

Returns:

interface ArtifactListResult {
  artifacts: Artifact[];
  pagination: {
    hasMore: boolean;
    cursor?: string;
    total: number;
  };
}

interface Artifact {
  id: string;
  type: ArtifactType;
  status: ArtifactStatus;
  title: string;
  content: Record<string, unknown>;
  confidence: number;
  evidenceCount: number;
  createdAt: string;
  updatedAt: string;
}

Example:

const result = await hx.listArtifacts({
  type: 'failure_playbook',
  status: 'active',
  sort: 'confidence',
  order: 'desc',
  limit: 10
});

for (const artifact of result.artifacts) {
  console.log(`${artifact.title} (${artifact.confidence.toFixed(2)})`);
}

// Paginate
if (result.pagination.hasMore) {
  const nextPage = await hx.listArtifacts({
    cursor: result.pagination.cursor
  });
}
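
To exhaust all pages, loop on the cursor until hasMore is false. A sketch written against the documented ArtifactListResult shape; it takes the list call as a parameter so it works with any fixed filter options:

```typescript
interface Page<T> { artifacts: T[]; pagination: { hasMore: boolean; cursor?: string } }

// Follow the pagination cursor to completion, accumulating every artifact.
async function listAllArtifacts<T>(
  list: (opts: { cursor?: string }) => Promise<Page<T>>,
): Promise<T[]> {
  const all: T[] = [];
  let cursor: string | undefined;
  do {
    const page = await list({ cursor });
    all.push(...page.artifacts);
    cursor = page.pagination.hasMore ? page.pagination.cursor : undefined;
  } while (cursor);
  return all;
}
```

Usage: `const all = await listAllArtifacts((opts) => hx.listArtifacts({ type: 'failure_playbook', ...opts }));`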

getArtifact(id)

Get a single compiled artifact by ID.

Signature:

async getArtifact(id: string): Promise<Artifact>

Example:

const artifact = await hx.getArtifact('art-001');
console.log(artifact.title);
console.log(JSON.stringify(artifact.content, null, 2));

getMetrics(options?)

Get usage and performance metrics.

Signature:

async getMetrics(options?: MetricsOptions): Promise<MetricsResult>

Parameters:

interface MetricsOptions {
  period?: "1h" | "24h" | "7d" | "30d";
  granularity?: "minute" | "hour" | "day";
}

Returns:

interface MetricsResult {
  period: {
    start: string;
    end: string;
    granularity: string;
  };
  usage: {
    events: {
      total: number;
      ingested: number;
      duplicates: number;
      errors: number;
      byType: Record<string, number>;
    };
    compilations: {
      total: number;
      artifactsCreated: number;
      artifactsUpdated: number;
    };
    syntheses: {
      total: number;
      avgTokensUsed: number;
      avgCompressionRatio: number;
    };
  };
  quota: {
    plan: string;
    eventsLimit: number;
    eventsUsed: number;
    eventsRemaining: number;
    resetDate: string;
  };
}

Example:

const metrics = await hx.getMetrics({ period: '24h' });
console.log(`Events today: ${metrics.usage.events.total}`);
console.log(`Quota: ${metrics.quota.eventsUsed}/${metrics.quota.eventsLimit}`);

Error Handling

import { Hippocortex, HippocortexError } from '@hippocortex/sdk';

// Small helper used below for backoff
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

try {
  await hx.capture(event);
} catch (error) {
  if (error instanceof HippocortexError) {
    switch (error.code) {
      case 'rate_limited':
        // Back off and retry
        await sleep(1000);
        break;
      case 'validation_error':
        // Fix the request
        console.error('Validation:', error.details);
        break;
      case 'unauthorized':
        // Check API key
        break;
      default:
        throw error;
    }
  }
}
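
For rate limits specifically, a retry helper with exponential backoff is a common pattern. A sketch, not part of the SDK; the error check is abstracted as a predicate (e.g. a HippocortexError with code 'rate_limited'):

```typescript
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

// Retry an operation on rate-limit errors with exponential backoff.
// Other errors propagate immediately.
async function withRetry<T>(
  op: () => Promise<T>,
  isRateLimited: (err: unknown) => boolean,
  maxAttempts = 3,
  baseDelayMs = 1000,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await op();
    } catch (err) {
      if (!isRateLimited(err) || attempt >= maxAttempts) throw err;
      await sleep(baseDelayMs * 2 ** (attempt - 1)); // 1s, 2s, 4s, ...
    }
  }
}
```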

TypeScript Types Reference

All types are exported from the package:

import type {
  HippocortexConfig,
  CaptureEvent,
  CaptureEventType,
  CaptureResult,
  BatchCaptureResult,
  LearnOptions,
  LearnResult,
  ArtifactType,
  SynthesizeOptions,
  SynthesizeResult,
  SynthesisEntry,
  ReasoningSection,
  ProvenanceRef,
  ArtifactListOptions,
  ArtifactListResult,
  Artifact,
  ArtifactStatus,
  ArtifactSortField,
  MetricsOptions,
  MetricsResult,
} from '@hippocortex/sdk';

API Response Format

All API responses follow a consistent envelope:

interface ApiResponse<T> {
  ok: boolean;
  data?: T;
  error?: {
    code: string;
    message: string;
    details?: unknown[];
  };
  meta?: {
    requestId: string;
    tenantId: string;
    durationMs: number;
  };
}

The SDK unwraps this envelope automatically: you receive the data payload on success, and a HippocortexError is thrown on failure.
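
The unwrapping behaves roughly like this sketch. A local ApiError class stands in for HippocortexError, whose real constructor signature is internal; it mirrors the documented code/message/details fields:

```typescript
interface ApiResponse<T> {
  ok: boolean;
  data?: T;
  error?: { code: string; message: string; details?: unknown[] };
}

// Stand-in for the SDK's HippocortexError.
class ApiError extends Error {
  constructor(public code: string, message: string, public details?: unknown[]) {
    super(message);
  }
}

// Return the payload on success; throw a typed error on failure.
function unwrap<T>(res: ApiResponse<T>): T {
  if (res.ok && res.data !== undefined) return res.data;
  const e = res.error ?? { code: 'unknown', message: 'Malformed response' };
  throw new ApiError(e.code, e.message, e.details);
}
```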