Hippocortex Feature Catalog

A comprehensive reference of every capability in the Hippocortex platform.


0. One-Line Integration (v1.1.0)

Hippocortex offers three integration methods, from zero-effort to full manual control.

Auto-Instrumentation

Sentry-style monkey-patching. One import, zero config. Every OpenAI and Anthropic SDK call automatically gets memory context injection and conversation capture.

```typescript
// TypeScript
import '@hippocortex/sdk/auto'
```

```python
# Python
import hippocortex.auto
```

How it works: On import, the module patches Completions.prototype.create (OpenAI) and Messages.prototype.create (Anthropic). Each call synthesizes relevant context, prepends it as a system message, calls the original method, then captures the conversation. All operations are fault-tolerant: if Hippocortex is unreachable, calls pass through unchanged.
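The patching strategy above can be sketched as follows (Python for brevity, with a stand-in client class; `synthesize_context` and `capture_conversation` are hypothetical helpers standing in for the SDK's internals):

```python
import functools

# Stand-in for an SDK client class; the real patch targets
# Completions.prototype.create (OpenAI) and Messages.prototype.create (Anthropic).
class Completions:
    def create(self, messages):
        return {"role": "assistant", "content": "ok"}

def synthesize_context(messages):
    # Hypothetical helper: fetch relevant memory context for this call.
    return {"role": "system", "content": "[memory context]"}

def capture_conversation(messages, response):
    # Hypothetical helper: ship the exchange back to Hippocortex.
    pass

def patch(cls):
    original = cls.create

    @functools.wraps(original)
    def create(self, messages):
        # Fault-tolerant: if memory operations fail, pass through unchanged.
        try:
            messages = [synthesize_context(messages)] + list(messages)
        except Exception:
            pass
        response = original(self, messages)
        try:
            capture_conversation(messages, response)
        except Exception:
            pass
        return response

    cls.create = create

patch(Completions)
result = Completions().create([{"role": "user", "content": "hi"}])
```

Patching the class (rather than one instance) is what lets a single import cover every client the application creates afterward.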

wrap() - Transparent Client Wrapping

Wrap your OpenAI or Anthropic client instance. Explicit, typed, per-client control.

```typescript
import { wrap } from '@hippocortex/sdk'
const openai = wrap(new OpenAI())
// Use exactly as before. Memory is transparent.
```

```python
from hippocortex import wrap
client = wrap(OpenAI())
# Use exactly as before. Memory is transparent.
```

The wrapped client keeps its original type signature. Works with OpenAI and Anthropic SDKs.

Zero-Config

Both auto-instrumentation and wrap() resolve configuration automatically, checking these sources in order of precedence:

  1. Explicit arguments
  2. Environment variables: HIPPOCORTEX_API_KEY, HIPPOCORTEX_BASE_URL
  3. .hippocortex.json file (searched from cwd upward to filesystem root)
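The resolution order can be sketched like this (the apiKey/baseUrl field names in the returned dict are illustrative, not the SDK's actual internals):

```python
import json
import os
from pathlib import Path

def resolve_config(explicit_api_key=None, explicit_base_url=None, start=None):
    """Resolve config in the documented precedence: explicit arguments,
    then environment variables, then a .hippocortex.json found by
    searching from the current directory up to the filesystem root."""
    config = {}
    # 3. Config file (lowest precedence): cwd upward to the root.
    directory = Path(start or os.getcwd()).resolve()
    for candidate in (directory, *directory.parents):
        path = candidate / ".hippocortex.json"
        if path.is_file():
            config.update(json.loads(path.read_text()))
            break
    # 2. Environment variables override the file.
    if os.environ.get("HIPPOCORTEX_API_KEY"):
        config["apiKey"] = os.environ["HIPPOCORTEX_API_KEY"]
    if os.environ.get("HIPPOCORTEX_BASE_URL"):
        config["baseUrl"] = os.environ["HIPPOCORTEX_BASE_URL"]
    # 1. Explicit arguments win.
    if explicit_api_key:
        config["apiKey"] = explicit_api_key
    if explicit_base_url:
        config["baseUrl"] = explicit_base_url
    return config

config = resolve_config(explicit_api_key="sk-demo")
```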

1. Event Capture

Hippocortex captures agent interactions as structured events for downstream compilation and retrieval.

Supported Event Types

| # | Type | Description | Example Payload Fields |
|---|------|-------------|------------------------|
| 1 | message | Conversation turns (user/assistant/system) | role, content |
| 2 | tool_call | Tool invocations with parameters | tool, args, callId |
| 3 | tool_result | Tool outputs and return values | tool, result, callId, success |
| 4 | file_edit | File modifications | path, before, after, diff |
| 5 | test_run | Test suite execution | suite, passed, failed, duration |
| 6 | command_exec | Shell command execution | command, exitCode, stdout, stderr |
| 7 | browser_action | Browser automation actions | action, url, selector, result |
| 8 | api_result | External API call results | endpoint, method, status, body |
| 9 | decision | Agent reasoning and choice points | options, chosen, reasoning |
| 10 | error | Errors and exceptions | type, message, stack, context |
| 11 | feedback | Human feedback signals | verdict, comment, rating |
| 12 | observation | Environmental observations | source, content, significance |
| 13 | outcome | Task completion and results | task, success, duration, metrics |

Batch Support

Capture up to 100 events in a single API call using POST /v1/capture/batch. Each event in the batch is processed independently, with per-event status reporting.
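A client holding more than 100 events can split them before calling POST /v1/capture/batch; the event shape below is illustrative:

```python
def chunk_events(events, max_batch=100):
    """Split events into batches that respect the 100-event limit of
    POST /v1/capture/batch."""
    return [events[i:i + max_batch] for i in range(0, len(events), max_batch)]

# Illustrative event shape: a type plus a type-specific payload.
events = [{"type": "message", "payload": {"role": "user", "content": str(i)}}
          for i in range(250)]
batches = chunk_events(events)
sizes = [len(batch) for batch in batches]
```

Here 250 events become three requests of 100, 100, and 50 events; per-event status reporting means one bad event never fails its batch.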

Deduplication

Events are deduplicated using two mechanisms:

  • Idempotency keys: Client-provided unique identifiers prevent reprocessing on retries
  • Event content hashing: SHA-256 hash of event type + sessionId + payload detects duplicate content
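The content-hash mechanism can be sketched as follows (canonical serialization with sorted keys is an assumption; the platform's exact serialization is not specified here):

```python
import hashlib
import json

def content_hash(event):
    """SHA-256 over event type + sessionId + payload. Serializing with
    sorted keys makes the hash insensitive to payload key order."""
    canonical = json.dumps(
        {"type": event["type"], "sessionId": event["sessionId"],
         "payload": event["payload"]},
        sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

a = {"type": "message", "sessionId": "s1",
     "payload": {"role": "user", "content": "hi"}}
b = {"type": "message", "sessionId": "s1",
     "payload": {"content": "hi", "role": "user"}}  # same content, reordered
```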

Salience Scoring

Every ingested event receives a salience score (0.0 to 1.0) computed at capture time. Salience indicates how significant the event is for downstream compilation and retrieval. High-salience events (errors, outcomes, decisions) are prioritized during memory compilation.

Namespace Assignment

In enterprise deployments, events are assigned to memory namespaces based on organization and team context. This enables data isolation and sensitivity classification.


2. Memory Compilation

The Memory Compiler transforms raw events into structured knowledge artifacts. It operates without any LLM calls, ensuring deterministic and hallucination-free knowledge extraction.

Compilation Process

```
Raw Events
    |
    v
Pattern Extraction (frequency, co-occurrence, sequence analysis)
    |
    v
Artifact Generation (typed knowledge structures)
    |
    v
Confidence Scoring (method strength x evidence count x recency)
    |
    v
Contradiction Detection (supersedes outdated knowledge)
    |
    v
Knowledge Artifacts (stored in PostgreSQL)
```
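One plausible reading of the confidence step (method strength x evidence count x recency) is a product of a base strength, a saturating evidence term, and a recency decay; the constants below are illustrative assumptions, not the compiler's actual values:

```python
import math

def confidence(method_strength, evidence_count, age_days,
               half_life_days=30.0, evidence_saturation=10.0):
    """Illustrative score: base method strength, scaled by a saturating
    evidence term and an exponential recency decay. The saturation and
    half-life constants are assumptions, not platform values."""
    evidence = 1.0 - math.exp(-evidence_count / evidence_saturation)
    recency = math.exp(-age_days * math.log(2) / half_life_days)
    return method_strength * evidence * recency

fresh = confidence(0.9, evidence_count=12, age_days=0)
stale = confidence(0.9, evidence_count=12, age_days=90)
```

Because every term is a deterministic function of the inputs, the same events always yield the same score, matching the zero-LLM property below.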

Key Properties

| Property | Description |
|----------|-------------|
| Zero-LLM | No language model calls. Purely algorithmic pattern extraction. |
| Deterministic | Same inputs always produce the same artifacts. |
| Incremental | Only processes events since the last compilation run. |
| Auditable | Every artifact traces back to source events. |
| Contradiction-aware | Detects and supersedes outdated knowledge. |

Compilation Modes

  • Incremental: Processes only new events since the last run (default, faster)
  • Full: Reprocesses all events from scratch (thorough, slower)

3. Knowledge Artifacts

The compiler produces five types of knowledge artifacts:

Task Schema

Learned procedures and step sequences extracted from successful task completions.

```json
{
  "type": "task_schema",
  "title": "Deploy API to Staging",
  "content": {
    "steps": [
      "Pull latest from main branch",
      "Run test suite",
      "Build Docker image",
      "Push to registry",
      "Update Kubernetes deployment"
    ],
    "preconditions": ["Tests passing", "Docker daemon running"],
    "postconditions": ["Health check passing", "Logs showing startup"]
  },
  "confidence": 0.87,
  "evidenceCount": 12
}
```

Failure Playbook

Known failure modes with causes, symptoms, and recovery steps.

```json
{
  "type": "failure_playbook",
  "title": "Database Connection Pool Exhaustion",
  "content": {
    "symptoms": ["Connection timeout errors", "Increasing latency"],
    "rootCauses": ["Missing connection release", "Excessive concurrent queries"],
    "recoverySteps": [
      "Check active connections: SELECT count(*) FROM pg_stat_activity",
      "Terminate idle connections",
      "Increase pool size if under capacity"
    ],
    "prevention": "Use a connection pooler (PgBouncer), set statement_timeout"
  },
  "confidence": 0.92,
  "evidenceCount": 8
}
```

Decision Policy

Conditional rules extracted from agent decision patterns.

```json
{
  "type": "decision_policy",
  "title": "Retry vs. Escalate Policy",
  "content": {
    "condition": "API call fails with 5xx status",
    "actions": {
      "retry": "If attempt_count < 3 and error is transient",
      "escalate": "If attempt_count >= 3 or error is persistent"
    },
    "evidence": "Observed in 15 sessions across 3 agents"
  },
  "confidence": 0.78,
  "evidenceCount": 15
}
```

Causal Pattern

Cause-and-effect relationships between events.

```json
{
  "type": "causal_pattern",
  "title": "Memory Leak from Unclosed Streams",
  "content": {
    "cause": "File streams opened without .close() or using() block",
    "effect": "Memory usage grows linearly until OOM crash",
    "frequency": "Observed 6 times in 30-day window",
    "mitigation": "Always use try/finally or using() for stream lifecycle"
  },
  "confidence": 0.85,
  "evidenceCount": 6
}
```

Strategy Template

High-level approaches for recurring problem categories.

```json
{
  "type": "strategy_template",
  "title": "Debugging Intermittent Test Failures",
  "content": {
    "approach": "Systematic isolation",
    "steps": [
      "Run failing test in isolation (rule out ordering)",
      "Check for shared mutable state",
      "Add timing instrumentation",
      "Review recent changes to shared fixtures"
    ],
    "applicability": "Any non-deterministic test failure"
  },
  "confidence": 0.74,
  "evidenceCount": 9
}
```

4. Context Synthesis

Retrieves compressed, relevant context for an agent's current query.

Performance

| Metric | Value |
|--------|-------|
| p50 latency | 18ms |
| p99 latency | 85ms |
| Max context sections | 6 |
| Default token budget | 4,000 |
| Max token budget | 32,000 |

Token Budget Management

The synthesis engine allocates tokens across reasoning sections:

| Section | Priority | Description |
|---------|----------|-------------|
| procedures | Highest | Relevant task schemas and step sequences |
| failures | High | Failure playbooks matching the query |
| decisions | Medium | Decision policies and rules |
| facts | Medium | Known facts and entity information |
| causal | Lower | Causal patterns and relationships |
| context | Lowest | General context and background |

Budget allocation uses priority-weighted distribution with dynamic reallocation: sections that need fewer tokens than allocated give surplus to higher-priority sections that have more content.
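The allocation scheme can be sketched like this (the priority weights and per-section content sizes are made-up inputs, not platform defaults):

```python
def allocate_budget(needs, weights, budget=4000):
    """Priority-weighted allocation with dynamic reallocation: each
    section gets a weighted share, sections that need less return the
    surplus, and the surplus flows to higher-priority sections that
    still have more content than budget."""
    total_weight = sum(weights.values())
    allocation = {s: int(budget * w / total_weight) for s, w in weights.items()}
    # Collect surplus from sections that need fewer tokens than allocated.
    surplus = 0
    for section, need in needs.items():
        if need < allocation[section]:
            surplus += allocation[section] - need
            allocation[section] = need
    # Hand the surplus to higher-priority sections still short of tokens.
    for section in sorted(weights, key=weights.get, reverse=True):
        if surplus <= 0:
            break
        shortfall = needs[section] - allocation[section]
        if shortfall > 0:
            extra = min(shortfall, surplus)
            allocation[section] += extra
            surplus -= extra
    return allocation

# Illustrative priority weights and per-section content sizes (tokens).
weights = {"procedures": 6, "failures": 5, "decisions": 4,
           "facts": 4, "causal": 3, "context": 2}
needs = {"procedures": 2500, "failures": 300, "decisions": 700,
         "facts": 400, "causal": 200, "context": 100}
allocation = allocate_budget(needs, weights)
```

In this example, failures needs only 300 of its 833-token share, so the surplus flows to procedures, which has more content than its initial allocation.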

Ranking Model

Context items are ranked using an 8-signal composite model:

| Signal | Weight | Description |
|--------|--------|-------------|
| Salience | 0.20 | Stored confidence/importance score |
| Recency | 0.15 | Time decay (newer items score higher) |
| Keyword Overlap | 0.15 | Query terms found in memory content |
| Entity Overlap | 0.20 | Named entities shared between query and memory |
| Graph Connectivity | 0.10 | Knowledge graph connections to query entities |
| Relation Strength | 0.05 | Direct graph relations to query subject |
| Contradiction Status | 0.10 | Active (1.0) vs deprecated (0.0) |
| Promotion Confidence | 0.05 | Original confidence assessment score |
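Since the weights sum to 1.0, the composite is a straightforward weighted sum; this sketch assumes each signal is pre-normalized to [0, 1]:

```python
# Weights from the ranking table; they sum to 1.0.
WEIGHTS = {
    "salience": 0.20, "recency": 0.15, "keyword_overlap": 0.15,
    "entity_overlap": 0.20, "graph_connectivity": 0.10,
    "relation_strength": 0.05, "contradiction_status": 0.10,
    "promotion_confidence": 0.05,
}

def composite_score(signals):
    """Weighted sum over the eight signals; missing signals count as 0."""
    return sum(weight * signals.get(name, 0.0)
               for name, weight in WEIGHTS.items())

active = {"salience": 0.9, "recency": 0.8, "keyword_overlap": 0.6,
          "entity_overlap": 0.7, "graph_connectivity": 0.5,
          "relation_strength": 0.4, "contradiction_status": 1.0,
          "promotion_confidence": 0.85}
# A deprecated item differs only in contradiction status (1.0 -> 0.0).
deprecated = dict(active, contradiction_status=0.0)
```

Note how the contradiction signal acts as a fixed 0.10 penalty on deprecated items rather than excluding them outright.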

Provenance Tracking

Every synthesis entry includes provenance references tracing back to source events:

```json
{
  "section": "procedures",
  "content": "To deploy the API service...",
  "confidence": 0.87,
  "provenance": [
    {
      "sourceType": "artifact",
      "sourceId": "art-001",
      "artifactType": "task_schema",
      "evidenceCount": 12
    }
  ]
}
```

5. HMX Protocol

The Hippocortex Memory Exchange (HMX) Protocol is an open standard for agent memory interoperability. It defines five specifications:

5.1 Event Schema Spec

Standardized format for agent interaction events across frameworks. Ensures events from OpenAI, LangGraph, CrewAI, AutoGen, and other frameworks can be captured uniformly.

5.2 Artifact Schema Spec

Defines the structure, lifecycle states, and metadata schema for knowledge artifacts. Includes versioning, confidence scoring, and deprecation semantics.

5.3 Context Pack Spec

Format for compressed context delivery. Defines section types, budget reporting, compression ratios, and provenance attachment.

5.4 Memory Fingerprint Spec

Portable compressed representation of an agent's entire memory state, enabling snapshots and transfers.

5.5 Transfer Protocol Spec

Protocol for cross-agent knowledge portability. Defines how memory fingerprints are created, validated, and applied to new agents.


6. Memory Fingerprints

Compressed representations of an agent's memory state for portability and backup.

Compression Tiers

| Tier | Compression Ratio | Contents |
|------|-------------------|----------|
| Full | ~1:1 | All artifacts, memories, and metadata |
| Standard | ~5:1 | Active artifacts and promoted memories |
| Compact | ~20:1 | Top-confidence artifacts and key decision points |

Use Cases

  • Agent cloning: Create a new agent with the same knowledge as an existing one
  • Backup and restore: Snapshot memory state before major changes
  • Knowledge audit: Inspect what an agent knows at a point in time

7. Cross-Agent Transfer

Enables knowledge portability between agents, even across different frameworks.

How It Works

  1. Source agent exports a memory fingerprint
  2. Fingerprint is validated for schema compatibility
  3. Target agent imports the fingerprint
  4. Imported artifacts are marked with transfer provenance
  5. Confidence scores are adjusted based on the transfer context

Constraints

  • Transfer respects namespace sensitivity levels
  • Vault-encrypted secrets are never included in transfers
  • Transferred knowledge is marked with lineage to the source agent

8. Adaptive Compiler

The adaptive compiler self-tunes compilation parameters based on telemetry feedback.

What It Adapts

The system adjusts 19 parameters across five categories:

| Category | Parameters | What Changes |
|----------|------------|--------------|
| Confidence Weights | method, evidence, recency weights | How confidence scores are calculated |
| Promotion Thresholds | min confidence, min evidence, contradiction threshold | When memories get promoted to artifacts |
| Ranking Weights | relevance, recency, provenance, layer priority, diversity | How retrieval results are ordered |
| Method Strengths | 5 extraction methods | Weight of different evidence sources |
| Compiler Thresholds | min confidence, min sources, decay | Compilation sensitivity |

Safety Guarantees

  • Every parameter has hard floor and ceiling bounds
  • Maximum change per adaptation cycle is capped
  • All changes are auditable with before/after values
  • Adaptation can be frozen at tenant or parameter level
  • Full rollback to baseline at any time
  • No ML, no black boxes: every adjustment uses deterministic formulas

Adaptation Cycle

  1. Aggregate telemetry from recent retrieval and outcome events
  2. Compute health signals (helpful vs harmful ratio, success rate)
  3. Calculate parameter deltas based on signals
  4. Cap deltas to safe bounds
  5. Apply changes and record audit trail
  6. Repeat on configurable schedule
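Steps 3 and 4 reduce to clamping: first to the per-cycle step cap, then to the hard bounds (the example bounds and step size below are illustrative, not actual platform defaults):

```python
def apply_delta(current, delta, floor, ceiling, max_step):
    """Clamp a proposed change to the per-cycle step cap, then clamp
    the result to the parameter's hard floor/ceiling bounds."""
    step = max(-max_step, min(max_step, delta))
    return max(floor, min(ceiling, current + step))

# e.g. a promotion threshold bounded to [0.5, 0.95], moving at most
# 0.02 per adaptation cycle (bounds and cap are illustrative).
nudged = apply_delta(current=0.70, delta=0.10, floor=0.5, ceiling=0.95, max_step=0.02)
pinned = apply_delta(current=0.94, delta=0.10, floor=0.5, ceiling=0.95, max_step=0.02)
```

A large proposed delta of 0.10 is capped to a 0.02 step, and near the ceiling the hard bound wins; both properties are what makes the adaptation loop safe to run unattended.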

9. Predictive Context

Pre-warms context packs based on predicted agent needs, reducing retrieval latency for anticipated queries.

Capabilities

  • Pattern detection: Identifies recurring query sequences (e.g., "lookup" followed by "recall" followed by "verify")
  • Artifact clustering: Detects which artifacts are frequently retrieved together
  • Workflow repetition detection: Recognizes repeated query patterns across sessions
  • Prefetch engine: Pre-assembles context packs for predicted queries and caches them
  • Confidence scoring: Only pre-warms when prediction confidence exceeds threshold

Cache Management

  • Configurable TTL for prefetched packs
  • Automatic eviction of stale predictions
  • Hit/miss tracking for accuracy measurement
  • Graceful fallback to standard retrieval on cache miss

10. Encrypted Vault

Secure storage for sensitive data with AES-256-GCM envelope encryption.

Architecture

```
Secret Value
    |
    v
Encrypt with Data Encryption Key (DEK) -- AES-256-GCM
    |
    v
Encrypted Blob + IV + Auth Tag
    |
    v
DEK encrypted with Key Encryption Key (KEK) -- envelope encryption
    |
    v
Stored in PostgreSQL (encrypted DEK + encrypted blob)
```

Features

| Feature | Description |
|---------|-------------|
| Envelope encryption | Two-layer encryption (DEK + KEK) |
| AES-256-GCM | Authenticated encryption with associated data |
| Permission-gated reveal | Secrets only revealed to authorized roles |
| Audit trail | Every reveal is logged with actor, timestamp, IP |
| Version history | Full version history for every vault item |
| Secret references | vault:// URI scheme for referencing without revealing |
| Sensitivity levels | low, medium, high, critical classification |
| Auto-archival | Items can be archived without deletion |

Vault API

  • Create vaults with sensitivity classification
  • Create, update, and archive vault items
  • Reveal items (permission-gated, audited)
  • List versions for audit
  • Query audit logs per item
  • Manage permissions per vault

11. Secret Detection

Automatically intercepts sensitive data in captured events before it enters the memory system.

Capabilities

  • Detects secrets in event payloads during capture
  • Classifies detected secrets by type and confidence level
  • High-confidence secrets are automatically redirected to the vault
  • Uncertain detections are queued for human review
  • Secret references (vault://) replace raw values in stored events

Supported Secret Types

The detector identifies common secret patterns including:

  • API keys (platform-specific patterns for AWS, GCP, GitHub, Stripe, etc.)
  • Connection strings (database, Redis, AMQP)
  • Tokens (JWT, OAuth, bearer tokens)
  • Credentials (passwords, private keys)
  • Personally identifiable information (emails, phone numbers, SSNs)
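A minimal sketch of pattern-based detection (these four regexes are illustrative; a production detector covers far more patterns and attaches confidence scores to each match):

```python
import re

# Illustrative patterns only; the real detector covers many more types.
PATTERNS = {
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "postgres_url": re.compile(r"\bpostgres(?:ql)?://\S+"),
    "jwt": re.compile(r"\beyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+"),
}

def detect_secrets(text):
    """Return (secret type, matched value) pairs for every hit."""
    return [(name, match.group(0))
            for name, pattern in PATTERNS.items()
            for match in pattern.finditer(text)]

hits = detect_secrets("export AWS_KEY=AKIAIOSFODNN7EXAMPLE")
```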

Review Workflow

Uncertain detections (confidence below threshold) are queued for human review via the dashboard:

  1. Reviewer sees the detected value (masked) and context
  2. Reviewer confirms or rejects the detection
  3. Confirmed secrets are moved to vault
  4. Rejected detections are released to normal memory

12. Enterprise RBAC

Role-based access control with organization and team scoping.

Organization Roles

| Role | Description | Permissions |
|------|-------------|-------------|
| owner | Full control, billing, delete organization | All 28 permissions |
| admin | Manage members, teams, agents, policies | 24 permissions |
| manager | Manage team members and agents | 16 permissions |
| developer | Create agents, read/write memories | 12 permissions |
| analyst | Read-only access to memories and artifacts | 8 permissions |
| viewer | Read-only access to dashboards | 4 permissions |

Team Roles

| Role | Description |
|------|-------------|
| team_lead | Full team management, member and agent control |
| team_member | Standard team operations |
| team_viewer | Read-only team access |
| team_agent | Agent identity (non-human participant) |

Permission Categories

| Category | Count | Examples |
|----------|-------|----------|
| Organization | 6 | org:read, org:update, org:delete, etc. |
| Members | 4 | member:invite, member:remove, etc. |
| Teams | 4 | team:create, team:update, team:delete, etc. |
| Agents | 4 | agent:create, agent:read, agent:update, etc. |
| Namespaces | 3 | namespace:create, namespace:read, etc. |
| Policies | 3 | policy:create, policy:read, policy:update |
| Audit | 2 | audit:read, audit:export |
| Vault | 2 | vault:read, vault:manage |

13. Memory Namespaces

Scoped collections for organizing and isolating agent memories.

Sensitivity Classification

| Level | Description | Access |
|-------|-------------|--------|
| public | Non-sensitive, shareable across teams | All organization members |
| internal | Standard business data | Team members and above |
| confidential | Sensitive business data | Managers and above |
| restricted | Highly sensitive (PII, financial, legal) | Owners and admins only |

Features

  • Namespaces belong to organizations
  • Events are assigned to namespaces at capture time
  • Retrieval is scoped to namespaces the requester can access
  • Policy engine evaluates namespace-level access rules
  • Cross-namespace retrieval requires explicit policy grants

14. Policy Engine

Fine-grained access control using allow/deny rules with priority evaluation.

Policy Structure

```json
{
  "name": "Allow engineering team to read all namespaces",
  "effect": "allow",
  "resource": "namespace:*",
  "action": "read",
  "conditions": {
    "team": "engineering"
  },
  "priority": 100
}
```

Evaluation Rules

  1. Policies are evaluated in priority order (highest first)
  2. First matching policy wins (explicit allow or deny)
  3. If no policy matches, default deny applies
  4. Deny policies always override allow policies at the same priority
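The four evaluation rules can be sketched as follows (the trailing-* wildcard matching and the equality-only condition check are simplifying assumptions):

```python
def matches(policy, resource, action, context):
    """Simplified matcher: exact resource or trailing-* wildcard,
    exact action, and equality on every condition."""
    target = policy["resource"]
    resource_ok = target == resource or (
        target.endswith("*") and resource.startswith(target[:-1]))
    conditions_ok = all(context.get(key) == value
                        for key, value in policy.get("conditions", {}).items())
    return resource_ok and policy["action"] == action and conditions_ok

def evaluate(policies, resource, action, context):
    """Highest priority first; at equal priority, deny sorts before
    allow; the first matching policy wins; no match means default deny."""
    ordered = sorted(policies,
                     key=lambda p: (-p["priority"], p["effect"] != "deny"))
    for policy in ordered:
        if matches(policy, resource, action, context):
            return policy["effect"]
    return "deny"

policies = [
    {"effect": "allow", "resource": "namespace:*", "action": "read",
     "conditions": {"team": "engineering"}, "priority": 100},
    {"effect": "deny", "resource": "namespace:finance", "action": "read",
     "conditions": {}, "priority": 100},
]
```

Sorting deny before allow within a priority tier is what implements rule 4 without a second pass.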

Resource Types

  • namespace:* or namespace:{id} for namespace-level policies
  • agent:* or agent:{id} for agent-level policies
  • vault:* or vault:{id} for vault-level policies

15. Audit Logging

Comprehensive audit trail for all system operations.

Logged Events

| Category | What Is Logged |
|----------|----------------|
| Mutations | Event captures, artifact changes, namespace changes |
| Access | Memory reads, synthesis requests, artifact retrieval |
| Vault | Secret creation, reveals, permission changes |
| Authentication | Login attempts, API key usage, token refresh |
| Administration | Member invites, role changes, policy updates |

Audit Log Fields

| Field | Description |
|-------|-------------|
| id | Unique audit entry ID |
| organizationId | Organization context |
| actorId | Who performed the action |
| actorType | user, agent, or system |
| action | What was done |
| resourceType | What type of resource was affected |
| resourceId | Which specific resource |
| metadata | Additional context (IP, user agent) |
| timestamp | When it happened (ISO 8601) |

Query Capabilities

  • Filter by actor, action, resource, time range
  • Paginated results with cursor-based navigation
  • Export capability for compliance reporting

16. Memory Lineage

Full provenance tracking showing how knowledge was derived from source events.

Lineage Graph

```
Source Event (capture)
    |
    v
Episodic Memory (raw storage)
    |
    v
Semantic Memory (pattern extraction)
    |
    v
Knowledge Artifact (compilation)
    |
    v
Context Pack Entry (synthesis)
```

Features

  • Trace any artifact back to its source events
  • View the full lineage graph for any memory
  • Understand which events contributed to a conclusion
  • Audit compliance: prove what data informed a decision

17. Lifecycle Policies

Manage the retention, archival, and deletion of knowledge artifacts.

Lifecycle States

active --> warm --> cold --> archived

| State | Retrieval | Synthesis | Description |
|-------|-----------|-----------|-------------|
| active | Yes | Yes | Fully available, included in ranking |
| warm | Yes | Deprioritized | Available but lower priority |
| cold | On request | No | Excluded from normal retrieval |
| archived | No | No | Audit-only, no retrieval |

Policy Configuration

| Parameter | Description | Default |
|-----------|-------------|---------|
| Retention period | How long to keep active | 90 days |
| Archive after | Move to archived after | 180 days |
| Delete after | Permanent deletion after | 365 days |
| Freeze | Prevent state transitions | Off |

Decay Engine

Artifacts transition through lifecycle states based on a decay score computed from:

  • Time since last usage (inactivity)
  • Usefulness telemetry signals
  • Usage frequency
  • Retrieval count

Well-used artifacts resist decay; rarely used artifacts gradually move toward archival.
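An illustrative decay model over these signals (the weights and state thresholds below are assumptions, not the platform's actual formula):

```python
def decay_score(days_since_use, retrieval_count, helpful_ratio):
    """Illustrative decay score in [0, 1]: inactivity pushes the score
    up; usage frequency and helpful telemetry pull it back down."""
    inactivity = min(1.0, days_since_use / 180.0)
    usage = min(1.0, retrieval_count / 20.0)
    raw = inactivity * (1.0 - 0.5 * usage) - 0.2 * helpful_ratio
    return max(0.0, min(1.0, raw))

def lifecycle_state(score):
    # Hypothetical thresholds for the active -> warm -> cold -> archived path.
    if score < 0.25:
        return "active"
    if score < 0.50:
        return "warm"
    if score < 0.75:
        return "cold"
    return "archived"

busy = decay_score(days_since_use=3, retrieval_count=40, helpful_ratio=0.9)
idle = decay_score(days_since_use=170, retrieval_count=1, helpful_ratio=0.0)
```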


18. Behavioral Intelligence

Analyzes tool usage patterns and decision-making behavior across agent sessions.

Capabilities

  • Tool usage analysis: Which tools are used most, in what sequences, with what success rates
  • Decision pattern extraction: Common decision points and the choices agents make
  • Session flow analysis: How agents approach different types of tasks
  • Anomaly detection: Unusual patterns that may indicate problems
  • Effectiveness scoring: Which approaches lead to successful outcomes

Applications

  • Identify underperforming agents based on behavioral patterns
  • Optimize tool configurations based on usage data
  • Detect workflow inefficiencies
  • Train new agents using behavioral templates from top performers

Feature Availability by Plan

| Feature | Free | Developer | Pro | Enterprise |
|---------|------|-----------|-----|------------|
| Event Capture | Yes | Yes | Yes | Yes |
| Memory Compilation | Yes | Yes | Yes | Yes |
| Context Synthesis | Yes | Yes | Yes | Yes |
| Knowledge Artifacts | Yes | Yes | Yes | Yes |
| Vault (Encrypted) | -- | Yes | Yes | Yes |
| Secret Detection | -- | Yes | Yes | Yes |
| RBAC | -- | -- | Yes | Yes |
| Memory Namespaces | -- | -- | Yes | Yes |
| Policy Engine | -- | -- | Yes | Yes |
| Audit Logging | -- | -- | Yes | Yes |
| Memory Lineage | -- | -- | Yes | Yes |
| Lifecycle Policies | -- | -- | Yes | Yes |
| Adaptive Compiler | -- | -- | -- | Yes |
| Predictive Context | -- | -- | -- | Yes |
| Cross-Agent Transfer | -- | -- | -- | Yes |
| Memory Fingerprints | -- | -- | -- | Yes |
| Behavioral Intelligence | -- | -- | -- | Yes |
| Custom Deployment | -- | -- | -- | Yes |
| SLA | -- | -- | -- | Yes |