Hippocortex Feature Catalog
A comprehensive reference of every capability in the Hippocortex platform.
0. One-Line Integration (v1.1.0)
Hippocortex offers three integration methods, from zero-effort to full manual control.
Auto-Instrumentation
Sentry-style monkey-patching. One import, zero config. Every OpenAI and Anthropic SDK call automatically gets memory context injection and conversation capture.
// TypeScript
import '@hippocortex/sdk/auto'
# Python
import hippocortex.auto
How it works: On import, the module patches Completions.prototype.create (OpenAI) and
Messages.prototype.create (Anthropic). Each call synthesizes relevant context, prepends it
as a system message, calls the original method, then captures the conversation. All operations
are fault-tolerant: if Hippocortex is unreachable, calls pass through unchanged.
wrap() - Transparent Client Wrapping
Wrap your OpenAI or Anthropic client instance. Explicit, typed, per-client control.
import { wrap } from '@hippocortex/sdk'
const openai = wrap(new OpenAI())
// Use exactly as before. Memory is transparent.
from hippocortex import wrap
client = wrap(OpenAI())
# Use exactly as before. Memory is transparent.
The wrapped client keeps its original type signature. Works with OpenAI and Anthropic SDKs.
Zero-Config
Both auto and wrap() resolve configuration automatically, in order of precedence:
- Explicit arguments
- Environment variables: HIPPOCORTEX_API_KEY, HIPPOCORTEX_BASE_URL
- A .hippocortex.json file (searched from the cwd upward to the filesystem root)
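The resolution order above can be sketched as a small helper. `resolve_config` is a hypothetical name, and the `apiKey`/`baseUrl` field names in `.hippocortex.json` are assumptions; the precedence logic (explicit, then env, then file found by walking upward) follows the list above.

```python
import json
import os
from pathlib import Path

def resolve_config(api_key=None, base_url=None, start=None):
    """Resolve config: explicit args > env vars > .hippocortex.json."""
    explicit = {"apiKey": api_key, "baseUrl": base_url}
    env = {"apiKey": os.environ.get("HIPPOCORTEX_API_KEY"),
           "baseUrl": os.environ.get("HIPPOCORTEX_BASE_URL")}
    file_cfg = {}
    # Walk from the starting directory up to the filesystem root.
    directory = Path(start or Path.cwd()).resolve()
    for candidate_dir in (directory, *directory.parents):
        candidate = candidate_dir / ".hippocortex.json"
        if candidate.is_file():
            file_cfg = json.loads(candidate.read_text())
            break
    return {key: explicit[key] or env[key] or file_cfg.get(key)
            for key in explicit}
```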
1. Event Capture
Hippocortex captures agent interactions as structured events for downstream compilation and retrieval.
Supported Event Types
| # | Type | Description | Example Payload Fields |
|---|---|---|---|
| 1 | message | Conversation turns (user/assistant/system) | role, content |
| 2 | tool_call | Tool invocations with parameters | tool, args, callId |
| 3 | tool_result | Tool outputs and return values | tool, result, callId, success |
| 4 | file_edit | File modifications | path, before, after, diff |
| 5 | test_run | Test suite execution | suite, passed, failed, duration |
| 6 | command_exec | Shell command execution | command, exitCode, stdout, stderr |
| 7 | browser_action | Browser automation actions | action, url, selector, result |
| 8 | api_result | External API call results | endpoint, method, status, body |
| 9 | decision | Agent reasoning and choice points | options, chosen, reasoning |
| 10 | error | Errors and exceptions | type, message, stack, context |
| 11 | feedback | Human feedback signals | verdict, comment, rating |
| 12 | observation | Environmental observations | source, content, significance |
| 13 | outcome | Task completion and results | task, success, duration, metrics |
Batch Support
Capture up to 100 events in a single API call using POST /v1/capture/batch. Each event in the batch is processed independently, with per-event status reporting.
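A sketch of assembling a batch payload for POST /v1/capture/batch. The event field names follow the table above; the exact wire format (`sessionId`, `events`, `idempotencyKey`) is an illustrative assumption.

```python
import json

MAX_BATCH = 100  # per-call limit stated above

def build_batch(session_id, events):
    """Serialize up to MAX_BATCH events into one batch request body."""
    if len(events) > MAX_BATCH:
        raise ValueError(f"batch limited to {MAX_BATCH} events, got {len(events)}")
    return json.dumps({
        "sessionId": session_id,
        "events": [
            {"type": e["type"], "payload": e["payload"],
             # optional client-supplied key for safe retries (see Deduplication)
             **({"idempotencyKey": e["idempotencyKey"]}
                if "idempotencyKey" in e else {})}
            for e in events
        ],
    })
```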
Deduplication
Events are deduplicated using two mechanisms:
- Idempotency keys: Client-provided unique identifiers prevent reprocessing on retries
- Event content hashing: SHA-256 hash of event type + sessionId + payload detects duplicate content
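The content-hash mechanism can be sketched as follows. The canonicalization (sorted-key JSON, a separator byte between fields) is an assumption; the spec above only states SHA-256 over event type + sessionId + payload.

```python
import hashlib
import json

def event_content_hash(event_type, session_id, payload):
    """SHA-256 over event type + sessionId + canonical payload JSON."""
    # Sorted keys make the hash insensitive to dict ordering.
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256()
    for part in (event_type, session_id, canonical):
        digest.update(part.encode("utf-8"))
        digest.update(b"\x00")  # separator so fields cannot bleed together
    return digest.hexdigest()
```

Two events with identical content hash to the same value regardless of payload key order, so retransmitted duplicates are detected even without an idempotency key.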
Salience Scoring
Every ingested event receives a salience score (0.0 to 1.0) computed at capture time. Salience indicates how significant the event is for downstream compilation and retrieval. High-salience events (errors, outcomes, decisions) are prioritized during memory compilation.
Namespace Assignment
In enterprise deployments, events are assigned to memory namespaces based on organization and team context. This enables data isolation and sensitivity classification.
2. Memory Compilation
The Memory Compiler transforms raw events into structured knowledge artifacts. It operates without any LLM calls, ensuring deterministic and hallucination-free knowledge extraction.
Compilation Process
Raw Events
|
v
Pattern Extraction (frequency, co-occurrence, sequence analysis)
|
v
Artifact Generation (typed knowledge structures)
|
v
Confidence Scoring (method strength x evidence count x recency)
|
v
Contradiction Detection (supersedes outdated knowledge)
|
v
Knowledge Artifacts (stored in PostgreSQL)
Key Properties
| Property | Description |
|---|---|
| Zero-LLM | No language model calls. Purely algorithmic pattern extraction. |
| Deterministic | Same inputs always produce the same artifacts. |
| Incremental | Only processes events since the last compilation run. |
| Auditable | Every artifact traces back to source events. |
| Contradiction-aware | Detects and supersedes outdated knowledge. |
Compilation Modes
- Incremental: Processes only new events since the last run (default, faster)
- Full: Reprocesses all events from scratch (thorough, slower)
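The Confidence Scoring step above (method strength x evidence count x recency) can be sketched deterministically. The saturation curve and exponential half-life are assumptions; the compiler's actual constants are not published.

```python
import math
import time

def confidence(method_strength, evidence_count, last_seen_ts,
               now=None, half_life_days=30.0):
    """Deterministic confidence: method strength x evidence x recency."""
    now = now if now is not None else time.time()
    # Evidence saturates: each additional observation adds less.
    evidence_factor = 1.0 - math.exp(-evidence_count / 5.0)
    # Recency decays exponentially with a configurable half-life.
    age_days = max(0.0, (now - last_seen_ts) / 86400.0)
    recency_factor = 0.5 ** (age_days / half_life_days)
    return method_strength * evidence_factor * recency_factor
```

Because the formula is purely algebraic, the same inputs always yield the same score, matching the Deterministic property in the table above.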
3. Knowledge Artifacts
The compiler produces five types of knowledge artifacts:
Task Schema
Learned procedures and step sequences extracted from successful task completions.
{
"type": "task_schema",
"title": "Deploy API to Staging",
"content": {
"steps": [
"Pull latest from main branch",
"Run test suite",
"Build Docker image",
"Push to registry",
"Update Kubernetes deployment"
],
"preconditions": ["Tests passing", "Docker daemon running"],
"postconditions": ["Health check passing", "Logs showing startup"]
},
"confidence": 0.87,
"evidenceCount": 12
}
Failure Playbook
Known failure modes with causes, symptoms, and recovery steps.
{
"type": "failure_playbook",
"title": "Database Connection Pool Exhaustion",
"content": {
"symptoms": ["Connection timeout errors", "Increasing latency"],
"rootCauses": ["Missing connection release", "Excessive concurrent queries"],
"recoverySteps": [
"Check active connections: SELECT count(*) FROM pg_stat_activity",
"Terminate idle connections",
"Increase pool size if under capacity"
],
"prevention": "Use connection pooler (PGBouncer), set statement_timeout"
},
"confidence": 0.92,
"evidenceCount": 8
}
Decision Policy
Conditional rules extracted from agent decision patterns.
{
"type": "decision_policy",
"title": "Retry vs. Escalate Policy",
"content": {
"condition": "API call fails with 5xx status",
"actions": {
"retry": "If attempt_count < 3 and error is transient",
"escalate": "If attempt_count >= 3 or error is persistent"
},
"evidence": "Observed in 15 sessions across 3 agents"
},
"confidence": 0.78,
"evidenceCount": 15
}
Causal Pattern
Cause-and-effect relationships between events.
{
"type": "causal_pattern",
"title": "Memory Leak from Unclosed Streams",
"content": {
"cause": "File streams opened without .close() or using() block",
"effect": "Memory usage grows linearly until OOM crash",
"frequency": "Observed 6 times in 30-day window",
"mitigation": "Always use try/finally or using() for stream lifecycle"
},
"confidence": 0.85,
"evidenceCount": 6
}
Strategy Template
High-level approaches for recurring problem categories.
{
"type": "strategy_template",
"title": "Debugging Intermittent Test Failures",
"content": {
"approach": "Systematic isolation",
"steps": [
"Run failing test in isolation (rule out ordering)",
"Check for shared mutable state",
"Add timing instrumentation",
"Review recent changes to shared fixtures"
],
"applicability": "Any non-deterministic test failure"
},
"confidence": 0.74,
"evidenceCount": 9
}
4. Context Synthesis
Retrieves compressed, relevant context for an agent's current query.
Performance
| Metric | Value |
|---|---|
| p50 latency | 18ms |
| p99 latency | 85ms |
| Max context sections | 6 |
| Default token budget | 4,000 |
| Max token budget | 32,000 |
Token Budget Management
The synthesis engine allocates tokens across reasoning sections:
| Section | Priority | Description |
|---|---|---|
| procedures | Highest | Relevant task schemas and step sequences |
| failures | High | Failure playbooks matching the query |
| decisions | Medium | Decision policies and rules |
| facts | Medium | Known facts and entity information |
| causal | Lower | Causal patterns and relationships |
| context | Lowest | General context and background |
Budget allocation uses priority-weighted distribution with dynamic reallocation: sections that need fewer tokens than allocated give surplus to higher-priority sections that have more content.
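The priority-weighted distribution with surplus reallocation can be sketched as follows. The integer weights and the single reallocation pass are assumptions consistent with the description above.

```python
def allocate_budget(budget, sections):
    """sections: list of (name, weight, tokens_needed), highest priority first."""
    total_weight = sum(w for _, w, _ in sections)
    grants = {name: int(budget * w / total_weight) for name, w, _ in sections}
    # Collect surplus from sections that need less than they were granted.
    surplus = 0
    for name, _, needed in sections:
        if needed < grants[name]:
            surplus += grants[name] - needed
            grants[name] = needed
    # Hand surplus to higher-priority sections that still have content.
    for name, _, needed in sections:  # list is already priority-ordered
        if surplus <= 0:
            break
        extra = min(surplus, needed - grants[name])
        if extra > 0:
            grants[name] += extra
            surplus -= extra
    return grants
```

For example, with a 4,000-token budget, a `failures` section that only needs 500 tokens donates its unused grant to `procedures`, which had more content than its initial share.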
Ranking Model
Context items are ranked using an 8-signal composite model:
| Signal | Weight | Description |
|---|---|---|
| Salience | 0.20 | Stored confidence/importance score |
| Recency | 0.15 | Time decay (newer items score higher) |
| Keyword Overlap | 0.15 | Query terms found in memory content |
| Entity Overlap | 0.20 | Named entities shared between query and memory |
| Graph Connectivity | 0.10 | Knowledge graph connections to query entities |
| Relation Strength | 0.05 | Direct graph relations to query subject |
| Contradiction Status | 0.10 | Active (1.0) vs deprecated (0.0) |
| Promotion Confidence | 0.05 | Original confidence assessment score |
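The composite score is a weighted sum using the table's weights, which sum to 1.0. Assuming each signal is normalized into [0, 1] before combination (the table does not state this explicitly), the model reduces to:

```python
# Weights taken directly from the table above; they sum to 1.0.
WEIGHTS = {
    "salience": 0.20,
    "recency": 0.15,
    "keyword_overlap": 0.15,
    "entity_overlap": 0.20,
    "graph_connectivity": 0.10,
    "relation_strength": 0.05,
    "contradiction_status": 0.10,
    "promotion_confidence": 0.05,
}

def rank_score(signals):
    """Weighted sum of normalized [0, 1] signals; missing signals count as 0."""
    return sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)
```

Note the effect of contradiction status: a deprecated item (0.0) loses exactly 0.10 relative to an otherwise identical active item (1.0).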
Provenance Tracking
Every synthesis entry includes provenance references tracing back to source events:
{
"section": "procedures",
"content": "To deploy the API service...",
"confidence": 0.87,
"provenance": [
{
"sourceType": "artifact",
"sourceId": "art-001",
"artifactType": "task_schema",
"evidenceCount": 12
}
]
}
5. HMX Protocol
The Hippocortex Memory Exchange (HMX) Protocol is an open standard for agent memory interoperability. It defines five specifications:
5.1 Event Schema Spec
Standardized format for agent interaction events across frameworks. Ensures events from OpenAI, LangGraph, CrewAI, AutoGen, and other frameworks can be captured uniformly.
5.2 Artifact Schema Spec
Defines the structure, lifecycle states, and metadata schema for knowledge artifacts. Includes versioning, confidence scoring, and deprecation semantics.
5.3 Context Pack Spec
Format for compressed context delivery. Defines section types, budget reporting, compression ratios, and provenance attachment.
5.4 Memory Fingerprint Spec
Portable compressed representation of an agent's entire memory state, enabling snapshots and transfers.
5.5 Transfer Protocol Spec
Protocol for cross-agent knowledge portability. Defines how memory fingerprints are created, validated, and applied to new agents.
6. Memory Fingerprints
Compressed representations of an agent's memory state for portability and backup.
Compression Tiers
| Tier | Compression Ratio | Contents |
|---|---|---|
| Full | ~1:1 | All artifacts, memories, and metadata |
| Standard | ~5:1 | Active artifacts and promoted memories |
| Compact | ~20:1 | Top-confidence artifacts and key decision points |
Use Cases
- Agent cloning: Create a new agent with the same knowledge as an existing one
- Backup and restore: Snapshot memory state before major changes
- Knowledge audit: Inspect what an agent knows at a point in time
7. Cross-Agent Transfer
Enables knowledge portability between agents, even across different frameworks.
How It Works
- Source agent exports a memory fingerprint
- Fingerprint is validated for schema compatibility
- Target agent imports the fingerprint
- Imported artifacts are marked with transfer provenance
- Confidence scores are adjusted based on the transfer context
Constraints
- Transfer respects namespace sensitivity levels
- Vault-encrypted secrets are never included in transfers
- Transferred knowledge is marked with lineage to the source agent
8. Adaptive Compiler
The adaptive compiler self-tunes compilation parameters based on telemetry feedback.
What It Adapts
The system adjusts 19 parameters across five categories:
| Category | Parameters | What Changes |
|---|---|---|
| Confidence Weights | method, evidence, recency weights | How confidence scores are calculated |
| Promotion Thresholds | min confidence, min evidence, contradiction threshold | When memories get promoted to artifacts |
| Ranking Weights | relevance, recency, provenance, layer priority, diversity | How retrieval results are ordered |
| Method Strengths | 5 extraction methods | Weight of different evidence sources |
| Compiler Thresholds | min confidence, min sources, decay | Compilation sensitivity |
Safety Guarantees
- Every parameter has hard floor and ceiling bounds
- Maximum change per adaptation cycle is capped
- All changes are auditable with before/after values
- Adaptation can be frozen at tenant or parameter level
- Full rollback to baseline at any time
- No ML, no black boxes: every adjustment uses deterministic formulas
Adaptation Cycle
- Aggregate telemetry from recent retrieval and outcome events
- Compute health signals (helpful vs harmful ratio, success rate)
- Calculate parameter deltas based on signals
- Cap deltas to safe bounds
- Apply changes and record audit trail
- Repeat on configurable schedule
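The delta-capping and audit steps above can be sketched with deterministic formulas, in keeping with the "no ML, no black boxes" guarantee. Parameter names and bounds here are illustrative, not the product's actual 19-parameter set.

```python
def apply_delta(current, delta, floor, ceiling, max_step):
    """Cap the per-cycle change, then enforce hard floor/ceiling bounds."""
    step = max(-max_step, min(max_step, delta))
    proposed = current + step
    return max(floor, min(ceiling, proposed))

def adapt(params, deltas, bounds):
    """Apply capped deltas in place; return an audit trail of before/after."""
    audit = []
    for name, delta in deltas.items():
        floor, ceiling, max_step = bounds[name]
        before = params[name]
        params[name] = apply_delta(before, delta, floor, ceiling, max_step)
        audit.append({"param": name, "before": before, "after": params[name]})
    return audit
```

Freezing a parameter is then just excluding it from `deltas`, and rollback is restoring the recorded `before` values.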
9. Predictive Context
Pre-warms context packs based on predicted agent needs, reducing retrieval latency for anticipated queries.
Capabilities
- Pattern detection: Identifies recurring query sequences (e.g., "lookup" followed by "recall" followed by "verify")
- Artifact clustering: Detects which artifacts are frequently retrieved together
- Workflow repetition detection: Recognizes repeated query patterns across sessions
- Prefetch engine: Pre-assembles context packs for predicted queries and caches them
- Confidence scoring: Only pre-warms when prediction confidence exceeds threshold
Cache Management
- Configurable TTL for prefetched packs
- Automatic eviction of stale predictions
- Hit/miss tracking for accuracy measurement
- Graceful fallback to standard retrieval on cache miss
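The cache-management bullets above can be sketched as a small TTL cache. `PrefetchCache` is a hypothetical class, not the SDK API; it shows TTL expiry, stale eviction, hit/miss counters, and fallback to standard retrieval.

```python
import time

class PrefetchCache:
    """TTL cache for pre-warmed context packs with hit/miss tracking."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.entries = {}  # query -> (pack, expires_at)
        self.hits = 0
        self.misses = 0

    def put(self, query, pack, now=None):
        now = now if now is not None else time.time()
        self.entries[query] = (pack, now + self.ttl)

    def get(self, query, fallback, now=None):
        now = now if now is not None else time.time()
        entry = self.entries.get(query)
        if entry and entry[1] > now:
            self.hits += 1
            return entry[0]
        self.entries.pop(query, None)  # evict the stale prediction
        self.misses += 1
        return fallback(query)         # graceful standard retrieval
```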
10. Encrypted Vault
Secure storage for sensitive data with AES-256-GCM envelope encryption.
Architecture
Secret Value
|
v
Encrypt with Data Encryption Key (DEK) -- AES-256-GCM
|
v
Encrypted Blob + IV + Auth Tag
|
DEK encrypted with Key Encryption Key (KEK) -- envelope encryption
|
v
Stored in PostgreSQL (encrypted DEK + encrypted blob)
Features
| Feature | Description |
|---|---|
| Envelope encryption | Two-layer encryption (DEK + KEK) |
| AES-256-GCM | Authenticated encryption with associated data |
| Permission-gated reveal | Secrets only revealed to authorized roles |
| Audit trail | Every reveal is logged with actor, timestamp, IP |
| Version history | Full version history for every vault item |
| Secret references | vault:// URI scheme for referencing without revealing |
| Sensitivity levels | low, medium, high, critical classification |
| Auto-archival | Items can be archived without deletion |
Vault API
- Create vaults with sensitivity classification
- Create, update, and archive vault items
- Reveal items (permission-gated, audited)
- List versions for audit
- Query audit logs per item
- Manage permissions per vault
11. Secret Detection
Automatically intercepts sensitive data in captured events before it enters the memory system.
Capabilities
- Detects secrets in event payloads during capture
- Classifies detected secrets by type and confidence level
- High-confidence secrets are automatically redirected to the vault
- Uncertain detections are queued for human review
- Secret references (vault://) replace raw values in stored events
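Parsing such a reference might look like the sketch below. The URI layout `vault://<vault-id>/<item-id>` is an assumption based on the scheme name; only the scheme itself is stated above.

```python
from urllib.parse import urlparse

def parse_vault_ref(ref):
    """Split an assumed vault://<vault-id>/<item-id> reference."""
    parsed = urlparse(ref)
    if parsed.scheme != "vault" or not parsed.netloc or not parsed.path.strip("/"):
        raise ValueError(f"not a vault reference: {ref!r}")
    return {"vaultId": parsed.netloc, "itemId": parsed.path.strip("/")}
```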
Supported Secret Types
The detector identifies common secret patterns including:
- API keys (platform-specific patterns for AWS, GCP, GitHub, Stripe, etc.)
- Connection strings (database, Redis, AMQP)
- Tokens (JWT, OAuth, bearer tokens)
- Credentials (passwords, private keys)
- Personally identifiable information (emails, phone numbers, SSNs)
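A few of the types above can be illustrated with simplified regexes. These patterns are examples only, not the product's rule set; real detectors combine many more patterns with entropy and context checks to set confidence.

```python
import re

# Illustrative patterns for a handful of well-known secret formats.
PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "jwt": re.compile(r"\beyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\b"),
    "postgres_url": re.compile(r"\bpostgres(?:ql)?://\S+:\S+@\S+"),
}

def detect_secrets(text):
    """Return a list of {type, value} findings for known patterns."""
    findings = []
    for secret_type, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append({"type": secret_type, "value": match.group(0)})
    return findings
```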
Review Workflow
Uncertain detections (confidence below threshold) are queued for human review via the dashboard:
- Reviewer sees the detected value (masked) and context
- Reviewer confirms or rejects the detection
- Confirmed secrets are moved to vault
- Rejected detections are released to normal memory
12. Enterprise RBAC
Role-based access control with organization and team scoping.
Organization Roles
| Role | Description | Permissions |
|---|---|---|
| owner | Full control, billing, delete organization | All 28 permissions |
| admin | Manage members, teams, agents, policies | 24 permissions |
| manager | Manage team members and agents | 16 permissions |
| developer | Create agents, read/write memories | 12 permissions |
| analyst | Read-only access to memories and artifacts | 8 permissions |
| viewer | Read-only access to dashboards | 4 permissions |
Team Roles
| Role | Description |
|---|---|
| team_lead | Full team management, member and agent control |
| team_member | Standard team operations |
| team_viewer | Read-only team access |
| team_agent | Agent identity (non-human participant) |
Permission Categories
| Category | Count | Examples |
|---|---|---|
| Organization | 6 | org:read, org:update, org:delete, etc. |
| Members | 4 | member:invite, member:remove, etc. |
| Teams | 4 | team:create, team:update, team:delete, etc. |
| Agents | 4 | agent:create, agent:read, agent:update, etc. |
| Namespaces | 3 | namespace:create, namespace:read, etc. |
| Policies | 3 | policy:create, policy:read, policy:update |
| Audit | 2 | audit:read, audit:export |
| Vault | 2 | vault:read, vault:manage |
13. Memory Namespaces
Scoped collections for organizing and isolating agent memories.
Sensitivity Classification
| Level | Description | Access |
|---|---|---|
| public | Non-sensitive, shareable across teams | All organization members |
| internal | Standard business data | Team members and above |
| confidential | Sensitive business data | Managers and above |
| restricted | Highly sensitive (PII, financial, legal) | Owners and admins only |
Features
- Namespaces belong to organizations
- Events are assigned to namespaces at capture time
- Retrieval is scoped to namespaces the requester can access
- Policy engine evaluates namespace-level access rules
- Cross-namespace retrieval requires explicit policy grants
14. Policy Engine
Fine-grained access control using allow/deny rules with priority evaluation.
Policy Structure
{
"name": "Allow engineering team to read all namespaces",
"effect": "allow",
"resource": "namespace:*",
"action": "read",
"conditions": {
"team": "engineering"
},
"priority": 100
}
Evaluation Rules
- Policies are evaluated in priority order (highest first)
- First matching policy wins (explicit allow or deny)
- If no policy matches, default deny applies
- Deny policies always override allow policies at the same priority
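The evaluation rules can be sketched directly: sort by priority descending with deny before allow at equal priority, return the first match, and default to deny. Wildcard matching on `{type}:*` resources is an assumption consistent with the resource patterns below.

```python
def policy_matches(policy, resource, action, context):
    """Check resource pattern, action, and all conditions against context."""
    pattern = policy["resource"]
    resource_ok = (pattern == resource
                   or (pattern.endswith(":*")
                       and resource.startswith(pattern[:-1])))
    action_ok = policy["action"] in (action, "*")
    conditions_ok = all(context.get(k) == v
                        for k, v in policy.get("conditions", {}).items())
    return resource_ok and action_ok and conditions_ok

def evaluate(policies, resource, action, context):
    """First matching policy wins; deny beats allow at equal priority."""
    ordered = sorted(policies,
                     key=lambda p: (-p["priority"], p["effect"] != "deny"))
    for policy in ordered:
        if policy_matches(policy, resource, action, context):
            return policy["effect"] == "allow"
    return False  # default deny
```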
Resource Types
- namespace:* or namespace:{id} for namespace-level policies
- agent:* or agent:{id} for agent-level policies
- vault:* or vault:{id} for vault-level policies
15. Audit Logging
Comprehensive audit trail for all system operations.
Logged Events
| Category | What Is Logged |
|---|---|
| Mutations | Event captures, artifact changes, namespace changes |
| Access | Memory reads, synthesis requests, artifact retrieval |
| Vault | Secret creation, reveals, permission changes |
| Authentication | Login attempts, API key usage, token refresh |
| Administration | Member invites, role changes, policy updates |
Audit Log Fields
| Field | Description |
|---|---|
| id | Unique audit entry ID |
| organizationId | Organization context |
| actorId | Who performed the action |
| actorType | user, agent, or system |
| action | What was done |
| resourceType | What type of resource was affected |
| resourceId | Which specific resource |
| metadata | Additional context (IP, user agent) |
| timestamp | When it happened (ISO 8601) |
Query Capabilities
- Filter by actor, action, resource, time range
- Paginated results with cursor-based navigation
- Export capability for compliance reporting
16. Memory Lineage
Full provenance tracking showing how knowledge was derived from source events.
Lineage Graph
Source Event (capture)
|
v
Episodic Memory (raw storage)
|
v
Semantic Memory (pattern extraction)
|
v
Knowledge Artifact (compilation)
|
v
Context Pack Entry (synthesis)
Features
- Trace any artifact back to its source events
- View the full lineage graph for any memory
- Understand which events contributed to a conclusion
- Audit compliance: prove what data informed a decision
17. Lifecycle Policies
Manage the retention, archival, and deletion of knowledge artifacts.
Lifecycle States
active --> warm --> cold --> archived
| State | Retrieval | Synthesis | Description |
|---|---|---|---|
| active | Yes | Yes | Fully available, included in ranking |
| warm | Yes | Deprioritized | Available but lower priority |
| cold | On request | No | Excluded from normal retrieval |
| archived | No | No | Audit-only, no retrieval |
Policy Configuration
| Parameter | Description | Default |
|---|---|---|
| Retention period | How long to keep active | 90 days |
| Archive after | Move to archived after | 180 days |
| Delete after | Permanent deletion after | 365 days |
| Freeze | Prevent state transitions | Off |
Decay Engine
Artifacts transition through lifecycle states based on a decay score computed from:
- Time since last usage (inactivity)
- Usefulness telemetry signals
- Usage frequency
- Retrieval count
Well-used artifacts resist decay. Rarely-used artifacts gradually move toward archival.
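A sketch of a decay score built from the four signals listed above, mapped onto the state diagram. The weights, saturation points, and state thresholds are assumptions for illustration.

```python
def decay_score(days_since_use, usefulness, usage_frequency, retrieval_count):
    """0.0 = no decay pressure, 1.0 = maximal; usage resists decay."""
    inactivity = min(1.0, days_since_use / 90.0)  # saturates at 90 days
    resistance = (0.5 * usefulness
                  + 0.3 * min(1.0, usage_frequency / 10.0)
                  + 0.2 * min(1.0, retrieval_count / 50.0))
    return max(0.0, min(1.0, inactivity * (1.0 - resistance)))

def lifecycle_state(score):
    """Map a decay score onto active -> warm -> cold -> archived."""
    if score < 0.25:
        return "active"
    if score < 0.5:
        return "warm"
    if score < 0.75:
        return "cold"
    return "archived"
```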
18. Behavioral Intelligence
Analyzes tool usage patterns and decision-making behavior across agent sessions.
Capabilities
- Tool usage analysis: Which tools are used most, in what sequences, with what success rates
- Decision pattern extraction: Common decision points and the choices agents make
- Session flow analysis: How agents approach different types of tasks
- Anomaly detection: Unusual patterns that may indicate problems
- Effectiveness scoring: Which approaches lead to successful outcomes
Applications
- Identify underperforming agents based on behavioral patterns
- Optimize tool configurations based on usage data
- Detect workflow inefficiencies
- Train new agents using behavioral templates from top performers
Feature Availability by Plan
| Feature | Free | Developer | Pro | Enterprise |
|---|---|---|---|---|
| Event Capture | Yes | Yes | Yes | Yes |
| Memory Compilation | Yes | Yes | Yes | Yes |
| Context Synthesis | Yes | Yes | Yes | Yes |
| Knowledge Artifacts | Yes | Yes | Yes | Yes |
| Vault (Encrypted) | -- | Yes | Yes | Yes |
| Secret Detection | -- | Yes | Yes | Yes |
| RBAC | -- | -- | Yes | Yes |
| Memory Namespaces | -- | -- | Yes | Yes |
| Policy Engine | -- | -- | Yes | Yes |
| Audit Logging | -- | -- | Yes | Yes |
| Memory Lineage | -- | -- | Yes | Yes |
| Lifecycle Policies | -- | -- | Yes | Yes |
| Adaptive Compiler | -- | -- | -- | Yes |
| Predictive Context | -- | -- | -- | Yes |
| Cross-Agent Transfer | -- | -- | -- | Yes |
| Memory Fingerprints | -- | -- | -- | Yes |
| Behavioral Intelligence | -- | -- | -- | Yes |
| Custom Deployment | -- | -- | -- | Yes |
| SLA | -- | -- | -- | Yes |