One-liner setup

Connect your AI agent in one command.

Auto-detects Claude Desktop, Claude Code, Cursor, Windsurf, Cline and Gemini CLI. Prompts for your API key and configures everything in seconds.

Auto-setup wizard
$ npx @kronvex/setup

Detects installed agents, writes the MCP config automatically. Or follow a manual guide below. Get your free key.

Supported agents & tools
Claude · Desktop & Code
Cursor · mcp.json
Windsurf · mcp_config.json
Cline · VS Code extension
Gemini CLI · settings.json
GitHub Copilot · VS Code / settings
Kilo Code · VS Code extension
OpenCode · Terminal CLI
ChatGPT · Desktop (MCP)
Any MCP Client · JSON config
Advanced — non-interactive (CI / scripts)
npx @kronvex/setup --key kv-your-key --agent your-agent-id

Pass --key and --agent to skip the interactive prompts — useful for automated provisioning.

MCP · Auto-detected

Claude Desktop & Claude Code

Add persistent cross-session memory to Anthropic's own clients — Claude Desktop (Mac & Windows) and Claude Code CLI. The Kronvex MCP server exposes three tools the agent calls automatically: remember, recall, and inject_context.

Run the setup wizard — it auto-detects Claude Desktop and Claude Code on your machine and patches the right config file:

npx @kronvex/setup

No flags? The wizard will prompt you interactively. Get your free key.

Agent Rules — teach Claude to use memory automatically

Add these instructions to your CLAUDE.md (or any project's .claude/skills/memory.md) so Claude automatically loads and saves context on every task — without being prompted:

## Memory Protocol (Kronvex)

BEFORE every task:
- Call `kronvex_recall` with the current task description as the query
- Inject any returned memories into your working context

AFTER every task:
- Call `kronvex_remember` to store:
  - Key decisions made
  - Patterns or preferences discovered
  - Architecture choices and their rationale
  - Anything that would help in future sessions

Memory types to use:
- `procedural` — how to do things (preferred approaches, commands, style rules)
- `semantic`   — facts about the project (stack, team, constraints)
- `episodic`   — what happened (bugs fixed, features shipped, blockers hit)
Verify setup
Ask Claude: "Use Kronvex to remember that I prefer TypeScript strict mode."
→ Claude calls kronvex_remember and confirms: "Memory stored."

Ask Claude: "What are my coding preferences?"
→ Claude calls kronvex_recall and returns the memory above.
MCP · Auto-detected

Cursor

Give Cursor's agent mode persistent memory across every coding session. Kronvex remembers your project architecture, conventions, past decisions, and preferences — so you never repeat yourself.

Run the setup wizard — it detects .cursor/mcp.json and patches it automatically:

npx @kronvex/setup

Get your free key.

Cursor Rules — automatic memory on every task

Add this rule to .cursor/rules/kronvex.mdc so Cursor's agent recalls and stores context automatically on each task:

---
description: Persistent memory via Kronvex MCP
globs: ["**/*"]
alwaysApply: true
---

## Kronvex Memory Protocol

BEFORE starting any task:
1. Call `kronvex_recall` with the task description as query
2. Read returned memories and incorporate them into your plan

AFTER completing any task:
1. Call `kronvex_remember` to store:
   - Architecture decisions made
   - Patterns and conventions established
   - Bugs fixed and their root causes
   - Preferences expressed by the user

Use memory_type: "procedural" for how-to knowledge, "semantic" for project facts.
Verify setup
In Cursor agent mode: "Remember that this project uses pnpm workspaces."
→ Cursor calls kronvex_remember → "Memory stored successfully."

Next session: "What's the package manager for this project?"
→ Cursor calls kronvex_recall → returns the memory above.
MCP · VS Code Extension

Cline (VS Code)

Connect Kronvex to Cline's autonomous coding agent inside VS Code. Cline will remember your architecture decisions, coding conventions, and preferences across every session — without being prompted.

Run the setup wizard — it auto-detects Cline's config and patches it automatically:

npx @kronvex/setup

Get your free key.

Cline Rules — automatic memory on every task

Create .clinerules/kronvex.md in your project root to teach Cline to automatically load and save memory:

## Memory Protocol (Kronvex)

BEFORE every task:
1. Call `kronvex_recall` with the current task description as query
2. Incorporate returned memories into your plan

AFTER every task:
1. Call `kronvex_remember` to store:
   - Architecture decisions made
   - Patterns and conventions discovered
   - Bugs fixed and their root causes
   - User preferences and constraints

Use memory_type: "procedural" for how-to knowledge, "semantic" for project facts,
"episodic" for events (bugs, deploys, decisions).
Verify setup
Ask Cline: "Use Kronvex to remember that this project uses ESLint with airbnb config."
→ Cline calls kronvex_remember → "Memory stored successfully."

Next session: "What linter config does this project use?"
→ Cline calls kronvex_recall → returns the memory above.
MCP · Auto-detected

Windsurf

Add persistent cross-session memory to Codeium's Windsurf editor. Cascade (Windsurf's AI agent) will remember your architecture, past decisions, and preferences — so context persists between every session.

Run the setup wizard — it detects ~/.codeium/windsurf/mcp_config.json and patches it automatically:

npx @kronvex/setup

Get your free key.

Windsurf Rules — automatic memory on every task

Create .windsurf/rules/kronvex.md in your project root to teach Cascade to recall and store memory automatically:

## Memory Protocol (Kronvex)

BEFORE every task:
1. Call `kronvex_recall` with the current task description as query
2. Incorporate returned memories into your plan

AFTER every task:
1. Call `kronvex_remember` to store:
   - Architecture and design decisions
   - Patterns and conventions established
   - Preferences expressed by the user
   - Bugs fixed and their root causes

Prefer memory_type "procedural" for workflows, "semantic" for project facts.
Verify setup
Ask Cascade: "Use Kronvex to remember that I prefer dark theme in all projects."
→ Cascade calls kronvex_remember → "Memory stored successfully."

Next session: "What are my UI preferences?"
→ Cascade calls kronvex_recall → returns the memory above.
MCP · Auto-detected

Gemini CLI

Extend Google's Gemini CLI agent with persistent cross-session memory. Your preferences, project context, and past decisions survive between every run — no more repeating yourself.

Run the setup wizard — it detects ~/.gemini/settings.json and patches it automatically:

npx @kronvex/setup

Get your free key.

GEMINI.md — automatic memory on every task

Add this block to GEMINI.md in your project root (or ~/.gemini/GEMINI.md globally) to teach Gemini CLI to use Kronvex automatically:

## Memory Protocol (Kronvex)

BEFORE every task:
1. Call `kronvex_recall` with the current task description as query
2. Incorporate returned memories into your plan

AFTER every task:
1. Call `kronvex_remember` to store:
   - Key decisions and their rationale
   - Patterns discovered in the codebase
   - User preferences and constraints
   - Bugs resolved and root causes

Use memory_type: "semantic" for facts, "procedural" for workflows.
Verify setup
gemini "Use Kronvex to remember that I prefer dark theme in all projects."
→ Gemini calls kronvex_remember → "Memory stored successfully."

Next session: "What are my UI preferences?"
→ Gemini calls kronvex_recall → returns the memory above.
MCP · VS Code settings

GitHub Copilot

Give GitHub Copilot Chat a persistent memory layer — remembers your coding standards, architecture decisions, and personal preferences across every VS Code session.

Run the setup wizard — it auto-detects VS Code and patches the Copilot MCP config:

npx @kronvex/setup

Get your free key.

Copilot Instructions — automatic memory on every task

Create .github/copilot-instructions.md in your project root to teach Copilot to use memory automatically:

## Memory Protocol (Kronvex)

BEFORE every task:
1. Call `kronvex_recall` with the current task description as query
2. Incorporate returned memories into your plan

AFTER every task:
1. Call `kronvex_remember` to store:
   - Architecture and design decisions
   - Coding standards and conventions
   - Personal preferences expressed
   - Bugs fixed and root causes found

Use memory_type: "procedural" for coding conventions, "semantic" for project facts.
Verify setup
In Copilot Chat: "Use Kronvex to remember that our API uses REST with JSON:API spec."
→ Copilot calls kronvex_remember → "Memory stored successfully."

Next session: "What API spec does this project use?"
→ Copilot calls kronvex_recall → returns the memory above.
2 min setup · MCP · Universal

Any MCP Client

Kronvex ships as a standard MCP server — works with any client that supports the Model Context Protocol.

Run this command — it auto-detects your installed tools and configures them:

npx @kronvex/setup

The wizard detects your config files and injects the Kronvex MCP server automatically.

Get your free key.
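If you prefer manual configuration, most MCP clients accept a JSON entry of roughly this shape. This is a sketch: the server package name @kronvex/mcp-server and the KRONVEX_API_KEY variable are assumptions, not confirmed here; copy the exact entry from your dashboard or the wizard's output.

```json
{
  "mcpServers": {
    "kronvex": {
      "command": "npx",
      "args": ["-y", "@kronvex/mcp-server"],
      "env": {
        "KRONVEX_API_KEY": "kv-your-key"
      }
    }
  }
}
```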

Test your setup
Call tool: kronvex_recall  with  { "agent_id": "test-agent", "query": "test" }
→ Should return an empty list (no memories yet) — confirming the connection works.
2 min setup · MCP · VS Code extension

Kilo Code

Add persistent memory to Kilo Code — the VS Code AI coding extension. One command installs and configures Kronvex as an MCP server.

Run the setup wizard — it detects Kilo Code and configures the MCP server automatically:

npx @kronvex/setup

The wizard reads your VS Code settings and injects the Kronvex MCP server into Kilo Code's config.

Test your setup
In Kilo Code chat: "remember that this project uses TypeScript strict mode"
→ Kilo Code calls kronvex_remember → "Memory stored."

Next session: "what are the project conventions?"
→ Kilo Code calls kronvex_recall → returns the memory above.
2 min setup · MCP · Terminal CLI

OpenCode

OpenCode is a terminal-based AI coding assistant with full MCP support. Connect Kronvex to give it persistent memory across coding sessions.

Run the setup wizard — it detects OpenCode and configures Kronvex automatically:

npx @kronvex/setup

Get your free key.

Test your setup
Tell OpenCode: "remember that we use pnpm, not npm"
→ calls kronvex_remember → "Memory stored."

Later: "what package manager does this project use?"
→ calls kronvex_recall → returns the memory above.
2 min setup · MCP · ChatGPT Desktop

ChatGPT

ChatGPT Desktop supports MCP servers — connect Kronvex to give your ChatGPT sessions persistent, searchable memory.

Run the setup wizard — it detects ChatGPT Desktop and injects the Kronvex MCP server:

npx @kronvex/setup

Restart ChatGPT Desktop after setup to activate the Kronvex memory tools.

Get your free key.

Test your setup
In ChatGPT: "remember that I'm working on a SaaS product called Kronvex"
→ ChatGPT calls kronvex_remember → "Memory stored."

Next session: "what product am I working on?"
→ ChatGPT calls kronvex_recall → returns the memory above.
REST-first · works everywhere

Works with your entire AI stack.

If it can make an HTTP POST, it already works with Kronvex. Native SDKs for Python and Node.js, framework adapters for LangChain, CrewAI and more, plus no-code nodes for n8n and Flowise.

pip install kronvex
npm install kronvex
REST API · X-API-Key header
AI Frameworks & Orchestrators
LangChain / LangGraph
Drop-in persistent memory. Works with ConversationChain, agents, and stateful LangGraph nodes.
Read guide →
CrewAI
Give each crew agent its own persistent memory. Context survives between crew runs and task handoffs.
Read guide →
AutoGen
Persist memories across AutoGen agent conversations. Use recall to seed agent context before each turn.
Read guide →
OpenAI Agents SDK
Add persistent memory to the official OpenAI Agents SDK. Store traces, recall context, inject before each run.
Read guide →
MCP Server
Expose Kronvex tools to Claude, Cursor, Windsurf, Cline and any MCP-compatible host in one command.
Setup guide →
No-Code & Low-Code Platforms
n8n
Community Node
Install n8n-nodes-kronvex directly in your n8n instance. Remember, recall, inject in any workflow.
Read guide →
Flowise
Integrate as an external memory node in Flowise chatflows via the HTTP Request node.
Read guide →
Dify
Call Kronvex from Dify's HTTP node. Store interaction summaries, recall context cross-session.
Read guide →
Botpress / Voiceflow / Rasa
Available via REST API — if your platform supports HTTP requests, Kronvex works today.
REST API docs →
SDKs & Languages
Python SDK
PyPI v0.4.0
pip install kronvex — extras: [langchain] [crewai] [async]
Read guide →
Node.js SDK
npm
npm install kronvex — ESM + CJS, full TypeScript types.
Read guide →
REST API
3 endpoints, 1 header. curl, fetch, httpx, FastAPI, Express — if it sends HTTP, it works.
Read guide →

Don't see your integration?

Kronvex is REST-first — if your tool can make an HTTP request, it works today. Missing a native SDK or tutorial? Let us know.

5 min setup · Integration · Microsoft AutoGen

AutoGen

Persist memories across Microsoft AutoGen multi-agent conversations. Use Kronvex to store interaction summaries and recall context before each agent turn.

1. Install dependencies

pip install kronvex pyautogen

2. Get your API key

Get your free key and copy your kv- key.

Test your setup
result = agent.remember("AutoGen integration test")
print(result)  # → {"id": "...", "content": "AutoGen integration test"}
5 min setup · Integration · HTTP Request node

Flowise

Integrate persistent memory into Flowise chatflows using the HTTP Request node — no code required.

Add an HTTP Request node with these settings:

Method: POST
URL:    https://api.kronvex.io/api/v1/agents/YOUR_AGENT_ID/remember
Headers:
  X-API-Key: kv-your-key
  Content-Type: application/json
Body:
  { "content": "{{ $json.message }}" }
5 min setup · Integration · HTTP node

Dify

Add cross-session memory to Dify workflows using the HTTP node — store summaries after each conversation, recall context before responses.

Get your free key to replace kv-your-key.

In Dify, add an HTTP node configured as follows:

Method: POST
URL:    https://api.kronvex.io/api/v1/agents/YOUR_AGENT_ID/remember
Headers:
  X-API-Key: kv-your-key
  Content-Type: application/json
Body:
  { "content": "{{#sys.query#}}" }
2 min setup · SDK · pip install

Python SDK

The official Kronvex Python SDK — sync and async support, works with any Python AI framework: FastAPI, LangChain, CrewAI, OpenAI Agents, and more.

1. Install

pip install kronvex
# Async support: pip install "kronvex[async]"

2. Use it

Get your free key to replace kv-your-key.

from kronvex import Kronvex

kv = Kronvex("kv-your-key")
agent = kv.agents("my-agent")  # creates if not exists

# Store a memory
agent.remember("User prefers formal tone", memory_type="preference")

# Recall semantically
memories = agent.recall("tone preference", top_k=5)
print(memories[0].content)  # "User prefers formal tone"

# Inject context into your LLM prompt
ctx = agent.inject_context("What does this user prefer?")
print(ctx.context_block)  # ready-to-use system prompt block
Test your setup
Run the snippet above.
→ Should print: "User prefers formal tone"
2 min setup · SDK · npm install

Node.js / TypeScript SDK

The official Kronvex TypeScript/JavaScript SDK — ESM and CJS builds, full type safety with autocomplete.

1. Install

npm install kronvex
# or: yarn add kronvex  |  pnpm add kronvex

2. Use it

Get your free key to replace kv-your-key.

import { Kronvex } from 'kronvex';

const kv = new Kronvex('kv-your-key');
const agent = kv.agents('my-agent');  // creates if not exists

// Store a memory
await agent.remember('User prefers formal tone', { memoryType: 'preference' });

// Recall semantically
const memories = await agent.recall('tone preference', { topK: 5 });
console.log(memories[0].content);  // "User prefers formal tone"

// Inject context into LLM prompt
const ctx = await agent.injectContext('What does this user prefer?');
console.log(ctx.contextBlock);  // ready-to-use system prompt block
Test your setup
Run the snippet above with ts-node or tsx.
→ Should print: "User prefers formal tone"
2 min setup · Integration · LangChain / LangGraph

LangChain / LangGraph

Drop Kronvex into any LangChain chain or LangGraph node as a plug-and-play memory component.

Run this command — it auto-detects your installed tools and configures them:

npx @kronvex/setup

The wizard detects your config files and injects the Kronvex MCP server automatically.

Test your setup
memory.save_context({"input": "test"}, {"output": "hello"})
memories = memory.load_memory_variables({})
print(memories)
→ Should print stored context from Kronvex.
2 min setup · Integration · CrewAI

CrewAI

Give your CrewAI agents long-term memory that persists across crew runs and tasks.

Run this command — it auto-detects your installed tools and configures them:

npx @kronvex/setup

The wizard detects your config files and injects the Kronvex MCP server automatically.

Test your setup
result = tool._run("remember: project uses Python 3.11")
print(result)
→ Should return a confirmation string from Kronvex.
2 min setup · Integration · n8n community node

n8n

Add persistent memory to your n8n AI workflows — store and recall context across automation runs.

Run this command — it auto-detects your installed tools and configures them:

npx @kronvex/setup

The wizard detects your config files and injects the Kronvex MCP server automatically.

Test your setup
Execute a "Remember" node with content: "test memory from n8n"
→ The node output should show { "id": "...", "content": "test memory from n8n" }
2 min setup · Integration · OpenAI Agents SDK

OpenAI Agents SDK

Register Kronvex as tools in OpenAI's Agents SDK — your agents remember and recall across every run.

Run this command — it auto-detects your installed tools and configures them:

npx @kronvex/setup

The wizard detects your config files and injects the Kronvex MCP server automatically.

Test your setup
print(remember("Test memory from OpenAI Agents SDK"))
→ Should print: "Stored"
print(recall("test"))
→ Should print: "Test memory from OpenAI Agents SDK"
2 min setup · Integration · xAI / Grok SDK

Grok SDK (xAI)

xAI's Grok uses an OpenAI-compatible API — plug Kronvex into your Grok agents using the Python SDK for persistent, searchable memory.

Run the setup wizard to configure the Kronvex MCP server:

npx @kronvex/setup
Test your setup
print(chat("My stack is FastAPI + PostgreSQL"))
# Second call — Grok now remembers your stack
print(chat("What database should I use for a new microservice?"))
→ Grok answers based on the stored context.
2 min setup · Integration · Anthropic Agent SDK

Claude Agent SDK

Build Claude-powered agents with persistent memory — every conversation, every tool call remembered and retrievable.

Run the setup wizard to configure Kronvex:

npx @kronvex/setup
Test your setup
print(chat("I'm building a SaaS with Stripe and FastAPI"))
# Second message — Claude remembers your stack
print(chat("What auth approach fits my stack?"))
→ Claude answers with full context from memory.
2 min setup · Integration · Google Agent Development Kit

Google ADK

Add Kronvex memory tools to Google's Agent Development Kit — give your ADK agents persistent context across every invocation.

Run the setup wizard to configure Kronvex:

npx @kronvex/setup
Test your setup
result = agent.run("remember: our CI runs on GitHub Actions")
# Later:
result = agent.run("how does our CI work?")
→ Agent recalls the memory and answers correctly.
2 min setup · Integration · Vercel AI SDK

Vercel AI SDK

Add persistent memory to Vercel AI SDK agents — remember user preferences, context, and history across every call.

Run the setup wizard to configure Kronvex:

npx @kronvex/setup
Test your setup
// First call
generateText({ prompt: "I prefer TypeScript with strict mode" })
→ Agent calls remember → "Stored"

// Next request — agent has full context
generateText({ prompt: "set up my tsconfig" })
→ Agent generates config with strictMode: true
2 min setup · REST API · curl / HTTP

REST API (curl)

Use Kronvex directly over HTTP — works with any language, any runtime, any environment that can make HTTP requests.

Run this command — it auto-detects your installed tools and configures them:

npx @kronvex/setup

The wizard detects your config files and injects the Kronvex MCP server automatically.

Test your setup
Run the store command above.
→ Should return: { "id": "...", "content": "User prefers formal tone", "confidence": 1.0 }
Documentation

Kronvex Memory API

Three endpoints. Persistent context. Production-ready in under 5 minutes.

Base URL: https://api.kronvex.io · All endpoints require the X-API-Key header · EU hosted

Quickstart

1

Get your API key

Call POST /auth/demo or sign up. Your key looks like kv-XXXXXXXXXXXXXXXX

2

Register an agent

Create an isolated memory namespace per user, bot, or session.

3

Store → Recall → Inject

Push memories after interactions, recall semantically before responding.

python
# pip install kronvex
from kronvex import Kronvex

client = Kronvex("kv-your-key")
agent = client.agents("my-agent")

# Store a memory
agent.remember("User is based in Paris, prefers concise answers")

# Recall semantically
results = agent.recall(query="user location", top_k=5)

# Inject LLM-ready context block
context = agent.inject_context("What does the user prefer?")
print(context.context_block)  # prepend to your system prompt
import requests

# Auth header
H = {"X-API-Key": "kv-your-key"}
BASE = "https://api.kronvex.io"

# 1. Create agent
agent = requests.post(f"{BASE}/api/v1/agents", headers=H,
  json={"name": "sales-bot"}).json()
agent_id = agent["id"]

# 2. Store a memory
requests.post(f"{BASE}/api/v1/agents/{agent_id}/remember", headers=H,
  json={"content": "User prefers async comms",
       "memory_type": "episodic"})

# 3. Recall semantically
r = requests.post(f"{BASE}/api/v1/agents/{agent_id}/recall",
  headers=H,
  json={"query": "communication style", "top_k": 5})
print(r.json())
const H = {'X-API-Key':'kv-your-key','Content-Type':'application/json'};
const BASE = 'https://api.kronvex.io';

// 1. Create agent
const agent = await fetch(`${BASE}/api/v1/agents`,{
  method:'POST',headers:H,
  body:JSON.stringify({name:'sales-bot'})
}).then(r=>r.json());

// 2. Store memory
await fetch(`${BASE}/api/v1/agents/${agent.id}/remember`,{
  method:'POST',headers:H,
  body:JSON.stringify({
    content:'User prefers async comms',
    memory_type:'episodic'
  })
});

// 3. Recall
const r = await fetch(`${BASE}/api/v1/agents/${agent.id}/recall`,{
  method:'POST',headers:H,
  body:JSON.stringify({query:'comms',top_k:5})
});
console.log(await r.json());
# 1. Create agent
curl -X POST https://api.kronvex.io/api/v1/agents \
  -H "X-API-Key: kv-your-key" -H "Content-Type: application/json" \
  -d '{"name":"sales-bot"}'

# 2. Store a memory (use agent UUID from step 1)
curl -X POST https://api.kronvex.io/api/v1/agents/{id}/remember \
  -H "X-API-Key: kv-your-key" -H "Content-Type: application/json" \
  -d '{"content":"Prefers async comms","memory_type":"episodic"}'

# 3. Recall
curl -X POST https://api.kronvex.io/api/v1/agents/{id}/recall \
  -H "X-API-Key: kv-your-key" -H "Content-Type: application/json" \
  -d '{"query":"comms","top_k":5}'

Authentication

All requests require an X-API-Key header. Keys look like kv-XXXXXXXXXXXXXXXX.

Never expose your API key in client-side code. Always call Kronvex from your backend.

Get a demo key

No sign-up required. Call POST /auth/demo to get a free API key instantly delivered to your inbox.

POST /auth/demo · Get a free demo API key
Parameters:
email (string, required): Your email address; the key will be sent here
name (string, optional): Your name (used in the welcome email)
usecase (string, optional): What you're building; helps us improve
curl
curl -X POST https://api.kronvex.io/auth/demo \
  -H "Content-Type: application/json" \
  -d '{"email":"[email protected]","name":"Alex","usecase":"support bot"}'

# Response: {"message":"Key sent to your email","plan":"demo"}
# Demo plan: 1 agent · 100 memories · 30 req/min
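Given the demo plan's 30 requests/minute cap, a simple client-side throttle helps you avoid rate-limit errors. A minimal sketch; the decorator and the placeholder store_memory function are illustrative, not part of the SDK:

```python
import time

def rate_limited(calls_per_minute: int = 30):
    """Space out calls so we stay under a per-minute cap."""
    min_interval = 60.0 / calls_per_minute
    def decorator(fn):
        last_call = [0.0]  # mutable closure cell holding the last call time
        def wrapper(*args, **kwargs):
            wait = min_interval - (time.monotonic() - last_call[0])
            if wait > 0:
                time.sleep(wait)
            last_call[0] = time.monotonic()
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@rate_limited(30)
def store_memory(content: str) -> str:
    # placeholder for the real POST /remember call
    return f"stored: {content}"

print(store_memory("hello"))  # → stored: hello
```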
Use Cases

What you can build with Kronvex

From customer support to legal research — any B2B AI agent that interacts with humans gets measurably better with persistent memory.

SUPPORT

Support agents that know your users

Your AI support agent knows a user's full history — past tickets, preferences, account tier, prior resolutions. No more "can you repeat the issue?" — the agent already knows.

client.agents("support-bot").remember(
  "User John prefers email updates, on Pro plan since Jan, had billing issue in March"
)
ctx = client.agents("support-bot").inject_context("renewal question")
SALES & CRM

Sales agents with perfect deal memory

Every call, every objection, every buying signal — your AI sales agent remembers it all and opens the next conversation exactly where you left off.

# After each sales call, persist deal context
client.agents("sales-bot").remember(
  "Acme Corp: budget €50k, decision by Q3, main blocker is IT security review",
  memory_type="episodic"
)
CODING

Coding assistants with project memory

Your AI dev tool remembers architecture decisions, naming conventions, tech debt notes, and team preferences — producing code that fits your actual codebase.

1. On project start: remember architecture decisions & stack choices
2. Each session: inject_context automatically fills system prompt
3. Result: agent never asks "what's your stack?" again
LEGAL

Legal assistants with case memory

Your legal AI retains full case context — precedents cited, client instructions, document history — across every session. Never re-read the file. Just ask.

PERSONAL AI

Personal assistants with long-term memory

Your AI assistant builds a persistent user profile — preferences, habits, history, goals — and carries that context across every conversation, indefinitely.

client.agents("personal-assistant").remember(
  "User prefers concise replies, morning briefings at 8am, based in Paris"
)
context = client.agents("personal-assistant").inject_context(user_message)
Also used for
Healthcare assistants
Patient history, medication context, care continuity
HR & Recruitment
Candidate profiles, interview notes, hiring context
Marketing automation
Brand voice, campaign history, audience preferences
Research agents
Literature review, citation tracking, hypothesis memory

Store Memory

POST /api/v1/agents/{id}/remember · Store a memory
Parameters:
content (string, required): The memory text
memory_type (enum, optional): episodic | semantic | procedural (default: episodic)
session_id (string, optional): Pin to a specific session
ttl_days (integer, optional): Auto-expire after N days (1–3650)
pinned (boolean, optional): Pinned memories never expire
POST /api/v1/agents/{id}/recall · Semantic recall
Parameters:
query (string, required): Natural language search query
top_k (integer, optional): Results count (default 5, max 20)
threshold (float, optional): Minimum similarity score (default 0.5)
session_id (string, optional): Filter by session
memory_type (string, optional): Filter by type
context_messages (array, optional): Last N messages for contextual re-ranking. Each item: {role, content}
POST /api/v1/agents/{id}/inject-context · Ready-to-use context block

Returns a pre-formatted context string ready to inject directly into your LLM system prompt.

Parameters:
agent_id (string, required): Agent identifier
message (string, required): User message (used as the search query)
DELETE /api/v1/agents/{agent_id}/memories/{id} · Delete a memory

Permanently removes a memory by UUID. Returns 204 No Content. GDPR-compliant targeted deletion.

GET /api/v1/agents/{agent_id}/health · Memory health scores

Returns memory health scores for an agent.

Response fields:
coverage_score (float): How broadly topics are covered (0–1)
freshness_score (float): Recency of stored memories (0–1)
coherence_score (float): Internal consistency of the memory set (0–1)
utilization_score (float): Proportion of the memory quota in use (0–1)
recommendations (array): Actionable suggestions to improve memory health
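As a consumption sketch, a small helper can flag the dimensions that need attention. The field names come from the response fields above; the sample values are made up:

```python
def low_scores(health: dict, threshold: float = 0.7) -> list:
    """Return the health dimensions scoring below `threshold`."""
    dimensions = ["coverage_score", "freshness_score",
                  "coherence_score", "utilization_score"]
    return [d for d in dimensions if health.get(d, 0.0) < threshold]

# Illustrative payload; only the field names are from the docs
sample = {
    "coverage_score": 0.82,
    "freshness_score": 0.55,
    "coherence_score": 0.91,
    "utilization_score": 0.30,
    "recommendations": ["Store memories more regularly"],
}
print(low_scores(sample))  # → ['freshness_score', 'utilization_score']
```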
POST /api/v1/agents/{agent_id}/consolidate · Trigger memory consolidation

Manually trigger memory consolidation. Clusters semantically similar memories and merges them into meta-memories via GPT-4o-mini. Runs in background.

response
{
  "status": "consolidation_queued",
  "agent_id": "..."
}

Recall

Search your agent's memory semantically. Returns memories ranked by vector similarity.

POST /api/v1/agents/{agent_id}/recall
query (string, required): Natural language query to search memories
top_k (integer, default 5): Number of results to return (max 20)
threshold (float, default 0.5): Minimum similarity score (0–1)
memory_type (string, optional): Filter by type: semantic, episodic, procedural
context_messages (array, optional): Last N conversation messages for contextual re-ranking. Each item: {role, content}. When provided, results are re-ranked by GPT-4o-mini based on conversation context.
python
import requests

response = requests.post(
    "https://api.kronvex.io/api/v1/agents/{agent_id}/recall",
    headers={"X-API-Key": "kv-your-key"},
    json={
        "query": "user communication preferences",
        "top_k": 5,
        "threshold": 0.5
    }
)
# Returns: {query, results: [{memory, similarity}], total_found}
Performance note: First-time queries call OpenAI embeddings (~2–4s). Repeated identical queries are served from an in-process LRU cache (<5ms). The pgvector similarity search itself runs in <50ms.
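Client-side, the response shape noted in the snippet above can be filtered like this. The nesting of results items is an assumption based on that comment; verify it against a live response:

```python
def top_memories(recall_response: dict, min_similarity: float = 0.5) -> list:
    """Pull memory texts above a similarity cutoff from a /recall response."""
    return [r["memory"] for r in recall_response.get("results", [])
            if r["similarity"] >= min_similarity]

# Illustrative payload shaped like {query, results: [{memory, similarity}], total_found}
sample = {
    "query": "communication style",
    "results": [
        {"memory": "User prefers async comms", "similarity": 0.87},
        {"memory": "User is based in Paris", "similarity": 0.41},
    ],
    "total_found": 2,
}
print(top_memories(sample))  # → ['User prefers async comms']
```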

Inject Context

The most powerful endpoint. Pass the user's latest message — get back a formatted context block ready to inject into your LLM system prompt.

 This endpoint combines recall + formatting in one call. Use it for every LLM turn.
POST /api/v1/agents/{agent_id}/inject-context
message (string, required): The user's current message
top_k (integer, default 5): Memories to inject (max 20)
threshold (float, default 0.5): Minimum relevance score
python
ctx = requests.post(
    "https://api.kronvex.io/api/v1/agents/{agent_id}/inject-context",
    headers={"X-API-Key": "kv-your-key"},
    json={"message": user_message}
).json()["context_block"]

# Inject into your LLM
response = openai.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": ctx + your_system_prompt},
        {"role": "user", "content": user_message}
    ]
)

Agents

Each agent has an isolated memory namespace. Create one per use case or customer.

POST /api/v1/agents · Create agent
name (string, required): Unique name for this agent
description (string, optional): Human-readable description
metadata (object, optional): Arbitrary key-value metadata
GET /api/v1/agents · List agents

Returns all agents for your API key with memory count.

Delete Memory

Delete a specific memory or all memories for an agent.

DELETE /api/v1/agents/{id}/memories/{mem_id} · Delete one memory by ID
DELETE /api/v1/agents/{id}/memories · Delete ALL memories for an agent

Memory Types

Clotho · Lachesis · Atropos — the three Moirai govern the thread of life. Kronvex maps them to the three memory types: what is stored, what is recalled, what expires.

SEMANTIC

Facts and persistent knowledge about the user or world.

"User works in fintech" · "Located in Paris"

EPISODIC

Past events and interactions — what happened in previous sessions.

"Reported billing issue on Oct 3"

PROCEDURAL

How the user wants things done — behavioral preferences.

"Always reply in French" · "Use bullet points"

FACT

Stable facts about a person, entity, or the world. Slow decay (180 days).

"Company founded in 2018" · "CEO is Alice"

PREFERENCE

How someone likes things done. Medium decay (60 days).

"Prefers short answers" · "Wants code examples"

CONTEXT

Situational/temporary info. Fast decay (3 days).

"Currently evaluating enterprise plan" · "In a meeting"

TTL & Decay

Set ttl_days to auto-expire memories. Use pinned: true to prevent any expiry.

python
import requests

BASE = "https://api.kronvex.io/api/v1"
H = {"X-API-Key": "kv-your-key"}

# Expire in 30 days (e.g. trial period)
requests.post(f"{BASE}/memories", headers=H,
  json={"agent_id": "bot", "content": "Trial active", "ttl_days": 30})

# Pin permanently — never expires
requests.post(f"{BASE}/memories", headers=H,
  json={"agent_id": "bot", "content": "VIP customer", "pinned": True})

Confidence Scoring

Each recalled memory includes a confidence score (0–1). Higher = more relevant. The score combines semantic similarity, recency, and access frequency.

formula
confidence = similarity × 0.6
           + recency    × 0.2   # sigmoid, 30-day inflection
           + frequency  × 0.2   # log-scaled access count
  Use the threshold parameter on /recall to filter results below a minimum confidence score.
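A minimal sketch of a threshold-filtered recall call; the `recall` helper name and its defaults mirror the parameter descriptions above and are not an official SDK function:

```python
import requests

BASE = "https://api.kronvex.io/api/v1"
HEADERS = {"X-API-Key": "kv-your-key"}

def recall(agent_id, query, top_k=5, threshold=0.5):
    """POST /recall, dropping memories that score below `threshold` (0-1)."""
    r = requests.post(
        f"{BASE}/agents/{agent_id}/recall",
        headers=HEADERS,
        json={"query": query, "top_k": top_k, "threshold": threshold},
    )
    r.raise_for_status()
    return r.json()
```

Raising the threshold trades recall volume for precision: fewer, more relevant memories reach your prompt.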

SDKs & Integrations

Official SDKs for Python and Node.js. Both sync and async clients available. The SDK mirrors the REST API exactly — use whichever style you prefer.

Python SDK

INSTALL
pip install kronvex
# Async support: pip install "kronvex[async]"
python
from kronvex import Kronvex

client = Kronvex("kv-your-key")

# Get or create an agent (idempotent)
agent = client.agents("my-agent")

# Store a memory
agent.remember("User is based in Paris, prefers concise answers")

# Recall semantically
results = agent.recall(query="user location", top_k=5)

# Inject context into your LLM system prompt
context = agent.inject_context("What does the user prefer?")
print(context.context_block)  # paste into your system prompt

Node.js / TypeScript SDK

INSTALL
npm install kronvex
# or: pnpm add kronvex / yarn add kronvex
typescript
import { Kronvex } from 'kronvex';

const client = new Kronvex('kv-your-key');
const agent = client.agents('my-agent');

await agent.remember('User prefers dark mode and concise replies');
const ctx = await agent.injectContext('What UI preferences does the user have?');
console.log(ctx.contextBlock);  // inject into your LLM system prompt
  All SDK methods map 1:1 to the REST API. You can always drop down to raw requests / fetch if you prefer.

Error Codes

Status  Code             Description
400     INVALID_REQUEST  Missing or malformed parameters
401     UNAUTHORIZED     Missing or invalid API key
404     NOT_FOUND        Agent or memory not found
429     LIMIT_REACHED    Memory or agent quota exceeded
429     RATE_LIMITED     Too many requests — back off and retry
500     INTERNAL_ERROR   Server error — contact support

Rate Limits

All plans are billed monthly. The Demo plan is free forever. Upgrade at any time from your dashboard.

Plan        Price      Agents     Memories   Req/min  Req/day
Demo        Free       1          100        30       500
Builder     €29/mo     5          20,000     120      1,000
Startup     €99/mo     15         75,000     300      5,000
Business    €349/mo    50         500,000    600      25,000
Growth      €599/mo    30         300,000    1,000    100,000
Scale       €1,499/mo  Unlimited  Unlimited  2,000    Unlimited
Enterprise  Custom     Unlimited  Unlimited  Custom   Custom

Rate limit response headers: X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset

  A 429 LIMIT_REACHED response means your memory or agent quota is full. Upgrade your plan or delete old memories from the dashboard.

🦜 LangChain

Use Kronvex as a custom memory store in LangChain conversation chains. The pattern: store memories after each interaction, recall relevant ones before each LLM call.

Install

pip install kronvex langchain langchain-openai

Custom memory class

from kronvex import Kronvex
from langchain.schema import BaseMemory
from typing import Dict, Any, List

class KronvexMemory(BaseMemory):
    """Persistent cross-session memory powered by Kronvex."""

    client: Any = None
    agent: Any = None
    memory_key: str = "history"

    def __init__(self, api_key: str, agent_id: str, **kwargs):
        super().__init__(**kwargs)
        self.client = Kronvex(api_key)
        self.agent = self.client.agent(agent_id)

    @property
    def memory_variables(self) -> List[str]:
        return [self.memory_key]

    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
        query = inputs.get("input", "")
        ctx = self.agent.inject_context(query=query, top_k=5)
        return {self.memory_key: ctx.context_block}

    def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, Any]) -> None:
        user_msg = inputs.get("input", "")
        ai_msg = outputs.get("output", "")
        if user_msg:
            self.agent.remember(content=user_msg, memory_type="episodic")
        if ai_msg:
            self.agent.remember(content=ai_msg, memory_type="episodic")

    def clear(self) -> None:
        pass  # Use Kronvex dashboard to manage memories

Use in a chain

from langchain_openai import ChatOpenAI
from langchain.chains import ConversationChain

memory = KronvexMemory(
    api_key="kv-your-api-key",
    agent_id="your-agent-id"
)

chain = ConversationChain(
    llm=ChatOpenAI(model="gpt-4o"),
    memory=memory,
    verbose=True
)

# Context from Kronvex is automatically injected
response = chain.predict(input="What are my preferences?")
💡 Tip: Set session_id in inject_context() to scope memories per conversation thread.

🤖 CrewAI

Give your CrewAI agents persistent memory across runs. Agents can recall past interactions and store new knowledge automatically.

Install

pip install kronvex crewai

Memory tool for CrewAI agents

from crewai import Agent, Task, Crew
from crewai.tools import tool
from kronvex import Kronvex

kv = Kronvex("kv-your-api-key")
agent_mem = kv.agent("your-agent-id")

@tool("Recall from memory")
def recall_memory(query: str) -> str:
    """Search past memories relevant to the query."""
    result = agent_mem.recall(query=query, top_k=5)
    return "
".join([m.content for m in result.memories])

@tool("Store in memory")
def store_memory(content: str) -> str:
    """Store a new piece of information in long-term memory."""
    agent_mem.remember(content=content, memory_type="semantic")
    return "Memory stored successfully."

# Attach tools to your agent
sales_agent = Agent(
    role="Sales Assistant",
    goal="Help customers based on their history",
    tools=[recall_memory, store_memory],
    verbose=True
)

🕸 LangGraph

Add memory nodes to any LangGraph StateGraph. The recall node runs before your LLM node and injects relevant context; the store node persists the exchange after completion.

Install

pip install "kronvex[langgraph]" langgraph

Memory nodes in a StateGraph

from langgraph.graph import StateGraph, END
from typing import TypedDict, Annotated, Optional
import operator

from kronvex.integrations.langgraph import make_recall_node, make_store_node

class AgentState(TypedDict):
    messages: Annotated[list, operator.add]
    memory_context: Optional[str]

recall_node = make_recall_node("kv-your-api-key", "your-agent-id")
store_node  = make_store_node("kv-your-api-key", "your-agent-id")

builder = StateGraph(AgentState)
builder.add_node("recall", recall_node)
builder.add_node("agent",  call_model)   # your LLM node
builder.add_node("store",  store_node)
builder.set_entry_point("recall")
builder.add_edge("recall", "agent")
builder.add_edge("agent",  "store")
builder.add_edge("store",  END)
graph = builder.compile()
💡 Tip: memory_context is automatically added to AgentState by the recall node and is available to your LLM node.

⚙️ AutoGen

Give AutoGen agents persistent cross-session memory. Inject relevant context before each run, store the exchange after.

Install

pip install "kronvex[autogen]" pyautogen

Persistent context for AutoGen agents

from kronvex.integrations.autogen import KronvexMemory

mem = KronvexMemory(api_key="kv-your-api-key", agent_id="your-agent-id")

# Before agent run — inject relevant memories into the system message
context = mem.inject_context(user_message)
system_msg = f"You are a helpful assistant.\n\n{context}"

# ... run your AutoGen agent with system_msg ...

# After agent run — store the exchange
mem.remember(f"User: {user_message}")
mem.remember(f"Assistant: {ai_response}")

🤖 OpenAI Agents SDK

KronvexHooks implements the RunHooks interface. Pass it to Runner.run() and memory injection/storage happens automatically.

Install

pip install "kronvex[openai-agents]" openai-agents

RunHooks-based memory layer

from agents import Agent, Runner
from kronvex.integrations.openai_agents import KronvexHooks

hooks = KronvexHooks(
    api_key="kv-your-api-key",
    agent_id="your-agent-id",
    session_id="user-42",   # optional — isolates memories per user
)

result = await Runner.run(
    agent,
    input="Hello",
    hooks=hooks,
)
💡 How it works: on_agent_start recalls relevant memories and prepends them to the system prompt; on_run_end stores the exchange automatically.

🌊 Flowise

No SDK install required. Use Flowise's built-in HTTP Request node to call the Kronvex REST API directly from your flow.

HTTP Request node — Recall memories

Method : POST
URL    : https://api.kronvex.io/api/v1/agents/{agent_id}/recall
Headers:
  X-API-Key : kv-your-api-key
  Content-Type : application/json
Body (JSON):
{
  "query": "{{question}}",
  "top_k": 5
}

HTTP Request node — Store a memory

Method : POST
URL    : https://api.kronvex.io/api/v1/agents/{agent_id}/remember
Headers:
  X-API-Key : kv-your-api-key
  Content-Type : application/json
Body (JSON):
{
  "content": "{{output}}",
  "memory_type": "episodic"
}
💡 Tip: Chain two HTTP Request nodes — recall at the start of the flow to inject context, store at the end to persist the answer.

⚙️ n8n

Official community node available — install n8n-nodes-kronvex v0.1.1 directly from your n8n instance's Community Nodes settings. Or use the HTTP Request node with the REST API.

Settings → Community Nodes → Install → n8n-nodes-kronvex

Store a memory

Field        Value
Method       POST
URL          https://api.kronvex.io/api/v1/agents/{{AGENT_ID}}/remember
Header       X-API-Key: kv-your-api-key
Body (JSON)  {"content": "{{$json.content}}", "memory_type": "episodic"}

Recall memories

POST https://api.kronvex.io/api/v1/agents/{{AGENT_ID}}/recall
X-API-Key: kv-your-api-key

{
  "query": "{{ $json.userMessage }}",
  "top_k": 5,
  "session_id": "{{ $json.sessionId }}"
}
💡 n8n pattern: Trigger → HTTP (recall) → AI Agent node → HTTP (store response) → done. Memories persist between workflow executions automatically.

Performance

Real numbers from Kronvex's pgvector infrastructure — latency, precision, and throughput at scale. All measurements over a rolling 7-day window from production traffic.

Infrastructure: Supabase eu-central-1 (Frankfurt) · pgvector HNSW m=16 ef=64 · 1536-dim text-embedding-3-small · Railway EU West · async Python (FastAPI)
Headline numbers

<45ms   Recall p50 (top-5 memories)
<55ms   inject_context p50 (full context prompt)
<120ms  remember p50 (write + embed)
87%     Recall@1 accuracy (200 query pairs)
94%     Recall@3 (correct result in top 3)
38ms    Median recall p50 · p95 at 52ms (200-pair dataset)

Last benchmark run: 2026-04-06 · Full benchmark page →

End-to-end latency by operation

Measured from HTTP request received at Railway (EU West) to response sent. Includes Supabase query round-trip and embedding lookup where applicable. p50 / p95 / p99 over rolling 7-day window.

Operation                                         p50     p95     p99
POST /recall (top-5 memories, cosine similarity)  <45ms   <140ms  <280ms
POST /inject-context (recall + prompt assembly)   <55ms   <160ms  <320ms
POST /remember (write + embed + store)            <120ms  <380ms  <700ms
GET /agents (list agents, indexed lookup)         <30ms   <90ms   <180ms

The dominant cost in /remember is the OpenAI embedding call (~80ms). Pure DB write latency is <20ms.

Confidence scoring

Raw cosine similarity alone misses two signals that matter for agents: how recent a memory is, and how often it's been accessed. Kronvex combines all three into a composite confidence score.

confidence = similarity × 0.6 + recency × 0.2 + frequency × 0.2
Similarity — cosine distance in 1536-dim space (60%)
Recency — sigmoid decay, 30-day inflection (20%)
Frequency — log-scaled access count (20%)
1. "User prefers annual billing and pays by SEPA direct debit."
   Similarity 0.91 · Recency (2d) 0.96 · Frequency (12×) 0.82 → Confidence 0.910

2. "User asked about invoice history on March 12th."
   Similarity 0.78 · Recency (18d) 0.62 · Frequency (3×) 0.48 → Confidence 0.690

3. "User mentioned they work in finance during onboarding."
   Similarity 0.71 · Recency (45d) 0.38 · Frequency (1×) 0.20 → Confidence 0.550

Memory #1 wins on every signal here, but the composite score matters most when signals disagree: a recent, frequently accessed memory can outrank one with higher raw similarity.
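The composite score can be sketched as a small function. The 0.6/0.2/0.2 weights and the 30-day sigmoid inflection come from the formula above; the sigmoid slope and the log-scaling cap are illustrative assumptions, since the exact curves are not published:

```python
import math

def confidence(similarity, age_days, access_count,
               inflection_days=30.0, max_accesses=20.0):
    """similarity*0.6 + recency*0.2 + frequency*0.2.

    Recency: sigmoid with its inflection at 30 days (slope is assumed).
    Frequency: log-scaled access count, capped at 1.0 (cap is assumed).
    """
    recency = 1.0 / (1.0 + math.exp((age_days - inflection_days) / 10.0))
    frequency = min(math.log1p(access_count) / math.log1p(max_accesses), 1.0)
    return 0.6 * similarity + 0.2 * recency + 0.2 * frequency
```

With these assumed constants, memory #1 above (similarity 0.91, 2 days old, 12 accesses) scores around 0.90, in line with the published 0.910.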

Memory type coverage

Most memory APIs store a single type of fact. Kronvex handles semantic, episodic, and procedural memory natively — with the same three endpoints.

Type               Examples                                 Kronvex           Mem0             Zep
Semantic           Facts, preferences, attributes           Native            Native           Native
Episodic           Sessions, events, conversation history   Native            ~ Via SDK        Native
Procedural         Workflows, rules, instructions           Native            Not supported    ~ Manual tagging
Conflict handling  Resolve contradictory memories           Confidence-based  ~ Graph dedup    Manual
GDPR erasure       DELETE /agents/{id} — wipe all memories  One API call      ~ Partial        ~ Partial

Want the full comparison? See how Kronvex compares to Mem0, Zep, and Pinecone →

Test it yourself

See these numbers in action

Get a free demo key and call the API. First recall in under 2 minutes.

Get free API key → Full benchmark page →

Enterprise

Multi-tenant architecture, GDPR by default, EU data residency. Built for companies shipping AI agents to their own customers — not just developers building for themselves.

EU Data Residency (Frankfurt) · <45ms Recall p50 · 99.9% Uptime SLA · Unlimited Memories on Enterprise

Enterprise Features

Everything you need to deploy AI agent memory at scale, with compliance guarantees European enterprises require.

Unlimited agents (Enterprise plan)
Custom memory quotas
Webhooks (memory.created, memory.recalled)
GDPR Art. 17 — right to erasure via DELETE
Memory TTL & automatic expiry policies
EU data residency (Supabase, Frankfurt)
Data Processing Agreement (DPA)
Audit logs & log export
Dedicated support — SLA <4h response
Custom pricing for high-volume
Onboarding session with technical team
Invoice billing (no credit card required)

Security & Compliance

Enterprise-grade security controls and compliance documentation for regulated industries.

🔒 GDPR Compliant

Full GDPR compliance by design — EU data residency, right to erasure (Art. 17), data minimization (Art. 5), DPA available (Art. 28).

📋 DPA Included

Data Processing Agreement (GDPR Art. 28) available for all paid plans. Signable on request for enterprise contracts.

🇪🇺 EU Data Residency

All data stored on Supabase PostgreSQL, AWS eu-central-1 Frankfurt. Zero cross-border data transfers.

🔐 Encryption

AES-256 at rest · TLS 1.3 in transit · API keys SHA-256 hashed, never stored in plaintext.

🏗️ Private Deployment

Dedicated instance or VPC deployment available on Enterprise contracts. Contact us to discuss architecture options.

✅ SOC 2 (Planned)

SOC 2 Type II audit on roadmap for 2026. Architecture already designed to meet SOC 2 Trust Service Criteria.

🇪🇺 🔒 Hosted in Frankfurt, Germany — AWS eu-central-1. All data stored and processed within EU borders. No cross-border transfers.

Multi-tenant Architecture

One API key manages hundreds of isolated agent contexts. Your customers' data never crosses tenant boundaries. Isolation is enforced at the database level, not just in the application layer.

structure
Your API Key
    ├── Agent: customer_001  → memories isolated ✓
    ├── Agent: customer_002  → memories isolated ✓
    ├── Agent: customer_003  → memories isolated ✓
    └── Agent: internal_bot  → memories isolated ✓
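The structure above maps naturally onto one agent ID per customer. A minimal sketch using the REST API (the `tenant_agent_id` naming convention is illustrative; isolation itself is enforced server-side):

```python
import requests

BASE = "https://api.kronvex.io/api/v1"
HEADERS = {"X-API-Key": "kv-your-key"}

def tenant_agent_id(customer_id):
    """Deterministic per-customer namespace, e.g. customer_001."""
    return f"customer_{customer_id}"

def remember_for(customer_id, content):
    """Write a memory into this customer's isolated namespace only."""
    return requests.post(
        f"{BASE}/agents/{tenant_agent_id(customer_id)}/remember",
        headers=HEADERS, json={"content": content},
    )

def recall_for(customer_id, query):
    """Recall never sees another customer's memories."""
    return requests.post(
        f"{BASE}/agents/{tenant_agent_id(customer_id)}/recall",
        headers=HEADERS, json={"query": query, "top_k": 5},
    )
```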

Why Enterprise Memory?

The business case for persistent AI agent memory — backed by numbers from production deployments.

37%   reduction in support tickets with memory-enabled agents
2.4×  higher CSAT for AI assistants with persistent context
2M+   memories stored per month by the average enterprise customer

Onboarding Process

From first contact to live deployment — a structured process designed for enterprise teams.

1

Discovery call

30-minute call with our team. We understand your use case, agent architecture, data volumes, and compliance requirements.

2

Custom plan

We propose a tailored plan — memory quotas, agent limits, SLA level, DPA, and pricing — matching your exact needs and volume.

3

Integration support

Dedicated technical onboarding session. We review your integration, help configure webhooks, and validate data isolation for your multi-tenant setup.

4

Go live

Production deployment with your dedicated account manager on standby. SLA active from day one. Audit logs enabled.

Enterprise Pricing

Transparent plans for scale. Custom contracts for enterprise volume.

Business
€349/month
Self-serve. No contract required.
  • 50 agents
  • 500k memories
  • Webhook integrations
  • Priority support
  • EU data residency · GDPR Art. 17
See all plans
Enterprise
Custom pricing
Volume contracts. Invoice billing available.
  • Unlimited agents & memories
  • Data Processing Agreement (DPA)
  • SLA <4h response time
  • Dedicated account manager
  • Custom onboarding session
  • Invoice billing (no credit card)
  • Audit log export
Contact Sales →

Compare

EU-hosted and GDPR-native by default. pgvector cosine similarity, confidence scoring, no infrastructure to manage. Your first memory in under 5 minutes.

<40ms
Median recall (p50)
25×
Cheaper than Zep
99.9%
Uptime SLA

Problems with Alternatives

Every other option has a hidden cost.

Zep
Deprecated & expensive

Zep shut down their free community edition. Cloud pricing starts at $475/mo — and your data stays in the US with no EU residency option.

Cloud starts at $475/mo · No EU data residency
Mem0
LLM in the write path

Mem0 routes every memory write through an LLM, costing ~$0.002/write. At 100K writes/month that's $200 in LLM costs alone — before your actual AI usage.

100K writes/mo = ~$200 in LLM costs alone
Roll-your-own
Weeks of setup & maintenance

pgvector + Redis + custom embedding pipeline + confidence scoring = 2–3 weeks of setup. Then you own the infrastructure forever.

2–3 weeks setup · Infinite maintenance

Feature Comparison

See exactly what you get compared to Mem0, Zep, MemGPT, and Pinecone.

Feature ★ Kronvex Mem0 Zep Pinecone
Managed API (no infra)
EU hosting (GDPR native)
Semantic search (pgvector cosine)
Confidence scoring (recency + frequency)
Multi-agent support
No LLM at write time
inject-context endpoint
Python SDK
Node.js SDK
REST API (language-agnostic)
Pricing   from €29/mo   From $19/mo   From $475/mo   Pay-per-use

Pricing Comparison

Zep Cloud
$475/mo
Community edition deprecated
No EU data residency
BEST VALUE
Kronvex
€29/mo
Demo key free, no credit card
🇪🇺 EU data residency included
Mem0
$19/mo
+ LLM costs on every write
No EU data residency

Kronvex is up to 25× cheaper than Zep Cloud — with EU data residency included at every plan level.

Why Choose Kronvex

🇪🇺 EU-native

Built for European companies from day one. EU data residency, GDPR compliance, and DPA included — not bolted on as an afterthought.

Frankfurt · Supabase EU · Zero US transfers

⚡ No LLM overhead

We don't run an LLM on every write. Pure vector embedding — fast, predictable, and affordable at any scale.

<40ms recall · €29/mo flat · No surprise costs

🧠 Confidence scoring

Memories ranked by similarity × recency × frequency. Your agent recalls the most relevant context, not just the most recent.

score = sim×0.6 + recency×0.2 + freq×0.2
Ready to switch? Start with the Quickstart → — your first memory in under 5 minutes, no credit card required.
Free access
Get your API key

100 free memories. No credit card required.

Already have an account? Sign in →