Plug-and-play memory infrastructure for B2B AI agents.
Three endpoints. One API key. Your agent goes from
amnesiac to contextually aware in minutes.
No new architecture. Drop three API calls into your existing agent and it remembers everything.
Text → embedding → pgvector. Your agent stores facts, events, preferences. We handle the vectorization.
Cosine similarity on HNSW index. Retrieves the most relevant memories for any query in milliseconds.
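To make the retrieval step concrete, here is a toy illustration of cosine-similarity ranking. The real service embeds your text and queries a pgvector HNSW index; this sketch fakes the embeddings with hand-written 3-dimensional vectors purely to show how the most relevant memory wins.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hand-written stand-ins for real embeddings (illustrative only).
memories = {
    "User prefers weekly reports": [0.9, 0.1, 0.0],
    "User is based in Berlin":     [0.1, 0.9, 0.0],
    "User dislikes long emails":   [0.8, 0.2, 0.1],
}

query_vec = [1.0, 0.0, 0.0]  # pretend embedding of "how often should I report?"

# Rank stored memories by similarity to the query, best first.
ranked = sorted(memories, key=lambda m: cosine(memories[m], query_vec), reverse=True)
```

In production the sort is replaced by an approximate nearest-neighbor lookup on the HNSW index, which is what keeps retrieval in the millisecond range at scale.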
The killer feature. Pass the user message, get a context block to prepend to your system prompt. That's it.
Without writing a single line of prompt engineering, your LLM receives full context automatically, every time.
Before Kronvex, your agent forgets everything. After, it remembers every user.
def chat(user_message):
    system = "You are a helpful assistant."  # no context
    response = llm.call(system, user_message)
    # nothing is saved
    return response
def chat(user_message):
    ctx = kronvex.inject(user_message)  # ← 1
    system = f"{ctx} You are a helpful assistant."
    response = llm.call(system, user_message)
    kronvex.remember(user_message)  # ← 2
    kronvex.remember(response)      # ← 3
    return response
Auth via X-API-Key header. Every response typed. EU data residency.
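A minimal sketch of how an authenticated request is composed. Only the `X-API-Key` header and the endpoint paths come from the docs above; the base URL and the JSON payload fields (`content`) are hypothetical placeholders.

```python
import json

BASE_URL = "https://api.kronvex.example"  # placeholder host, not the real one

def build_request(path: str, api_key: str, payload: dict) -> dict:
    """Compose method, URL, headers, and body for a Kronvex API call."""
    return {
        "method": "POST",
        "url": f"{BASE_URL}{path}",
        "headers": {
            "X-API-Key": api_key,               # auth header from the docs
            "Content-Type": "application/json",
        },
        "body": json.dumps(payload),
    }

req = build_request("/remember", "kv_live_123",
                    {"content": "User prefers weekly reports"})
```

Passing the built dict to any HTTP client (e.g. `requests.request(**...)` after renaming `body` to `data`) sends the call; the shape shown here is just to make the header and body layout visible.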
This is the real API running in production. Enter your key and agent ID to start.
POST /auth/keys
Copy full_key
POST /agents
One per use case
POST /remember
After each interaction
POST /inject-context
Before each LLM call
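The per-interaction loop from the steps above can be sketched as a sequence of calls. The endpoints and the `X-API-Key` header are from the docs; the payload field names (`agent_id`, `message`, `content`) are assumptions, and the calls are represented as plain dicts so the flow is visible without a live key.

```python
def interaction_calls(api_key: str, agent_id: str,
                      user_msg: str, reply: str) -> list:
    """The Kronvex calls made around one LLM interaction."""
    headers = {"X-API-Key": api_key}
    return [
        # Before the LLM call: fetch a context block to prepend to the prompt.
        {"method": "POST", "path": "/inject-context", "headers": headers,
         "json": {"agent_id": agent_id, "message": user_msg}},
        # After the interaction: store both sides of the exchange.
        {"method": "POST", "path": "/remember", "headers": headers,
         "json": {"agent_id": agent_id, "content": user_msg}},
        {"method": "POST", "path": "/remember", "headers": headers,
         "json": {"agent_id": agent_id, "content": reply}},
    ]

calls = interaction_calls("kv_live_123", "agent_42", "Hi", "Hello!")
```

`/auth/keys` and `/agents` are one-time setup calls; only `/inject-context` and `/remember` run on every turn of the conversation.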
Your agent now
remembers everything