Every message grows your context window. Acervo replaces growing history with a constant-size knowledge graph. Here's the proof.
2026-03-27 07:45 — v0.2.2-2 — 6 scenarios
Traditional conversation memory has three fundamental flaws.
Every message adds to the context window. By turn 50, you're sending thousands of tokens of old history with every request.
API costs scale with token count. Because each request resends the full history, cumulative usage grows quadratically: a 100-turn conversation sends roughly 90x the tokens of the first 10 turns combined.
Truncation discards old messages. The AI forgets what the user said 30 turns ago even though it might be critical now.
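The growth pattern behind these flaws can be sketched in a few lines. This is an illustrative model, not Acervo benchmark data: the per-turn token count and the fixed graph-context size are assumed numbers.

```python
# Cumulative token usage: full-history replay vs. a constant-size context.
# TOKENS_PER_TURN and GRAPH_CONTEXT are illustrative assumptions.

TOKENS_PER_TURN = 100   # assumed tokens added per message
GRAPH_CONTEXT = 500     # assumed fixed context size for the knowledge graph

def full_history_tokens(turns: int) -> int:
    # Each request resends every prior message, so total input tokens
    # grow quadratically: 100 + 200 + ... + turns * 100.
    return sum(TOKENS_PER_TURN * t for t in range(1, turns + 1))

def constant_context_tokens(turns: int) -> int:
    # A fixed-size context sends the same number of tokens each turn,
    # so the total grows only linearly.
    return GRAPH_CONTEXT * turns

for turns in (10, 50, 100):
    print(turns, full_history_tokens(turns), constant_context_tokens(turns))
# At 100 turns the full-history total (505,000 tokens) is an order of
# magnitude above the constant-context total (50,000 tokens).
```

Under these assumptions the gap widens with every turn, which is exactly what the red-vs-green chart below visualizes.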
The red line (full history) grows with every turn. The green line (Acervo) stays flat because the knowledge graph compresses context.
Real conversation patterns tested end-to-end. Each scenario runs through the full Acervo pipeline with a real LLM.
Projected savings per 1,000 conversations based on real API pricing.
Based on published API pricing (per 1M tokens). Output estimated at 30% of input. Actual costs depend on response length.
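The projection above reduces to simple arithmetic. A minimal sketch, assuming hypothetical pricing of $3 per 1M input tokens and $15 per 1M output tokens, and the illustrative per-conversation token totals from the growth model (none of these figures come from the Acervo benchmarks):

```python
# Projected savings per 1,000 conversations.
# Prices and token totals are assumed, illustrative values.

INPUT_PRICE = 3.00 / 1_000_000    # assumed $ per input token
OUTPUT_PRICE = 15.00 / 1_000_000  # assumed $ per output token

def conversation_cost(input_tokens: int) -> float:
    # Output estimated at 30% of input, as stated above.
    output_tokens = int(input_tokens * 0.30)
    return input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE

full_history = conversation_cost(505_000)  # assumed 100-turn full replay
graph = conversation_cost(50_000)          # assumed constant-size graph

savings_per_1000 = (full_history - graph) * 1000
print(f"${savings_per_1000:,.2f}")  # prints "$3,412.50"
```

Swapping in your provider's actual per-1M-token prices and measured token totals gives the real projection; the structure of the calculation stays the same.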