Five categories test different aspects of graph-based context retrieval.

| Category | Score | Question |
|---|---|---|
| Resolve | 100% | Can it answer project-specific questions? |
| Ground | 92% | Are answers grounded in the graph, not hallucinated? |
| Recall | 67% | Does it remember facts from earlier turns? |
| Focus | 100% | Does it ignore irrelevant context? |
| Adapt | 100% | Does it handle graph updates mid-conversation? |
12.1x fewer tokens than an agent with tools

Acervo answers RESOLVE questions with ~616 tokens of warm context; an agent needs ~7,462 tokens across an average of 2.8 tool-call steps to reach the same answer.
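As a sanity check, the headline multiplier follows directly from the two averages above (the variable names are illustrative, not part of any API):

```python
# Back-of-the-envelope check of the headline "12.1x fewer tokens" figure,
# using the averages reported for the RESOLVE category.
agent_tokens = 7_462   # avg input tokens per answer, agent with tools
acervo_tokens = 616    # avg input tokens per answer, Acervo warm context

multiplier = agent_tokens / acervo_tokens
print(f"{multiplier:.1f}x fewer input tokens")  # → 12.1x fewer input tokens
```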
Approach Comparison
How Acervo compares to a stateless LLM and an agent with tools on the same questions.
RESOLVE — 13 turns
Questions that require project-specific knowledge to answer.

| Approach | Can Answer | Avg Input Tokens | Avg Steps | Notes |
|---|---|---|---|---|
| Stateless LLM | 8% | — | — | baseline |
| Agent + Tools | 100% | 7,462 | 2.8 | multi-step |
| Acervo | 100% | 616 | 0 | 12.1x fewer tokens |
GROUND — 11 turns
Questions where the answer must be grounded in actual project data, not general knowledge.

| Approach | Can Answer | Avg Input Tokens | Avg Steps | Notes |
|---|---|---|---|---|
| Stateless LLM | 27% | — | — | baseline |
| Agent + Tools | 100% | 5,500 | 2.3 | multi-step |
| Acervo | 91% | 600 | 0 | 9.2x fewer tokens |
Per-Turn Efficiency
Token cost comparison for every RESOLVE and GROUND turn across all three projects.

[Chart: Agent Tokens vs Acervo Tokens. Each bar pair shows one question; red = agent with tools, green = Acervo graph context.]
Component Health
Internal diagnostic scores for each pipeline stage.

| Stage | Score |
|---|---|
| S1 Intent | 78% |
| S2 Activation | 56% |
| S3 Budget | 32% |
| S3 Quality | 81% |
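The stage scores above can be triaged mechanically. A minimal sketch, assuming an arbitrary 60% health threshold (both the dict and the threshold are illustrative, not part of the pipeline):

```python
# Flag pipeline stages whose diagnostic score falls below a chosen threshold.
# Scores transcribed from the Component Health table; threshold is arbitrary.
health = {
    "S1 Intent": 0.78,
    "S2 Activation": 0.56,
    "S3 Budget": 0.32,
    "S3 Quality": 0.81,
}
WEAK_THRESHOLD = 0.60

weak_stages = [name for name, score in health.items() if score < WEAK_THRESHOLD]
print(weak_stages)  # → ['S2 Activation', 'S3 Budget']
```

At this threshold the weak links are activation and budgeting, which matches the matrix below where S3 Budget scores 0% on both RESOLVE and GROUND.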
Category × Component Matrix
Where each category is strong or weak across pipeline components.

| Category | S1 Intent | S2 Activation | S3 Budget | S3 Quality | Final Score |
|---|---|---|---|---|---|
| RESOLVE | 73% | 50% | 0% | 100% | 100% |
| GROUND | 80% | 50% | 0% | 77% | 92% |
| RECALL | n/a | n/a | n/a | 33% | 67% |
| FOCUS | 73% | 38% | 40% | 89% | 100% |
| ADAPT | 89% | 100% | 100% | 78% | 100% |
S1 Intent Misclassifications
9 turns where the model classified user intent incorrectly.

- Turn 2: "What technologies does this project use?" (expected: overview, got: specific)
- Turn 6: "Interesting, this is a well-structured project" (expected: chat, got: overview)
- Turn 19: "Ok, I think I understand the project now" (expected: chat, got: specific)
- Turn 1: "What is this book about?" (expected: overview, got: specific)
- Turn 8: "These are great detective stories" (expected: chat, got: specific)
- Turn 14: "Overall, what themes run through these stories?" (expected: overview, got: specific)
- Turn 1: "What project is documented here?" (expected: overview, got: specific)
- Turn 2: "What documents are available?" (expected: overview, got: specific)
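Tallying the listed misses by their (expected, got) label pair makes the error pattern explicit. A minimal sketch; the tuples transcribe the entries above, and nothing here reflects the classifier's internals:

```python
# Count confusion pairs among the misclassified turns listed above.
from collections import Counter

misclassified = [
    ("overview", "specific"),  # "What technologies does this project use?"
    ("chat", "overview"),      # "Interesting, this is a well-structured project"
    ("chat", "specific"),      # "Ok, I think I understand the project now"
    ("overview", "specific"),  # "What is this book about?"
    ("chat", "specific"),      # "These are great detective stories"
    ("overview", "specific"),  # "Overall, what themes run through these stories?"
    ("overview", "specific"),  # "What project is documented here?"
    ("overview", "specific"),  # "What documents are available?"
]

for (expected, got), n in Counter(misclassified).most_common():
    print(f"{expected} -> {got}: {n}")
```

Of the listed misses, five collapse an overview intent into specific, suggesting the classifier over-favors the specific label on broad questions.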