Optimize a Slow ClickHouse Query with Your AI Agent
Use this guide when a production ClickHouse query in your MooseStack app is slower than it should be.
You will set up your AI coding agent with the 514 harness to benchmark the slow query against one or more optimized candidates. By the end you will have a reviewable PR with the winning schema changes and benchmark evidence.
The experiment:
- Profiles the slow query and proposes optimization candidates
- Creates isolated preview databases on 514 hosting for the baseline and each candidate
- Benchmarks every preview with the same test to pick the best performer
- Opens a PR for the winner for your review
This guide assumes:
- MooseStack version 0.6.493 or later (run `npx moose-cli --version` or `moose --version` to check)
- Your MooseStack app is already deployed on Fiveonefour
- Your coding agent is open on this repository
- The agent can reuse an existing query path or define a query model when profiling finds a pattern without one
Setup
Don't have a MooseStack project yet?
Install the 514 CLI, sign in, link your local MooseStack project to its Fiveonefour counterpart, and run `514 agent init` so your coding agent has the harness and skills.
Optimization workflow
This section helps you understand what the agent is doing at each stage of the optimization workflow, what it may ask you to confirm, and how to spot issues early enough to unblock the run.
You can use this workflow whether you already know which query needs work or need the agent to help identify a good optimization target first. It verifies the setup, runs the optimization workflow in preview branch databases, and leaves production untouched. The end result is a PR with the evidence you need to decide whether the change is ready to move forward.
PR review
At this point the experiment is over and the PR is the review surface. The agent can open a PR, but you still decide what ships.
You should confirm two things here:
- the benchmarks and `EXPLAIN` output support the performance claim
- any rebuild, backfill, or destructive step is explicit and acceptable before production
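To work through the first check, it can help to pull the failing assertions out of the verdict file before reading the raw numbers. A minimal sketch, assuming the report shape shown in the quick reference below (`success`, `testResults`, `assertionResults` fields); this helper is not part of the 514 tooling:

```python
import json

def failed_assertions(path="test-results.json"):
    """Return (overall_success, failures) from a harness verdict file.

    failures is a list of (title, failureMessages) pairs, one per
    assertion whose status is "failed". Field names are an assumption
    based on the sample report in this guide.
    """
    with open(path) as f:
        report = json.load(f)
    failures = []
    for result in report.get("testResults", []):
        for assertion in result.get("assertionResults", []):
            if assertion["status"] == "failed":
                failures.append(
                    (assertion["title"], assertion.get("failureMessages", []))
                )
    return report.get("success", False), failures
```

If `failures` is non-empty, read each message before trusting the PR's performance claim.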
Benchmark JSON quick reference
Use this when you need the current field map while reviewing the benchmark step above. Read test-results.json first for the verdict, then benchmark-*.json for the detailed evidence.
test-results.json

```json
{
  "success": false,
  "numPassedTests": 1,
  "numFailedTests": 1,
  "numPendingTests": 1,
  "testResults": [
    {
      "assertionResults": [
        { "title": "baseline p95 under threshold", "status": "passed" },
        { "title": "scenarios do not regress", "status": "skipped" },
        {
          "title": "EXPLAIN shows index usage in at least one scenario",
          "status": "failed",
          "failureMessages": ["AssertionError: No scenario uses index pruning..."]
        }
      ]
    }
  ]
}
```

benchmark-*.json

```json
{
  "timestamp": "2026-03-26T23:53:29.709Z",
  "target": {
    "host": "yey99hpcxa.clickhouse.boreal.cloud",
    "database": "514-demos-planes-perf-base-f9f0d"
  },
  "tests": {
    "baseline": {
      "sql": "SELECT ...",
      "p50": 22,
      "p95": 218,
      "profiles": [
        {
          "queryId": "standalone...",
          "readRows": 702027,
          "readBytes": 18254197,
          "memoryUsage": 49048628,
          "durationMs": 218
        }
      ]
    },
    "explain": {
      "baseline": {
        "sql": "SELECT ...",
        "explain": {
          "indexCondition": "true",
          "selectedGranules": 92,
          "totalGranules": 92,
          "granuleSkipPct": 0
        }
      }
    }
  }
}
```

test-results.json tells you whether the benchmark passed. benchmark-*.json tells you why.
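When you are comparing several benchmark-*.json files, it can be convenient to reduce each one to its headline numbers first. A sketch that assumes the field paths in the sample report above (they are an assumption about the report shape, not a stable API):

```python
import json

def summarize_benchmark(path):
    """Extract the headline metrics from one benchmark-*.json report."""
    with open(path) as f:
        report = json.load(f)
    baseline = report["tests"]["baseline"]
    profile = baseline["profiles"][0]
    explain = report["tests"]["explain"]["baseline"]["explain"]
    return {
        "p50_ms": baseline["p50"],
        "p95_ms": baseline["p95"],
        "read_rows": profile["readRows"],
        "memory_bytes": profile["memoryUsage"],
        "granule_skip_pct": explain["granuleSkipPct"],
    }
```

Run it once per report and you have a flat dict per candidate to line up against the table below.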
| What to compare | Where in the report | What good looks like |
|---|---|---|
| Did the run pass? | test-results.json -> success | true, or a failure you can explain |
| Which assertion failed? | test-results.json -> testResults[].assertionResults[] | Clear pass/fail/skip for each test |
| Overall p95? | benchmark-*.json -> tests.baseline.p95 | Lower in winner than baseline |
| Scenario regressions? | benchmark-*.json -> tests.scenarioRegression | Present only if scenario test ran; no ratio above threshold |
| Selective scenario pruning? | benchmark-*.json -> tests.explain.<scenario>.explain.indexCondition | At least one selective scenario is not "true" |
| Granules skipped? | benchmark-*.json -> tests.explain.<scenario>.explain.granuleSkipPct | Higher in winner for the selective scenario |
| Rows read? | benchmark-*.json -> tests.baseline.profiles[0].readRows | Lower in winner when the contract is unchanged |
| Memory usage? | benchmark-*.json -> tests.baseline.profiles[0].memoryUsage | Lower in winner when the contract is unchanged |
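The "what good looks like" column can be turned into a mechanical check once you have the headline numbers for the baseline and the winner. A review-aid sketch, not part of the harness; it expects flat dicts with the keys named in the comments:

```python
def compare_runs(baseline, winner):
    """Flag the directional checks from the comparison table.

    Both arguments are dicts with p95_ms, read_rows, memory_bytes,
    and granule_skip_pct keys (pulled from each benchmark-*.json).
    Every value should be True before the winner looks convincing;
    read_rows and memory_bytes only count when the query contract
    is unchanged.
    """
    return {
        "p95_improved": winner["p95_ms"] < baseline["p95_ms"],
        "reads_fewer_rows": winner["read_rows"] < baseline["read_rows"],
        "uses_less_memory": winner["memory_bytes"] < baseline["memory_bytes"],
        "skips_more_granules": winner["granule_skip_pct"] > baseline["granule_skip_pct"],
    }
```

Any False entry is a prompt to dig into the corresponding row of the table rather than an automatic rejection.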