
Tutorial: Migrate Your Dashboard to ClickHouse

This tutorial is for teams with a production dashboard or report that needs to get faster as data volume grows.

By the end of this guide you'll have:

Real-time replication (CDC) pipeline

A production pipeline that continuously replicates data from your source database into ClickHouse.

ClickHouse analytics layer in code

A MooseStack project that defines views, materialized views, serving tables, and query/API handlers on top of CDC-replicated raw tables.

Safe local-to-production workflow

A local development workflow plus Fiveonefour hosting preview environments and migration plans so you can validate changes before merging to `main`.

You do not need a full rewrite. Keep your source database (Postgres or SQL Server) as the transactional system of record, keep your existing app contracts, and offload analytical reads to ClickHouse via CDC (ClickPipes or Debezium).

This guide gives you a path to ship all of this as normal code, as part of your existing development process. With MooseStack, you model your ClickHouse analytics layer in code, iterate locally, and roll out with preview migrations in Fiveonefour, so you can evolve your analytics layer without risking your production ClickHouse data.

Starting from scratch?

If you're building a new dashboard or data-connected chat (not migrating an existing dashboard), start with Chat in Your App.

Want Python Examples?

This guide is written for TypeScript developers, but Python developers can follow along. The concepts translate directly, and MooseStack supports both languages. If you'd like Python-specific examples, let us know in our Slack community and we'll prioritize creating them.

You'll work through three phases:

  • Project Setup
  • Develop with ClickHouse locally
  • Going to production

AI-assisted Development

We recommend using an AI copilot to accelerate the migration and to handle complex query translations. However, you can complete every step manually if you prefer. Any AI-enabled editor (Claude Code, Cursor, Codex, Opencode, GitHub Copilot, Windsurf, etc.) will work. Editors that support MCPs can make this workflow even faster.

Project Setup

Create and Deploy MooseStack Project

This sets up the foundation for the rest of this guide. You'll initialize a MooseStack project in your monorepo and deploy your new project to Fiveonefour hosting for preview deployments and migration planning.

Why deploy to Fiveonefour hosting now?

Preview Environments and Migration Planning

When you open a feature branch and PR later in this guide, Fiveonefour can automatically provision a preview environment and generate a migration plan for the code changes in your branch. This ensures your code changes are safe and you can apply them to your main ClickHouse database with confidence.

Sign Up for Fiveonefour

Once this section is complete, you’re ready to continue with local development and the component migration phases in the rest of the tutorial.

Develop with ClickHouse locally

This section covers the local development loop you will use for the rest of the migration: get your environment running, confirm data is flowing, and then migrate one dashboard component at a time. The goal is to keep changes safe and incremental while preserving your existing API contracts.

Prepare your local development environment

First, get your local services and tooling into a known-good state so migration work is testable from the start.

Migrate a dashboard component to ClickHouse

With your local environment ready, migrate one dashboard backend path from OLTP reads to ClickHouse reads with MooseStack.

You only replace the read layer. Routing, auth, request/response contracts, and frontend behavior stay the same. In your MooseStack project, you will define functions that build and run ClickHouse queries using OlapTable objects for type-safe column access, then repoint the existing API handler to call those functions instead of the OLTP query path.
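To make this concrete, here is a minimal sketch of a repointed handler. It is illustrative only: the Express route, parameters, and getFulfillmentByDay are hypothetical names, while getMooseUtils and the query-function shape follow the parity example later in this guide.

    import express from "express";
    import { getMooseUtils } from "@514labs/moose-lib";
    // Hypothetical query function that builds a ClickHouse query with the sql
    // tagged template and an OlapTable, as in the parity example later on.
    import { getFulfillmentByDay } from "./queries/fulfillment-olap-translation";

    const app = express();

    // Same route, auth, and response contract as before; only the read path
    // now goes to ClickHouse instead of the OLTP database.
    app.get("/api/dashboard/fulfillment", async (req, res) => {
      const { client } = await getMooseUtils();
      const rows = await getFulfillmentByDay(
        {
          merchantId: String(req.query.merchantId),
          startDate: String(req.query.startDate),
          endDate: String(req.query.endDate),
        },
        client,
      );
      res.json(rows); // same JSON shape the dashboard already expects
    });

    app.listen(3001); // placeholder port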

If you set up the optional copilot context workspace in local setup, use it here to organize component-specific files and prompts across all three phases.

Pick a specific dashboard component or report to migrate. You'll work through one component at a time.

Going to production

In this section, you'll apply your local changes to production. Complete these steps in order:

Confirm Fiveonefour hosting project access

To deploy to production (and to create branch preview environments), you need a Fiveonefour hosting project connected to the GitHub repo you’re working in. If you haven’t created a project yet, go back to Create your Fiveonefour project above and complete that setup first.


Install the Moose CLI

What you need

  • Terminal access on your machine
  • Internet access to run the installer
  • Node.js v20+ installed
  • pnpm v8+ installed
  • Docker Desktop installed and running

What you get

  • moose CLI available in your terminal
  • Local CLI ready for project initialization commands

Workflow

  1. Install Moose CLI

    bash -i <(curl -fsSL https://fiveonefour.com/install.sh) moose
  2. Restart your terminal

MooseStack is installed globally, so restart your terminal (or reload your shell configuration, as shown below) to make moose available on your PATH.

    source ~/.zshrc # or source ~/.bashrc if you're using bash
  3. Verify Moose CLI is available

    moose --version

    You should see:

    moose X.X.X

Start your local development environment

What you need

  • A local clone of your project repository and a terminal open to it
  • Docker Desktop installed and running

What you get

  • A running local Moose environment via moose dev
  • Active local services (ClickHouse, Redpanda, and supporting containers)
  • A validated baseline before migration steps

Workflow

  1. Create a new branch

    git checkout -b <your-branch-name>

    Create a new branch in your repo to isolate your changes. When you open a pull request from this branch, you’ll get a preview environment you can use to validate changes before merging.

  2. Ensure Docker Desktop is running

    docker info >/dev/null 2>&1 || echo "Docker is not running. Start Docker Desktop (Mac/Windows) or Docker Engine (Linux), then retry."

    If Docker is not running, start it using your OS-specific method and rerun the command.

  3. Navigate to your MooseStack project directory and run the development server

    moose dev
  4. Validate expected output

    📍 Available Routes:

      Base URL: http://localhost:4000

      Static Routes:

      METHOD  ENDPOINT              DESCRIPTION
      ---
      GET     /admin/inframap       Admin: Get infrastructure map
      GET     /admin/reality-check  Admin: Reality check - provides a diff when drift is detected between the running instance of moose and the db it is connected to
      GET     /health               Health check endpoint
      GET     /ready                Readiness check endpoint
      ...
  5. Validate expected Docker containers

    docker ps
    CONTAINER ID   IMAGE                                                COMMAND                  CREATED              STATUS                        PORTS                                                                      NAMES
    340d98861d3b   temporalio/ui:2.21.3                                 "./start-ui-server.sh"   About a minute ago   Up 57 seconds                 0.0.0.0:8080->8080/tcp                                                     ts-temporal-ui-1
    2c8a8a75ced3   temporalio/admin-tools:1.22.3                        "tini -- sleep infin…"   About a minute ago   Up 57 seconds                                                                                            ts-temporal-admin-tools-1
    fcb835863466   temporalio/auto-setup:1.22.3                         "/etc/temporal/entry…"   About a minute ago   Up 57 seconds (healthy)       6933-6935/tcp, 6939/tcp, 7234-7235/tcp, 7239/tcp, 0.0.0.0:7233->7233/tcp   ts-temporal-1
    0c353274770e   clickhouse/clickhouse-server:25.8                    "/entrypoint.sh sh -…"   About a minute ago   Up 52 seconds (healthy)       0.0.0.0:9000->9000/tcp, 9009/tcp, 0.0.0.0:18123->8123/tcp                  ts-clickhousedb-1
    2affed488342   redis:latest                                         "docker-entrypoint.s…"   About a minute ago   Up About a minute             0.0.0.0:6379->6379/tcp                                                     ts-redis-1
    155ccd538cc7   postgres:13                                          "docker-entrypoint.s…"   About a minute ago   Up About a minute (healthy)   0.0.0.0:5432->5432/tcp                                                     ts-postgresql-1
    f5a144eeea5e   clickhouse/clickhouse-keeper:25.8                    "/entrypoint.sh sh -…"   About a minute ago   Up About a minute (healthy)   2181/tcp, 9181/tcp, 10181/tcp, 44444/tcp                                   ts-clickhouse-keeper-1
    be29be802bfb   docker.redpanda.com/redpandadata/redpanda:v24.3.11   "/entrypoint.sh redp…"   About a minute ago   Up About a minute (healthy)   8081-8082/tcp, 9092/tcp, 9644/tcp, 0.0.0.0:19092->19092/tcp                ts-redpanda-1

Component migration Phase 1: query parity

What you need

  • A subdirectory in <PROJECT_ROOT>/context/dashboard-migration/*/ for your component
  • The API specification for the endpoint
  • The existing backend endpoint handler (e.g. Express route, Fastify handler) that serves the dashboard data
  • The OLTP query file(s) that the handler calls (e.g. SQL builder, ORM query, raw query function)
  • A command to run your existing OLTP backend to capture test data

What you get

  • A ClickHouse parity query that preserves the existing API contract and output semantics
  • Replayable parity test cases with expected vs actual diff evidence

Workflow

  1. Capture 2–5 representative test cases

    Goal: capture 2–5 representative test cases from the existing OLTP endpoint. These test cases will be used to validate correctness of the translated ClickHouse query you will write in the next checkpoint.

    Steps:

    1. Identify 2–5 representative requests that exercise different filters, time ranges, and edge cases (e.g. empty results, large result sets, boundary dates).
    2. Run each request against the live OLTP endpoint and capture the exact response.
    3. For each test case, create a file: <PROJECT_ROOT>/context/dashboard-migration/<COMPONENT_NAME>/test-cases/0N-<TEST_CASE_NAME>.md.
    4. Each file must include:
      • the curl command (GET or POST)
      • the verbatim JSON response — this must come from actually calling the running endpoint, not from approximation
    5. Choose requests that cover a reasonable recent time window (e.g. last 7–30 days) so parity validation uses a consistent, representative slice.

    For components that read EXTERNALLY_MANAGED tables, local data coverage is controlled in moose.config.toml via [dev.remote_clickhouse] and [dev.externally_managed.tables] (especially sample_size). Set this sample size to a reasonable value for the test cases you are capturing.

    Copy the provided test case template (path: context/migrations/test-case-template.md) to create the test case files:

    cp <PROJECT_ROOT>/context/migrations/test-case-template.md <PROJECT_ROOT>/context/dashboard-migration/<COMPONENT_NAME>/test-cases/0N-<TEST_CASE_NAME>.md

    This checkpoint is complete when 2–5 test case files exist under test-cases/, each with a runnable curl command and the actual JSON response from the OLTP endpoint. Record the file paths in context-map.md.
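    If you prefer scripting the capture over copying responses from a terminal, a small helper along these lines can call the live OLTP endpoint and print the verbatim JSON to paste into the template. The URL and file name are hypothetical; adapt them to your endpoint and run with pnpm tsx.

    // capture-test-case.ts (hypothetical helper; run with: pnpm tsx capture-test-case.ts)
    // Calls the running OLTP endpoint and prints the response body verbatim,
    // ready to paste into the test-case template's JSON block.
    const url =
      "http://localhost:3000/api/dashboard/fulfillment?merchantId=m_123&startDate=2024-06-01&endDate=2024-06-30"; // hypothetical endpoint

    const res = await fetch(url);
    if (!res.ok) throw new Error(`OLTP endpoint returned ${res.status}`);

    // Use text() rather than json() so the captured response stays
    // byte-for-byte identical to what the endpoint actually served.
    console.log(await res.text());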

  2. ClickHouse parity function

    Goal: write a direct translation of the OLTP query into ClickHouse SQL that returns the exact same response shape.

    Steps:

    1. Locate the SQL query (or stored procedure) that powers the existing endpoint and record OLTP semantics in context-map.md under Phase 1 Notes:
      • Source tables and join conditions (including join type)
      • Filter clauses (including implicit tenancy, soft-delete, or RBAC filters)
      • Parameter substitution rules and defaults
      • Group-by / aggregation logic
      • Edge cases (null handling, division by zero, missing rows)
    2. Create <PROJECT_ROOT>/<SOURCE_DIR>/queries/<COMPONENT_NAME>-olap-translation.ts.
    3. Write a function that:
      • accepts the exact same parameters as the existing handler/query logic
      • builds the translated query using the sql tagged template literal
      • executes via the Moose ClickHouse client against local moose dev
    4. Preserve column names, types, and ordering exactly. Add explicit ORDER BY and casts where needed.
    5. Call out any ClickHouse-specific differences (null handling, decimal precision, timestamp bucketing) as comments in the code.
    6. Record the file path in context-map.md.

    Refer to the ClickHouse SQL reference as you translate.

    Example (adapt to your component — do not copy names literally):

    <PROJECT_ROOT>/<SOURCE_DIR>/queries/<COMPONENT_NAME>-olap-translation.ts
    import { sql, MooseClient } from "@514labs/moose-lib";
    import { Orders } from "./models/Orders.model";

    interface ParityInput {
      merchantId: string;
      startDate: string;
      endDate: string;
    }

    interface ParityRow {
      day: string;
      fulfilled: number;
      total: number;
    }

    export async function runParity(
      params: ParityInput,
      client: MooseClient,
    ): Promise<ParityRow[]> {
      const statement = sql`
        SELECT
          toDate(order_ts) AS day,
          sumIf(1, status = 'fulfilled') AS fulfilled,
          count() AS total
        FROM ${Orders}
        WHERE merchant_id = ${params.merchantId}
          AND order_ts >= toDateTime(${params.startDate})
          AND order_ts < toDateTime(${params.endDate})
        GROUP BY day
        ORDER BY day ASC
      `;

      return client.query<ParityRow>(statement);
    }
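    For reference, the Orders object interpolated into the FROM clause above is an OlapTable defined in your MooseStack project. A minimal sketch of such a model file follows; the schema fields are hypothetical, and the externally-managed lifecycle option shown is an assumption to verify against your moose-lib version:

    import { Key, OlapTable, LifeCycle } from "@514labs/moose-lib";

    // Hypothetical schema for the CDC-replicated raw table; match your actual
    // column names and types.
    export interface Order {
      order_id: Key<string>;
      merchant_id: string;
      status: string;
      order_ts: Date;
    }

    // Declaring the table externally managed tells Moose the CDC pipeline owns
    // its schema, so local dev reads it instead of migrating it (option name
    // is an assumption; check your moose-lib version).
    export const Orders = new OlapTable<Order>("orders", {
      lifeCycle: LifeCycle.EXTERNALLY_MANAGED,
    });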

    Done when: context-map.md contains complete OLTP semantics, the parity function compiles and runs against local moose dev, and the translation file path is recorded in context-map.md.

  3. Verification

    Goal: prove the parity function returns the exact same JSON as the OLTP endpoint for every test case.

    Iteration rule: treat Phase 1 as a red/green loop. If any diff is non-empty, revise the ClickHouse translation, re-run verification, and repeat until every test case diff is empty.

    Steps:

    1. For each Checkpoint 1 test case, call the parity function with the same parameters and capture the output:

      Run this command from <PROJECT_ROOT> (or whichever directory contains your tsconfig.json and node_modules). The inline tsx script uses a relative import (./<SOURCE_DIR>/queries/<component>-olap-translation) and getMooseUtils() from @514labs/moose-lib, so it depends on your project's TypeScript/module resolution context. If your query file lives elsewhere, adjust the import path accordingly so both runParity and getMooseUtils resolve.

    pnpm tsx -e "
      import { runParity } from './<SOURCE_DIR>/queries/<COMPONENT_NAME>-olap-translation';
      import { getMooseUtils } from '@514labs/moose-lib';
      const { client } = await getMooseUtils();
      const result = await runParity(
        { /* same params as the test case curl */ },
        client,
      );
      console.log(JSON.stringify(result));
    " | jq -S '.' > actual.json
    2. Extract the expected response from the test case file:
    awk '/^```$/{if(f) exit} f{print} /^```json/{f=1}' \
      <PROJECT_ROOT>/context/dashboard-migration/<COMPONENT_NAME>/test-cases/0N-<TEST_CASE_NAME>.md \
      | jq -S '.' > expected.json
    3. Diff:
    diff expected.json actual.json
    4. If diff is non-empty, debug and fix the translation:
      • Column names (case-sensitive in ClickHouse)
      • Data types (timestamps, decimals, integers vs floats)
      • Sort order (add explicit ORDER BY if needed)
      • Missing rows or availability mismatch for the selected test-case window:
        • confirm local/source data coverage
        • for EXTERNALLY_MANAGED tables, configure local mirrors in moose.config.toml ([dev.remote_clickhouse], [dev.externally_managed.tables], sample_size) and restart moose dev
        • adjust the test window only if needed
    5. Re-run Steps 1–4 for the same test case until diff produces no output.
    6. Repeat Steps 1–5 for every recorded test case.

    Done when: diff produces no output for every test case. Once all test cases pass, Phase 1 is complete — proceed to Phase 2.
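    As an alternative to the awk/jq/diff loop, you can keep verification in TypeScript. The sketch below replays one test case and asserts parity in a single step, under the same assumptions as the inline tsx command above (run from <PROJECT_ROOT>; the import path and params are placeholders):

    import { readFileSync } from "node:fs";
    import assert from "node:assert/strict";
    import { getMooseUtils } from "@514labs/moose-lib";
    import { runParity } from "./<SOURCE_DIR>/queries/<COMPONENT_NAME>-olap-translation";

    // expected.json is produced by the extraction step above.
    const expected = JSON.parse(readFileSync("expected.json", "utf8"));

    const { client } = await getMooseUtils();
    const actual = await runParity(
      { /* same params as the test case curl */ },
      client,
    );

    // deepStrictEqual compares values regardless of object key order and
    // prints a readable diff when parity fails.
    assert.deepStrictEqual(actual, expected);
    console.log("parity OK");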

Generate a production migration plan

What you need

  • New queries and materialized views run locally successfully
  • moose dev starts without errors
  • FIVEONEFOUR_HOST and FIVEONEFOUR_ADMIN_API_BEARER_TOKEN for production

What you get

  • A generated migrations/plan.yaml for this change
  • A concrete preview of production schema operations before deploy

Workflow

  1. Run the migration plan generator

    Open your terminal (ensure you cd to your MooseStack project root). Then run:

    moose generate migration --save --url <FIVEONEFOUR_HOST> --token <FIVEONEFOUR_ADMIN_API_BEARER_TOKEN>

    Parameters:

    • FIVEONEFOUR_HOST is the host for your production deployment in Fiveonefour hosting. Copy it from the URL in your project overview dashboard.
    • FIVEONEFOUR_ADMIN_API_BEARER_TOKEN is the API key sent in the request header when calling the Fiveonefour hosting Admin API at FIVEONEFOUR_HOST. It is a secret and must not be committed to source control. Store it securely in a password manager.
  2. Locate generated migration plan

    The command should create a new /migrations directory at the root of your MooseStack project. Open the plan.yaml file in that directory and review it carefully to confirm which SQL resources will be created or modified.