Building a data-aware chat on top of your ClickHouse database with Next.js and MooseStack
TL;DR
- Setting up high-performance chat in your applications can be simple.
- That baseline delivers real value.
- Then extend it: richer context, more sources, visualizations.
This guide will:
- Help you decide when and how to prototype in-app chat and get it adopted
- Help you improve your chat with rich context and connect additional data sources to it
- Help you embed chat into your existing Next.js application
Overview
This guide is about building a conversational interface embedded directly into your product that can answer questions by querying your actual data. Users ask in natural language, and the system responds with results grounded in live queries and the context of your schema, metrics, and permissions.
This works because modern LLMs can call tools: they can inspect table metadata and run structured queries through a controlled interface (in this guide, an MCP server over ClickHouse), then explain the output in plain language.
What this guide is not about: a generic chatbot that summarizes docs, a RAG-style assistant fed with your knowledge base, or a support bot optimized for ticket deflection. The focus here is chat that can actually access and query your data: it can inspect schemas, run SQL, compute aggregates, and explain real results inside the workflows your app already supports.
Why chat is worth building, and why data is key
A data-connected chat interface makes analytics more available and more usable: users ask questions directly, and the system answers by querying your data rather than forcing dashboard navigation.
Benefits
Democratize data access
Non-technical users can query complex datasets without SQL or dashboard training.
Reduce friction
Eliminate the five-click problem: users ask instead of navigating menus, filters, and dashboards.
Context-aware exploration
Conversation history enables follow-ups and iterative refinement without starting over.
Faster decisions
Answers arrive in seconds instead of waiting on analysts or learning a new interface.
Lower support load
Self-serve reduces repeated questions and tickets to data and analytics teams.
Common use cases
Data-connected chat works best when users are trying to answer concrete questions from live, structured data, especially when the questions change from moment to moment and you cannot realistically pre-build every dashboard and drill-down.
Use Cases
Internal analytics and business intelligence
Chat is a fast layer over your existing metrics and dashboards. Executives can ask follow-ups without waiting on an analyst. Business users can generate ad-hoc cuts of the data without learning SQL. Analysts benefit too: chat is an efficient starting point for exploration when first getting familiar with a dataset.
Customer-facing analytics products
If your product already has analytics value, chat can make that value feel immediate. SaaS customers can ask about usage, adoption, and trends without needing a bespoke dashboard for every role. E-commerce operators can explore sales patterns with natural follow-ups. Customers get a flexible interface that adapts to their questions.
Operational workflows
Chat can serve as a query interface for support and ops workflows. DevOps teams can use chat for chatops-style exploration of metrics. Supply chain teams can query stock levels and fulfillment timelines. The goal is rapid, auditable data lookup: the system shows what it queried and why.
Across all of these, the pattern is the same: chat is most valuable when it is grounded in the systems of record, fast enough to support iterative exploration, and integrated into the workflows where decisions actually get made.
Implementation strategy: fast baseline, then iterate
Getting to “working” is relatively quick because the core plumbing is now mostly standardized: an LLM that can call tools, an MCP server that exposes your database safely, and an OLAP backend that can answer analytical questions fast. MooseStack packages those pieces so you can stand up a baseline end-to-end without spending weeks on glue code, and the Next.js UI in this guide is just a reference client (the backend works with any frontend).
The rest of the effort is still worth it because the quality of a data chat experience is determined by product-specific details: how your data is modeled, how business terms map to tables and metrics, what permissions and tenancy rules apply, and how results should be presented inside your existing workflows. Once the baseline is live, you can iterate efficiently on accuracy, latency, context, and UX to capture the remaining value without rebuilding the foundation.
When this approach is worth building
Chat is worth investing in once your product has meaningful data gravity: users regularly ask questions that require slicing, aggregation, or comparison, and the existing UI forces them through dashboards, filters, or support channels to get answers. If you already maintain analytics views or a metrics layer, but users still need help finding or interpreting results, chat can reduce that friction immediately.
It is especially compelling when your team is fielding repeat questions, building one-off dashboards for narrow use cases, or struggling to productize a long tail of analytical queries. In those cases, chat becomes a flexible interface that scales with your schema, rather than another surface area you have to constantly redesign.
Build Decisions
What this guide is
This is an execution guide for shipping a data-connected chat feature with MooseStack and a Next.js reference client. It helps you make the key upfront decisions, get to a working baseline quickly (via the tutorial), then iterate toward production quality by improving context, adding sources, and embedding chat into your app experience (not leaving it as a demo UI).
What this guide is not
- A generic chatbot, support bot, or RAG recipe, nor a guide to fine-tuning your chat or creating agents for common user questions.
- A one-size-fits-all architecture. This is a set of patterns you adapt to your constraints.
- A promise that “LLM answers are correct by default.” You will design for verification, observability, and guardrails.
Implementation decisions
The decisions below define the shape of your chat system. The template defaults to a simple, opinionated stack (internal-first deployment, ClickHouse with rich metadata, SQL-backed tools via MCP, Claude Haiku as the model) and shows where to extend or harden as requirements grow.
| Decision | Options | Implementation |
|---|---|---|
| Data access scope | Narrow (specific tables) vs Broad (full schema) | Template uses readonly mode with row limits in mcp.ts; extend with table allowlists |
| Data sources | Batch (S3/Parquet), Operational (streams, OLTP replicas) | Tutorial covers S3/Parquet bulk load; see Ingest docs for streaming |
| Latency optimization | Raw tables, Materialized views, Denormalized models | Define MVs in moosestack-service/app/ for time buckets, top-N, common group-bys |
| Schema context | None, Table comments, Column comments with semantics | get_data_catalog tool exposes column comments; add JSDoc comments to your data models (see Appendix) |
| MCP tools | query_clickhouse only, Add catalog discovery, Add search/RAG | Both tools in moosestack-service/app/apis/mcp.ts; extend serverFactory() to add search or other tools |
| Model provider | Anthropic Claude, OpenRouter, Others | Set ANTHROPIC_API_KEY; change model in agent-config.ts; see example apps for OpenRouter integration |
| Deployment scope | Internal only, Customer-facing | Ship internal with audit trail; add governance before customer-facing |
| Authentication | No auth, API key, User JWT passthrough | Template uses PBKDF2 API key auth in mcp.ts; upgrade to JWT for user-scoped access |
| Access controls | Tool allowlists, Scoped views, Row-level security | Template enforces ClickHouse readonly mode; add scoped views for tenants; plan RLS for enterprise |
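The "Data access scope" row above mentions extending the template with table allowlists. A minimal sketch of that check is below; the table names, function name, and its placement in mcp.ts are assumptions for illustration, not template code, and a regex scan is intentionally conservative rather than a real SQL parser.

```typescript
// Hypothetical allowlist guard for the query_clickhouse tool. This is a
// naive regex scan, not a SQL parser -- treat it as defense-in-depth on top
// of ClickHouse readonly mode, not as a security boundary on its own.
const ALLOWED_TABLES = new Set(["amazon_reviews", "daily_metrics"]); // example names

export function referencesOnlyAllowedTables(sql: string): boolean {
  // Capture the identifier after each FROM/JOIN, tolerating `db`.`table` forms.
  const pattern = /\b(?:from|join)\s+`?([A-Za-z0-9_]+)`?(?:\.`?([A-Za-z0-9_]+)`?)?/gi;
  let match: RegExpExecArray | null;
  let sawTable = false;
  while ((match = pattern.exec(sql)) !== null) {
    sawTable = true;
    const table = (match[2] ?? match[1]).toLowerCase(); // part after the dot, if any
    if (!ALLOWED_TABLES.has(table)) return false;
  }
  // Reject queries with no recognizable table reference
  // (e.g. table functions or references hidden in forms the regex can't see).
  return sawTable;
}
```

Because the scan is deliberately strict, anything it cannot positively match (table functions like `s3(...)`, bare `SELECT 1`) is rejected rather than allowed through.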
Tutorial: From Parquet in S3 to Chat Application
Prerequisites
- GitHub account
- Claude API key
- Boreal account
- Vercel account
- Node.js v20+
- pnpm v8+
- Docker Desktop (must be running)
- Moose CLI
- Recommended: a copilot like Claude Code, Cursor, or Codex
Data source
This tutorial requires a Parquet data source in S3. Bring your own, or use the Amazon Customer Reviews dataset (150M+ rows, no auth required).
In this tutorial, you’ll bootstrap the MooseStack MCP template, load real Parquet data from S3 into ClickHouse, validate queries via the MCP-enabled chat UI, then deploy the backend to Boreal and the Next.js app to Vercel.
Existing app?
If you already have a Next.js app that you want to add this chat to, see Adding Chat to Your Existing Next.js Application.
Architecture & Scope
This tutorial bootstraps a complete chat-over-data application using the MooseStack MCP template with Parquet data from S3.
What you'll build:
| Component | Description |
|---|---|
| Chat UI | Resizable panel with message input/output, tool result rendering |
| API Route | Handles chat requests, streams responses from Claude |
| MCP Server | Exposes query_clickhouse and get_data_catalog tools |
| Authentication | Bearer token flow between frontend and backend |
| Production deployment | Backend on Boreal, frontend on Vercel |
What this covers:
- Installing the Moose CLI and initializing the template
- Modeling your data and bulk-loading from S3 into ClickHouse
- Testing chat locally and customizing the frontend
- Deploying MooseStack to Boreal and the Next.js app to Vercel
Not in scope: Continuous data ingestion. This covers initial bulk load only. For recurring ingestion, see MooseStack Workflows.
Setup
Install Moose CLI
```bash
bash -i <(curl -fsSL https://fiveonefour.com/install.sh) moose
```

Initialize your project locally and install dependencies
```bash
moose init <project-name> typescript-mcp # initialize your project
cd <project-name>
pnpm install # install dependencies
```

Create and configure .env files
Create .env files
```bash
cp packages/moosestack-service/.env.{example,local}
cp packages/web-app/.env.{example,local}
```

Create API Key authentication tokens:
```bash
cd packages/moosestack-service
moose generate hash-token # use output for the API Key & Token below
```

Set environment variables for API:
- Set your API Key in `packages/moosestack-service/.env.local` to the ENV API KEY generated by `moose generate hash-token`
- Set your API Token in `packages/web-app/.env.local` to the Bearer Token generated by `moose generate hash-token`
If you want to use the chat in the application (which will have MCP-based access to the local ClickHouse managed by MooseDev), make sure to set your Anthropic key as an environment variable:
```bash
echo "ANTHROPIC_API_KEY=your_api_key_here" >> packages/web-app/.env.local
```

If you don't have an Anthropic API key, you can get one here: Claude Console
Local Development
Your demo app is now set up, though it is not yet populated with data. Get it running locally.
Make sure Docker Desktop is running:

```bash
docker version
```

Run the whole stack
From the root of your project, run:
```bash
pnpm dev # run the MooseStack and web app services
```

Alternatively, run MooseStack and your frontend separately
From the root of your project, run:
```bash
pnpm dev:moose # Start MooseStack service only
pnpm dev:web # Start web app only
```

You can also run `moose dev` from your moose project directory, `packages/moosestack-service`.
Your web application is available at http://localhost:3000. Don't be alarmed if it looks empty: it is a blank canvas for whatever user-facing analytics you want to build. The chat is already configured; click the chat icon at the bottom right to open it. Our engineer GRoy put some work into that chat panel: it is resizable, has differentiated scrolling, and more. Feel free to use and modify it however you like.
No data yet
The chat will be functional, but since you've not added any data to your MooseStack project, the tools available in the chat won't be able to retrieve any data. The next section covers exploring and modeling the data.
(Optional) Start up MooseDev MCP with your copilot
Once your MooseDev Server is up and running, your development MCP server is ready to be connected to your copilot. For documentation for each copilot, see docs. By way of example, here's how you configure Claude Code:
```bash
claude mcp add --transport http moose-dev http://localhost:4000/mcp
```

Note: you may have to do this before you start Claude Code or your IDE (or restart your copilot for the MCP server to be picked up). You can validate that it is working with the `/mcp` slash command.
(Optional) Add Context7 for MooseStack documentation
Context7 can serve up-to-date MooseStack docs to your copilot.
Setup: Install Context7 for your IDE. You may have to do this before you start Claude Code or your IDE (or restart your copilot for the MCP server to be picked up).
Usage: When you refer to MooseStack documentation in prompts, add use context7 for better results.
Model your data
The following steps cover how to get data from a source, model that data in MooseStack (creating the relevant ClickHouse tables and other infrastructure), and then load the data into ClickHouse (either local or Boreal-hosted).
You can model your data manually or you can generate a data model from your data. This guide will walk through the generative approach. This guide assumes you have direct access to the files in S3.
If the files are relatively small, just build up a packages/moosestack-service/context/ directory to gather context (you should gitignore your context directory), e.g.:
Copy data from S3
There are many ways to copy data down from s3, e.g.:
Using the S3 CLI (once you’ve authenticated):
```bash
aws s3 ls s3://source-data/ # list files in S3
aws s3 cp s3://source-data/ . --recursive # copy data from S3 to the context directory
```

Using moose query (this only copies one file at a time):

```bash
cd packages/moosestack-service # navigate to your MooseStack project
moose query "SELECT * FROM s3('s3://source-data/file-name.parquet', '<Access-Key-ID>', '<Secret-Access-Key>', 'parquet') LIMIT 10;" > context/data/file-name.txt # write the query result to a .txt file
```

Need a dataset to play with?
The Amazon Customer Reviews dataset has 150M+ product reviews in Parquet on S3. No authentication required:
```bash
aws s3 cp \
  s3://datasets-documentation.s3.eu-west-3.amazonaws.com/amazon_reviews/ \
  packages/moosestack-service/context/data/ \
  --recursive --no-sign-request
```

Or query directly:
```bash
moose query "
SELECT * FROM s3(
    'https://datasets-documentation.s3.eu-west-3.amazonaws.com/amazon_reviews/amazon_reviews_2015.snappy.parquet'
) LIMIT 10;
" > packages/moosestack-service/context/data/amazon-reviews-sample.txt
```

Add other context relevant for data modeling
OLAP, and especially ClickHouse, benefits from rigorous data modeling. LLMs aren't perfect at understanding the nuances of OLAP data modeling, so here's a reference guide: https://github.com/514-labs/olap-agent-ref (you can clone it into the context/rules directory).
```bash
cd packages/moosestack-service/context/rules
gh repo clone 514-labs/olap-agent-ref . # the gh CLI has less trouble with nested repos
```

Model your data manually or with your copilot
Create:
- A data model object defining the shape of the data to be ingested
- An OlapTable object, declaring that table in MooseStack
- A reference in the index at the root of the MooseStack project (so that MooseOLAP will create the table)
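As a rough sketch of the first two artifacts, here is a hypothetical model for the Amazon reviews sample. Every field name and type below is an assumption to verify against your own Parquet files, and the OlapTable line in the trailing comment mirrors the template's pattern rather than quoting it.

```typescript
// Hypothetical data model -- verify every field against your actual Parquet schema.
export interface AmazonReview {
  review_date: Date;        // maps to a ClickHouse Date
  marketplace: string;      // low-cardinality (a handful of country codes)
  customer_id: string;
  product_id: string;
  star_rating: number;      // 1-5; a UInt8 in ClickHouse
  verified_purchase: boolean;
  review_body: string;
}

// A row conforming to the model, i.e. what a loaded record should look like:
export const sampleReview: AmazonReview = {
  review_date: new Date("2015-08-31"),
  marketplace: "US",
  customer_id: "10001",
  product_id: "B00EXAMPLE",
  star_rating: 5,
  verified_purchase: true,
  review_body: "Great product.",
};

// In the template you would then declare the table and export it from app/index.ts,
// along the lines of:
//   export const ReviewTable = new OlapTable<AmazonReview>("AmazonReview");
```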
Do this sequentially, e.g. with the following prompt pattern:
```
'<project-name>/packages/moosestack-service/app/ingest/models.ts' look at this file. It does two things: declares an interface "DataEvent", and then creates an IngestPipeline that declares a table, streaming topic and ingest API for that interface. I want to do this for the data that I'm ingesting, that I've extracted samples of to this directory: '<project-name>/packages/moosestack-service/context/data'. Let's go step by step. First, let's create the DataModel interface. Refer to the data sample here: '<project-name>/packages/moosestack-service/context/data/sample1.parquet', and the data dictionary here: '<project-name>/packages/moosestack-service/context/data/data_dictionary.csv'. Use good OLAP data modeling practices (tight typing, LowCardinality where appropriate, etc). MooseStack data modeling docs are here: https://docs.fiveonefour.com/moosestack/data-modeling Supported types: https://docs.fiveonefour.com/moosestack/supported-types Then, create the OlapTable object.
```

Make sure to then prompt the copilot to export the object from moosestack-service/index.ts too:

```
'<project-name>/packages/moosestack-service/app/index.ts' make sure the above is exported in the index
```

Verify the tables were created correctly
The dev server should then pick up the new table, and the copilot should be able to confirm this with the MooseDev MCP, or by being prompted to use moose query:
```
Ensure the table was created by using `moose query` from the moosestack service directory
```

Or you can do this manually:

```bash
moose query "SELECT name, engine, total_rows FROM system.tables WHERE database = currentDatabase();"
```

It is also good practice to ask the copilot to double-check the generated model against the sample data (LLMs can make assumptions about data types, even when presented with samples):

```
validate the inferred types against the sample data in context/data - check for type mismatches or incorrect assumptions
```

Repeat this for each table you want to model.
Bulk add data locally
Create a SQL file to load up your data from your remote source to your local ClickHouse:
```sql
-- load-data.sql
INSERT INTO `local`.PlayerActivity
SELECT * FROM s3(
    's3://source-data/file.parquet',
    '<Access-Key-ID>', '<Secret-Access-Key>',
    'Parquet'
);
```

Make sure to properly apply any transformations to conform your S3 data to the data model you've created. ClickHouse will handle many of these conversions automatically. Notably, though:

- Column renamings will have to be done in SQL
- Default values in ClickHouse only apply where the column is omitted from an insert. If the column was NULL in the source, you will have to cast it in the insert.
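For example, a load that both renames a column and backfills source NULLs might look like the following. The column names here (event_ts, player_region) are hypothetical placeholders, not part of any dataset referenced above.

```sql
-- Hypothetical mapping: source columns event_ts / player_region do not match
-- the model's activity_time / region, and region may be NULL in the source.
INSERT INTO `local`.PlayerActivity
SELECT
    event_ts AS activity_time,                    -- rename must be done in SQL
    coalesce(player_region, 'unknown') AS region  -- source NULLs don't trigger CH defaults; cast explicitly
FROM s3(
    's3://source-data/file.parquet',
    '<Access-Key-ID>', '<Secret-Access-Key>',
    'Parquet'
);
```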
Execute the SQL query to load data into your local ClickHouse:
```bash
moose query -f load-data.sql
```

It will return:

```
Node Id: moosestack-service::2dadb086-d9fb-4e42-9462-680185fac0ef Query 0 rows
```

This is expected: the query doesn't return any rows of data to the query client. To validate that it worked, use this query:

```bash
moose query "SELECT COUNT(*) FROM \`local\`.tablename"
```

You will now have a local development environment ready to test and iterate on: your local MooseStack up and running, local ClickHouse populated with real data, and the front end ready to test.
Test and extend your frontend application
Chat with your data
Just go to http://localhost:3000: everything should be good to go. Chat away!
System prompt
To customize your chat's system prompt, edit packages/web-app/src/features/chat/system-prompt.ts.
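As an illustration, a prompt tuned to a reviews dataset might pin down tool usage and business vocabulary. The export name follows the template's system-prompt.ts, but the content below is an assumption to adapt, not the template's actual prompt.

```typescript
// Hypothetical system prompt -- adjust table and column names to your schema.
export const systemPrompt = `
You are an analytics assistant for a product-reviews dataset in ClickHouse.
- Call get_data_catalog before writing SQL so table and column names are exact.
- Prefer aggregates (counts, averages, top-N) over returning raw rows.
- Always include a LIMIT clause; never attempt to modify data.
- "rating" refers to the star_rating column (an integer from 1 to 5).
`.trim();
```

Encoding business terms ("rating" means star_rating) is often where the biggest accuracy gains come from.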
Create custom API endpoints and a custom front-end
There's a bootstrapped Next.js application, and an Express app that you can extend to serve your own APIs.
Example prompt that references a ShadCN component, Express/MooseStack docs, and your app folder:
```
I want to add a Data Table component from ShadCN (here: https://ui.shadcn.com/docs/components/data-table.md) to the web app here: <project-name>/packages/web-app. I want it to serve data from DataModel here: <project-name>/packages/moosestack-service/app/ingest/models.ts. Please create an Express API in this directory to serve this component: <project-name>/packages/moosestack-service/app/apis

Reference docs:
- Express API docs: https://docs.fiveonefour.com/moose/app-api-frameworks/express/llm-ts.txt

Examples:
- cdp-analytics routes: https://github.com/514-labs/moose/tree/main/examples/cdp-analytics/packages/moosestack-service/app/apis
- cdp-analytics services: https://github.com/514-labs/moose/tree/main/examples/cdp-analytics/packages/moosestack-service/app/services
- fastify-moose router: https://github.com/514-labs/moose/blob/main/examples/fastify-moose/src/router.ts
- fastify-moose controllers: https://github.com/514-labs/moose/tree/main/examples/fastify-moose/src/controller

Follow Express best practices: separate routing from business logic, keep route handlers thin, maintain type safety throughout.
```

You should see your frontend application update at http://localhost:3000.
Deploy to Boreal and Vercel
We'll prepare the application for deployment by setting up authentication, then deploy the MooseStack application to Boreal and the web app to Vercel. You can set up authentication and hosting however you prefer; if we haven't covered your preferred option in our docs, reach out in our Slack community: https://slack.moosestack.com/.
Authentication was covered at the start of this guide, but that covers backend-to-frontend authentication only (you should also add authentication to restrict access to your frontend).
Deploy MooseStack to Boreal
Go to boreal.cloud, and set up an account. Link your Github account. Create an organization. Create a new project:
Select import a MooseStack Project:
Set the path to the root of the MooseStack service:
```
packages/moosestack-service
```

And set the Environment Variables used in the project (following the same steps defined above):

- MOOSE_INGEST_API_KEY
- MOOSE_CONSUMPTION_API_KEY
- MOOSE_ADMIN_TOKEN
- MOOSE_WEB_APP_API_KEYS

Continue, selecting Boreal default hosting (or point it at your own managed ClickHouse and Redpanda instances if you like).
Click Deploy, and you'll see your MooseStack application being deployed.
Deploy your Next.js frontend to Vercel
Authentication
The authentication set up above just ensures your backend and frontend can communicate securely. You still need to add authentication to your frontend to restrict access to your data.
Vercel offers this natively for preview deployments. For production, consider NextAuth, Auth0, or Clerk.
See Vercel docs:
Deployment
- From the Vercel home-page, add a new project.
- Point it at this project, and set the root directory to packages/web-app
- Set the following environment variables
  - ANTHROPIC_API_KEY: Your Anthropic API key for chat functionality
  - NEXT_PUBLIC_API_URL: Your Boreal endpoint URL (e.g., `https://your-project.boreal.cloud`)
  - API_KEY: Use the bearer token generated earlier

You can find your Boreal endpoint URL at the top of the project overview page:
Hydrate your production deployment
Your project is now deployed. You have a Vercel hosted frontend. You have a Boreal hosted backend, with tables, APIs etc. set up for you.
Your backend, however, is still unpopulated.
This section will cover how to get data into prod.
Note: it assumes a bulk ingest from a Parquet file on S3 with direct insertion through SQL, like the rest of this tutorial. If you configured a recurring workflow, that would automate data ingest (depending on the trigger). If you set up an ingestion endpoint, you may need to send data to that endpoint.
Find your Boreal connection string / database details:
It is in the Database tab of your deployment overview:
Make sure to select the appropriate connection string type:
Connect your SQL client, and run the following ClickHouse SQL query:
```sql
-- Bulk load Parquet data from S3 into ClickHouse
INSERT INTO `<clickhouse-database>`.`<table-name>` -- Target ClickHouse database and table
SELECT *
FROM s3(
    's3://<bucket-name>/<path-to-file>.parquet', -- S3 bucket and path to the Parquet file
    '<aws-access-key-id>',                       -- AWS Access Key ID
    '<aws-secret-access-key>',                   -- AWS Secret Access Key
    'Parquet'                                    -- File format
);
```

This will again return 0 rows. This is expected. You can validate that the transfer worked correctly as follows:

```sql
SELECT COUNT(*) FROM `<clickhouse-database>`.tablename
```

Troubleshooting
Tutorial: Adding Chat to Your Existing Next.js Application
Assumptions
- You have a monorepo
- Your application already has a Next.js service
- Your application already has a MooseStack service, or you are willing to create one
- You are using Express for your APIs (other frameworks work, but aren't covered here)
Prerequisites
- GitHub account
- Claude API key
- Boreal account
- Vercel account
- Node.js v20+
- pnpm v8+
- Docker Desktop (must be running)
- Moose CLI
- Recommended: a copilot like Claude Code, Cursor, or Codex
Source template: All code snippets reference github.com/514-labs/moosestack/tree/main/templates/typescript-mcp
Architecture & Scope
This tutorial adds a chat panel to your existing Next.js app that queries your ClickHouse data using natural language. The chat uses Claude to interpret questions and call MCP tools that execute SQL queries against your database.
What you'll build:
| Component | Description |
|---|---|
| Chat UI | Resizable panel with message input/output, tool result rendering |
| API Route | Handles chat requests, streams responses from Claude |
| MCP Server | Exposes query_clickhouse and get_data_catalog tools |
| Authentication | Bearer token flow between frontend and backend |
| Production deployment | Backend on Boreal, frontend on Vercel |
What this covers:
- Adding MooseStack service with MCP server to your monorepo
- Integrating chat UI components into your Next.js app
- Configuring authentication between frontend and backend
- Testing and deploying the integrated system
Setup
Backend Setup: MooseStack Service with MCP Server
Already have a MooseStack project?
Backend Setup: Add the MooseStack service to your project
Pull the moosestack-service from the typescript-mcp template. It includes the MCP server (with query_clickhouse and get_data_catalog tools), a sample DataEvent data model, and all necessary dependencies.
```bash
# From your monorepo root
mkdir -p packages
cd packages

# Add the moosestack-service to your workspace
pnpm dlx tiged 514-labs/moosestack/templates/typescript-mcp/packages/moosestack-service ./moosestack-service
```

Replace the sample data model
The template includes a sample DataEvent model. Replace it with your own data models using the AI-assisted approach above.
Backend Setup: Add moosestack-service to your workspace
If you don't have a pnpm-workspace.yaml at your monorepo root, create one:
```yaml
packages:
  - 'packages/*'
```

Install dependencies
```bash
cd packages/moosestack-service
pnpm install
```

Configure environment variables

```bash
cp .env.example .env.local
```

Generate an API key for authentication:
```bash
moose generate hash-token
```

This outputs two values:

- ENV API KEY → Put this in `packages/moosestack-service/.env.local` as `MCP_API_KEY`
- Bearer Token → Save this for your frontend config (see Frontend Setup)

Your .env.local should look like:

```
MCP_API_KEY=<paste-the-ENV-API-KEY-here>
```

Add dev scripts to your root package.json
```json
{
  "scripts": {
    "dev": "pnpm --parallel --stream -r dev",
    "dev:moose": "pnpm --filter moosestack-service dev",
    "dev:web": "pnpm --filter web-app dev"
  },
  "pnpm": {
    "onlyBuiltDependencies": [
      "@confluentinc/kafka-javascript",
      "@514labs/kafka-javascript"
    ]
  }
}
```

Frontend Setup
shadcn/ui setup
If you don't have shadcn/ui set up, you'll also need the UI components. See ui.shadcn.com/docs/installation.
Install required packages
```bash
cd packages/web-app # or your Next.js app directory
pnpm add ai@5.0.106 @ai-sdk/anthropic@2.0.53 @ai-sdk/mcp@0.0.7 @ai-sdk/react@2.0.106
pnpm add react-resizable-panels@^3.0.6 react-markdown@^10.1.0 remark-gfm@^4.0.1
pnpm add lucide-react@^0.552.0
```

Add these variables to your .env.local:

```bash
# From https://console.anthropic.com/
ANTHROPIC_API_KEY=<your-anthropic-api-key>

# Output from `moose generate hash-token`
MCP_API_TOKEN=<the-bearer-token>

# http://localhost:4000 for local dev
MCP_SERVER_URL=http://localhost:4000
```

Add environment variable helper
Sample prompt:
```
Create a TypeScript module that exports helper functions for accessing MCP_SERVER_URL and ANTHROPIC_API_KEY environment variables, throwing descriptive errors if they're not set.
```

Alternatively, add it manually if you prefer:

```typescript
// src/env-vars.ts
export function getMcpServerUrl(): string {
  const value = process.env.MCP_SERVER_URL;
  if (!value) {
    throw new Error("MCP_SERVER_URL environment variable is not set");
  }
  return value;
}

export function getAnthropicApiKey(): string {
  const value = process.env.ANTHROPIC_API_KEY;
  if (!value) {
    throw new Error("ANTHROPIC_API_KEY environment variable is not set");
  }
  return value;
}
```

Create .env.development for defaults:

```
MCP_SERVER_URL=http://localhost:4000
```

API Routes
The chat API route handles incoming messages, creates an MCP client to connect to your MooseStack server, and streams the AI response back to the frontend.
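The validation step can be sketched as a small pure function. The template's actual route.ts may differ; the function name and the message shape below are assumptions for illustration.

```typescript
// Hypothetical input validation for the chat route -- shows the shape of the
// check, not the template's exact code.
type ChatMessage = { role: "user" | "assistant"; content: string };

export function parseMessages(body: unknown): ChatMessage[] {
  if (typeof body !== "object" || body === null || !Array.isArray((body as any).messages)) {
    throw new Error("Request body must include a messages array");
  }
  return (body as { messages: unknown[] }).messages.map((m, i) => {
    const msg = m as Partial<ChatMessage>;
    if ((msg.role !== "user" && msg.role !== "assistant") || typeof msg.content !== "string") {
      throw new Error(`messages[${i}] must have a role and string content`);
    }
    return { role: msg.role, content: msg.content };
  });
}
```

Failing fast here lets the route return a clean 400 before any model or MCP calls are made.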
Add the chat feature to your Next.js app.
Implement the chat feature modules from the template's chat feature directory:
- `agent-config.ts`: Model configuration and MCP client setup
- `get-agent-response.ts`: Core logic for streaming AI responses via MCP
- `system-prompt.ts`: System prompt for the chat agent
Sample prompt:
```
Implement a chat feature module for my Next.js app based on the files in https://github.com/514-labs/moosestack/tree/main/templates/typescript-mcp/packages/web-app/src/features/chat. Create agent-config.ts, get-agent-response.ts, and system-prompt.ts in src/features/chat/.
```

Add the API route
Create a Next.js API route at src/app/api/chat/route.ts that handles POST requests, validates the messages array, and streams responses via getAgentResponse. See the template's route.ts for reference.
Sample prompt:
```
Create a Next.js API route at src/app/api/chat/route.ts that handles POST requests with a messages array, validates input, calls getAgentResponse from @/features/chat/get-agent-response, and returns proper error responses. Reference: https://github.com/514-labs/moosestack/blob/main/templates/typescript-mcp/packages/web-app/src/app/api/chat/route.ts
```

Frontend Setup (Optional): Add status endpoint
Create src/app/api/chat/status/route.ts to check if the Anthropic key is configured. See the template's status route for reference.
Sample prompt:
```
Create a Next.js API route at src/app/api/chat/status/route.ts that returns JSON indicating whether ANTHROPIC_API_KEY is configured. Reference: https://github.com/514-labs/moosestack/blob/main/templates/typescript-mcp/packages/web-app/src/app/api/chat/status/route.ts
```

System prompt
To customize your chat's system prompt, edit src/features/chat/system-prompt.ts.
Frontend Setup: UI Components
The chat UI consists of several components. Rather than copying each individually, grab the entire chat feature folder and the required layout components.
Add chat UI components
Implement the chat UI components from the template's chat feature directory. Key components include:
- `chat-ui.tsx`: Main chat interface
- `chat-input.tsx`, `chat-output-area.tsx`: Input and message display
- `tool-invocation.tsx`, `clickhouse-tool-invocation.tsx`, `tool-data-catalog.tsx`: Tool result rendering
- `code-block.tsx`, `text-formatter.tsx`: Content formatting
Sample prompt:
```
Implement chat UI components for my Next.js app based on the files in https://github.com/514-labs/moosestack/tree/main/templates/typescript-mcp/packages/web-app/src/features/chat. Create all .tsx components in src/features/chat/.
```

Add layout components
Implement the layout components from the template's layout directory:
- `resizable-chat-layout.tsx`: Resizable panel layout
- `chat-layout-wrapper.tsx`: Chat layout wrapper
- `content-header.tsx`: Header component
Sample prompt:
```
Implement layout components for my Next.js app based on the files in https://github.com/514-labs/moosestack/tree/main/templates/typescript-mcp/packages/web-app/src/components/layout. Create resizable-chat-layout.tsx, chat-layout-wrapper.tsx, and content-header.tsx in src/components/layout/.
```

Add required shadcn/ui components
The chat UI uses these shadcn/ui components:
```bash
npx shadcn@latest add button textarea scroll-area resizable collapsible badge
```

Add the cn utility
If you don't have src/lib/utils.ts, add the standard Tailwind class merge utility:
Sample prompt:
```
Create src/lib/utils.ts with a cn() function that combines clsx and tailwind-merge for conditional class name handling.
```

Update your root layout
Wrap your app with the ChatLayoutWrapper in src/app/layout.tsx. See the template's layout.tsx for reference.
Sample prompt:
Update my src/app/layout.tsx to wrap the app with ChatLayoutWrapper from @/components/layout/chat-layout-wrapper. If using next-themes or another theme provider, wrap ChatLayoutWrapper inside it. Reference: https://github.com/514-labs/moosestack/blob/main/templates/typescript-mcp/packages/web-app/src/app/layout.tsx
Theme providers
If you're using next-themes or another theme provider, wrap ChatLayoutWrapper inside it, not the other way around.
Integration & Testing
Start both services
# Terminal 1: MooseStack
pnpm dev:moose
# Terminal 2: Next.js
pnpm dev:web
Alternatively, run both with pnpm dev.
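Assuming a pnpm workspace, the root package.json scripts behind these commands might look something like the following. The package filter names and the use of concurrently are assumptions; match them to your workspace layout:

```json
{
  "scripts": {
    "dev:moose": "pnpm --filter moosestack-service dev",
    "dev:web": "pnpm --filter web-app dev",
    "dev": "concurrently \"pnpm dev:moose\" \"pnpm dev:web\""
  }
}
```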
Verify the MCP server is running
curl http://localhost:4000/tools \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <your-mcp-api-token>" \
-d '{"jsonrpc":"2.0","id":1,"method":"tools/list"}'
You should see a response listing the query_clickhouse and get_data_catalog tools.
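You can also script the same check. Here is a hedged TypeScript sketch that builds the JSON-RPC request the curl command sends; the /tools path and Bearer auth scheme mirror the command above:

```typescript
// Builds the same JSON-RPC tools/list request as the curl command above.
// The endpoint path and auth header are taken from this guide's MCP setup.
export function buildToolsListRequest(baseUrl: string, token: string) {
  return {
    url: `${baseUrl}/tools`,
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${token}`,
      },
      body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "tools/list" }),
    },
  };
}

// Usage:
// const { url, init } = buildToolsListRequest("http://localhost:4000", process.env.MCP_API_TOKEN!);
// const result = await fetch(url, init).then((r) => r.json());
```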
Test the chat
- Open your app at http://localhost:3000
- Click the chat button (floating button in the bottom-right)
- Ask a question like "What tables are available?"
- The AI should call get_data_catalog and show you the results
Deploy to Boreal and Vercel
We'll prepare the application for deployment by setting up authentication, then deploy the MooseStack application to Boreal and the web app to Vercel. You can use whichever authentication and hosting you prefer; if our docs don't cover your preferred option, reach out in our Slack community: https://slack.moosestack.com/.
The authentication covered at the start of this guide only secures backend-to-frontend communication; you should also add authentication to restrict access to your frontend.
Deploy MooseStack to Boreal
Go to boreal.cloud and set up an account. Link your GitHub account. Create an organization. Create a new project:
Select import a MooseStack Project:
Set the path to the root of the MooseStack service:
packages/moosestack-service
And set the environment variables used in the project (following the same steps defined above):
- MOOSE_INGEST_API_KEY
- MOOSE_CONSUMPTION_API_KEY
- MOOSE_ADMIN_TOKEN
- MOOSE_WEB_APP_API_KEYS
Continue, selecting Boreal's default hosting (or point it at your own managed ClickHouse and Redpanda instances if you prefer).
Click Deploy, and you'll see your MooseStack application being deployed.
Deploy your Next.js frontend to Vercel
Authentication
The authentication set up above just ensures your backend and frontend can communicate securely. You still need to add authentication to your frontend to restrict access to your data.
Vercel offers this natively for preview deployments. For production, consider NextAuth, Auth0, or Clerk.
See Vercel docs:
Deployment
- From the Vercel homepage, add a new project.
- Point it at this project and set the root directory to packages/web-app.
- Set the following environment variables:
- ANTHROPIC_API_KEY: Your Anthropic API key for chat functionality
- NEXT_PUBLIC_API_URL: Your Boreal endpoint URL (e.g., `https://your-project.boreal.cloud`)
- API_KEY: Use the bearer token generated earlier
You can find your Boreal endpoint URL at the top of the project overview page:
Troubleshooting
Appendix: Data context as code
ClickHouse allows you to embed table- and column-level metadata.
With MooseOlap, you can set these table- and column-level descriptions in your data models. For example:
export interface AircraftTrackingData {
  /** Unique aircraft identifier */
  hex: string;
  // no comment for this column
  transponder_type: string;
  /** callsign, the flight name or aircraft registration as 8 chars */
  flight: string;
  /** aircraft registration pulled from database */
  r: string;
  /** unique aircraft identifier */
  aircraft_type?: string;
  /** bitfield for certain database flags */
  dbFlags: number;
}
The /** JSDoc */ comments at the column level will now be embedded in your ClickHouse database. This additional context will be available to your chat, and retrievable with the same tool calls that retrieve data.
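Once deployed, you can confirm the comments landed by querying ClickHouse's system.columns table, either through your ClickHouse client or through the chat's own query_clickhouse tool. A sketch of the query; the table name here is an assumption based on the example above, and Moose may apply its own naming conventions:

```typescript
// Verification query: column comments embedded by MooseOlap show up in
// ClickHouse's system.columns table. Adjust the table name for your schema.
export const columnCommentsQuery = `
  SELECT name, comment
  FROM system.columns
  WHERE database = currentDatabase()
    AND table = 'AircraftTrackingData'
    AND comment != ''
`;
```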