The Moose development server includes a built-in Model Context Protocol (MCP) server that enables AI agents and IDEs to interact directly with your local development infrastructure. This allows you to use natural language to query data, inspect logs, explore infrastructure, and debug your Moose project.
Model Context Protocol (MCP) is an open protocol that standardizes how AI assistants communicate with development tools and services. Moose's MCP server exposes your local development environment—including ClickHouse, Redpanda, logs, and infrastructure state—through a set of tools that AI agents can use.
The MCP server runs automatically when you start development mode:
```bash
moose dev
```

The MCP server is available at: `http://localhost:4000/mcp`
The MCP server is enabled by default. To disable it, use `moose dev --mcp=false`.
Connect your AI assistant to the Moose MCP server. Most clients now support native HTTP transport for easier setup.
Setup: Use the Claude Code CLI (easiest method)
```bash
claude mcp add --transport http moose-dev http://localhost:4000/mcp
```

That's it! Claude Code will automatically connect to your Moose dev server.
Scope: This command adds the MCP server to Claude Code's project configuration, so it is available whenever you use Claude Code in this project. Other AI clients (Cursor, Windsurf, etc.) require separate configuration; see the tabs below.
Make sure moose dev is running before adding the server. The CLI will verify the connection.
Alternative: Manual configuration at `~/.claude/config.json`

```json
{
  "mcpServers": {
    "moose-dev": {
      "transport": "http",
      "url": "http://localhost:4000/mcp"
    }
  }
}
```

Make sure `moose dev` is running before using the MCP tools. The AI client will connect to `http://localhost:4000/mcp`.
The Moose MCP server provides five tools for interacting with your local development environment. Start with get_infra_map to understand your project structure, then use the other tools to query data, inspect streams, check logs, and diagnose issues.
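Under the hood, your AI client invokes these tools with MCP's standard JSON-RPC `tools/call` method over HTTP. As a rough sketch, the request it POSTs to `http://localhost:4000/mcp` looks like the payload built below. The tool names come from this page; the exact argument names each tool accepts are assumptions for illustration, not the server's documented schema.

```typescript
// Build the JSON-RPC 2.0 payload an MCP client sends to invoke a tool.
type JsonRpcRequest = {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params: { name: string; arguments: Record<string, unknown> };
};

function buildToolCall(
  id: number,
  tool: string,
  args: Record<string, unknown>,
): JsonRpcRequest {
  return {
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name: tool, arguments: args },
  };
}

// An AI client asking for the infra map filtered to "user" would POST
// something like this to http://localhost:4000/mcp:
const req = buildToolCall(1, "get_infra_map", { search: "user" });
console.log(JSON.stringify(req));
```

You normally never write this payload yourself; the AI client constructs it from your natural-language prompt.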
Get complete project topology showing all components with source file locations and data flow connections. This should be the first tool used.
What you get:
- Every component (topics, tables, API endpoints, functions, SQL resources, topic-to-table syncs) with its type, name, and source file location
- Connections describing the data flow between components (produces, transforms, ingests)
- Summary stats: total component count and a breakdown by type
Optional `search` parameter: `search="user"` finds UserEvents, user_table, INGRESS_User

Example prompts:
"Show me the infrastructure map"
```
components[64]{id,type,name,source_file}:
  Bar,topic,Bar,app/ingest/models.ts
  local_Bar,table,Bar,app/ingest/models.ts
  INGRESS_Foo,api_endpoint,POST ingest/Foo,app/ingest/models.ts
  Foo__Bar,function,Foo__Bar,""
  BarAggregated_MV,sql_resource,BarAggregated_MV,""
  Bar_local_Bar,topic_table_sync,"Bar -> local_Bar",app/ingest/models.ts
  ...
connections[35]{from,to,type}:
  INGRESS_Foo,Foo,produces
  Foo,Foo__Bar,transforms
  Foo__Bar,Bar,produces
  Bar,Bar_local_Bar,ingests
  Bar_local_Bar,local_Bar,ingests
  ...
stats:
  total_components: 64
  by_type: {topic: 11, table: 28, api_endpoint: 11, ...}
```

"Show me components related to Bar"
```
components[8]{id,type,name,source_file}:
  Bar,topic,Bar,app/ingest/models.ts
  local_Bar,table,Bar,app/ingest/models.ts
  local_BarAggregated,table,BarAggregated,app/views/barAggregated.ts
  EGRESS_bar,api_endpoint,GET bar,app/apis/bar.ts
  Foo__Bar,function,Foo__Bar,""
  BarAggregated_MV,sql_resource,BarAggregated_MV,""
  Bar_local_Bar,topic_table_sync,"Bar -> local_Bar",app/ingest/models.ts
  ...

Search: 'Bar' | Matched: 8 component(s), 6 connection(s)
```

Output format: The compact TOON table format maximizes readability while minimizing token usage. Component IDs can be used to construct resource URIs: `moose://infra/{type}s/{id}`
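The `moose://infra/{type}s/{id}` rule can be sketched as a tiny helper. The helper name is illustrative, not part of the Moose API; only the URI pattern comes from the tool's output format note above.

```typescript
// Build a resource URI from a component's type and id, following the
// moose://infra/{type}s/{id} pattern (note the pluralizing "s").
function resourceUri(componentType: string, componentId: string): string {
  return `moose://infra/${componentType}s/${componentId}`;
}

console.log(resourceUri("table", "local_Bar"));
// moose://infra/tables/local_Bar
```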
Use cases:
- Orienting yourself (or an AI agent) in an unfamiliar Moose project
- Finding the source file where a component is defined
- Tracing data flow end to end when debugging a pipeline
Retrieve and filter Moose development server logs for debugging and monitoring.
What you can ask for:
- Logs filtered by level (e.g., ERROR, WARN)
- Logs matching a search term
- A limited number of the most recent entries
Example prompts:
"Show me the last 10 ERROR logs"
```
Showing 10 most recent log entries from /Users/user/.moose/2025-10-10-cli.log
Filters applied:
  - Level: ERROR

[2025-10-10T17:44:42Z ERROR] Foo -> Bar (worker 1): Unsupported SASL mechanism: undefined
[2025-10-10T17:44:43Z ERROR] FooDeadLetterQueue (consumer) (worker 1): Unsupported SASL mechanism
[2025-10-10T17:51:48Z ERROR] server error on API server (port 4000): connection closed
...
```

"What WARN level logs do I have?"
```
Showing 6 most recent log entries
Filters applied:
  - Level: WARN

[2025-10-10T16:45:04Z WARN] HTTP client not configured - missing API_KEY
[2025-10-10T16:50:05Z WARN] HTTP client not configured - missing API_KEY
...
```

Tip: Combine filters for better results. For example, "Show me ERROR logs with 'ClickHouse' in them" combines level filtering with search.
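If you want to post-process fetched log entries yourself, the `[<timestamp> <LEVEL>] <message>` shape shown above is easy to parse. This is a sketch for working with the output, not part of the Moose tooling; the INFO and DEBUG levels are assumed in addition to the ERROR and WARN levels shown here.

```typescript
// Parse a Moose CLI log line of the form "[<timestamp> <LEVEL>] <message>".
type LogEntry = { timestamp: string; level: string; message: string };

function parseLogLine(line: string): LogEntry | null {
  const m = /^\[(\S+) (ERROR|WARN|INFO|DEBUG)\] (.*)$/.exec(line);
  if (!m) return null;
  return { timestamp: m[1], level: m[2], message: m[3] };
}

const entry = parseLogLine(
  "[2025-10-10T16:45:04Z WARN] HTTP client not configured - missing API_KEY",
);
console.log(entry);
```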
Use cases:
- Debugging failed ingestion or streaming functions
- Spotting configuration warnings during development
- Monitoring the dev server while testing changes
Execute read-only SQL queries against your local ClickHouse database.
What you can ask for:
- Table schemas and column types
- Lists of tables and their engines
- Row counts and aggregate statistics
- Ad hoc SELECT queries over your data
Example prompts:
"What columns are in the UserEvents_1_0 table?"
```
Query executed successfully. Rows returned: 4

name       type              default_type  default_expression  comment
userId     String
eventType  String
timestamp  Float64
metadata   Nullable(String)
```

"List all tables and their engines"
```
Query executed successfully. Rows returned: 29

name                     engine
Bar                      MergeTree
BasicTypes               MergeTree
UserEvents_1_0           MergeTree
UserEvents_2_0           ReplacingMergeTree
ReplicatedMergeTreeTest  ReplicatedMergeTree
BarAggregated_MV         MaterializedView
...
```

"Count the number of rows in Bar"
```
Query executed successfully. Rows returned: 1

total_rows
0
```

Tip: Ask the AI to discover table names first with "What tables exist in my project?" before querying them. Table names are case-sensitive in ClickHouse.
Use cases:
- Inspecting schemas while developing data models
- Verifying that ingested data landed in the expected tables
- Running ad hoc analysis during debugging
Safety: Only read-only operations are permitted (SELECT, SHOW, DESCRIBE, EXPLAIN). Write operations (INSERT, UPDATE, DELETE) and DDL statements (CREATE, ALTER, DROP) are blocked.
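The safety rule amounts to checking the statement's leading keyword. Below is a minimal sketch of that kind of guard; Moose's real enforcement lives server-side, and this illustration is not its actual implementation.

```typescript
// Accept a query only if its first keyword is one of the read-only
// operations listed above (SELECT, SHOW, DESCRIBE, EXPLAIN).
const READ_ONLY_KEYWORDS = new Set(["SELECT", "SHOW", "DESCRIBE", "EXPLAIN"]);

function isReadOnlyQuery(sql: string): boolean {
  const first = sql.trim().split(/\s+/)[0]?.toUpperCase() ?? "";
  return READ_ONLY_KEYWORDS.has(first);
}

console.log(isReadOnlyQuery("SELECT count() AS total_rows FROM Bar")); // true
console.log(isReadOnlyQuery("DROP TABLE Bar")); // false
```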
Sample recent messages from Kafka/Redpanda streaming topics.
What you can ask for:
- A sample of recent messages from a named topic
- A specific number of messages
- Output in compact or pretty format
Example prompts:
"Sample 5 messages from the Bar topic"
```json
{
  "stream_name": "Bar",
  "message_count": 5,
  "partition_count": 1,
  "messages": [
    {
      "primaryKey": "e90c93be-d28b-47d6-b783-5725655c044f",
      "utcTimestamp": "+057480-11-24T20:39:59.000Z",
      "hasText": true,
      "textLength": 107
    },
    {
      "primaryKey": "b974f830-f28a-4a95-b61c-f65bfc607795",
      "utcTimestamp": "+057370-11-04T17:11:51.000Z",
      "hasText": true,
      "textLength": 166
    },
    ...
  ]
}
```

"What data is flowing through the BasicTypes stream?" (pretty format)
```
# Stream Sample: BasicTypes

Retrieved 3 message(s) from 1 partition(s)

Message 1
{
  "id": "bt-001",
  "timestamp": "2024-10-09T12:00:00Z",
  "stringField": "hello world",
  "numberField": 42,
  "booleanField": true
}

Message 2
{
  "id": "bt-002",
  "timestamp": "2024-10-09T12:05:00Z",
  "stringField": "test",
  "numberField": 100,
  "booleanField": false
}
...
```

Tip: Use "List all streaming topics" first to discover available streams in your project.
Use cases:
- Verifying that data is flowing through a topic
- Checking message shape before and after a streaming transformation
- Debugging an empty table by confirming whether the source stream has data
Proactively scan your Moose project for health issues and operational problems.
What it checks:
- Stuck ClickHouse mutations
- Replication lag on replicated tables
- Other diagnostics relevant to your infrastructure configuration
Example prompts:
"Check for any infrastructure issues"
```
infrastructure_type: clickhouse
issues[2]{severity,component_type,component_name,issue_type,message}:
  error,table,UserEvents,stuck_mutation,"Mutation 0000000003 stuck for 2h 15m"
  warning,table,ReplicatedTable,replication_lag,"Replication lag: 15s behind leader"
summary:
  total_issues: 2
  by_severity: {error: 1, warning: 1}
```

"Any problems with my tables?"
```
infrastructure_type: clickhouse
issues[0]:
summary:
  total_issues: 0
  by_severity: {}
  by_component: {}
```

Tip: Run this when troubleshooting errors or performance issues. It automatically checks relevant diagnostics based on your infrastructure configuration.
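To make the `by_severity` summary in the output above concrete, here is a sketch of how it can be derived from the issue rows. The field names follow the TOON columns shown; the helper itself is illustrative and not part of Moose.

```typescript
// Count issues per severity, reproducing the by_severity map in the summary.
type Issue = { severity: string; component_name: string; issue_type: string };

function summarizeBySeverity(issues: Issue[]): Record<string, number> {
  const bySeverity: Record<string, number> = {};
  for (const issue of issues) {
    bySeverity[issue.severity] = (bySeverity[issue.severity] ?? 0) + 1;
  }
  return bySeverity;
}

const summary = summarizeBySeverity([
  { severity: "error", component_name: "UserEvents", issue_type: "stuck_mutation" },
  { severity: "warning", component_name: "ReplicatedTable", issue_type: "replication_lag" },
]);
console.log(summary); // { error: 1, warning: 1 }
```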
Use cases:
- A first diagnostic pass when something misbehaves
- Catching stuck mutations or replication lag early
- Periodic health checks during development
You: "Can you explain this Moose project structure?"

AI: [Uses get_infra_map to see topology]

AI: This project has 11 topics, 28 tables, 11 API endpoints, and 3 functions. Data flows like this:

- INGRESS APIs send data to topics (e.g., INGRESS_Foo → Foo topic)
- Functions transform between topics (Foo → Foo__Bar → Bar)
- Topics sync to tables (Bar_local_Bar sync → local_Bar table)
- SQL resources create views (BarAggregated_MV)

All components are defined in app/ingest/models.ts and app/functions/.

You: "My users table is empty, can you help me debug?"

AI: Let me check the infrastructure and data flow.

[Uses get_infra_map search="user" to see the pipeline]
[Uses get_issues to check for infrastructure problems]
[Uses get_logs level="ERROR" to check for errors]
[Uses query_olap to verify the table is empty]
[Uses get_stream_sample to check if data is in the source stream]

AI: I found the issue: there are errors in the logs showing the streaming function is failing. The stream has data, but it's not reaching the table.

You: "Did my streaming function correctly transform the user data?"

AI: [Uses get_stream_sample on source stream]
[Uses query_olap on destination table]

AI: Yes, the transformation is working. I compared the source stream messages with the destination table records, and the enrichment fields are being added correctly.

If you're running Moose on a non-default port, update the MCP configuration:
```bash
export MOOSE_HTTP_PORT=8080
moose dev
```

Then update your MCP client configuration to use port 8080 instead of 4000.
To run development mode without the MCP server:
```bash
moose dev --mcp=false
```

The MCP server is designed for local development only. It provides direct access to your infrastructure and should never be exposed in production environments.
The MCP server:
- Runs only in development mode (`moose dev`)
- Is not available in production deployments (`moose prod`)

For general MooseStack documentation and code examples, use Context7 MCP.
Usage: Add use context7 to prompts about MooseStack (or create a rule in your workspace to automatically use context7 for anything MooseStack):
```
Show me how to create a streaming function in MooseStack using TypeScript. use context7
```

Best together: Context7 provides MooseStack framework documentation, while the Moose MCP server provides access to your local project's data, logs, and infrastructure.
- Verify `moose dev` is running: `curl http://localhost:4000/mcp`
- Check MCP-related logs: `moose logs --filter mcp`

If your AI client can't connect to the MCP server:
```bash
# Check if the dev server is running
curl http://localhost:4000/health

# Check MCP endpoint specifically
curl -X POST http://localhost:4000/mcp \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize"}'
```

If tools return no data:
- List your project's infrastructure: `moose ls`
- Inspect table contents: `moose peek <table_name>`