Observability
This page consolidates Moose observability for both local development and production environments.
Local Development
Metrics Console
Moose provides a console to view live metrics from your Moose application. To launch the console, run:
```shell
moose metrics
```
Use the arrow keys to move up and down rows in the endpoint table and press Enter to view more details about that endpoint.
Endpoint Metrics
Aggregated metrics for all endpoints:
Metric | Description |
---|---|
AVERAGE LATENCY | Average time in milliseconds it takes for a request to be processed by the endpoint |
TOTAL # OF REQUESTS | Total number of requests made to the endpoint |
REQUESTS PER SECOND | Average number of requests made per second to the endpoint |
DATA IN | Average number of bytes of data sent to all /ingest endpoints per second |
DATA OUT | Average number of bytes of data returned by all /api endpoints per second |
Individual endpoint metrics:
Metric | Description |
---|---|
LATENCY | Average time in milliseconds it takes for a request to be processed by the endpoint |
# OF REQUESTS RECEIVED | Total number of requests made to the endpoint |
# OF MESSAGES SENT TO KAFKA | Total number of messages sent to the Kafka topic |
Stream → Table Sync Metrics
Metric | Description |
---|---|
MSG READ | Total number of messages read from the Kafka topic by the ClickHouse sync process |
LAG | Number of messages written to the Kafka topic that the consumer has not yet processed |
MSG/SEC | Average number of messages read from the Kafka topic per second |
BYTES/SEC | Average number of bytes of data received by the ClickHouse consumer from the Kafka topic per second |
Streaming Transformation Metrics
For each streaming transformation:
Metric | Description |
---|---|
MSG IN | Total number of messages passed into the streaming function |
MSG IN/SEC | Average number of messages passed into the streaming function per second |
MSG OUT | Total number of messages returned by the streaming function |
MSG OUT/SEC | Average number of messages returned by the streaming function per second |
BYTES/SEC | Average number of bytes of data returned by the streaming function per second |
Production
Health Monitoring
Moose applications expose a health check endpoint at /health
that returns a 200 OK response when the application is operational. This endpoint is used by container orchestration systems like Kubernetes to determine the health of your application.
In production environments, we recommend configuring three types of probes:
- Startup Probe: Gives Moose time to initialize before receiving traffic
- Readiness Probe: Determines when the application is ready to receive traffic
- Liveness Probe: Detects when the application is in a deadlocked state and needs to be restarted
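The three probes above might be wired into a Kubernetes Deployment roughly as follows. This is an illustrative sketch: the container port (4000) and all timing thresholds are assumptions to tune for your deployment, not values Moose mandates.

```yaml
# Illustrative probe configuration for a Moose container.
# Port and thresholds are assumptions — adjust to your deployment.
containers:
  - name: moose
    ports:
      - containerPort: 4000
    startupProbe:
      httpGet:
        path: /health
        port: 4000
      failureThreshold: 30   # allow up to 30 * 5s for initialization
      periodSeconds: 5
    readinessProbe:
      httpGet:
        path: /health
        port: 4000
      periodSeconds: 10
    livenessProbe:
      httpGet:
        path: /health
        port: 4000
      periodSeconds: 15
      failureThreshold: 3    # restart after 3 consecutive failures
```

The startup probe's generous failure threshold keeps the liveness probe from killing the container while Moose is still initializing.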
See the Kubernetes Deployment Guide to learn how to configure health checks in your Kubernetes deployment.
Prometheus Metrics
Moose applications expose metrics in Prometheus format at the /metrics
endpoint. These metrics include:
- HTTP request latency histograms for each endpoint
- Request counts and error rates
- System metrics for the Moose process
Example metrics output:
```
# HELP latency Latency of HTTP requests.
# TYPE latency histogram
latency_sum{method="POST",path="ingest/UserActivity"} 0.025
latency_count{method="POST",path="ingest/UserActivity"} 2
latency_bucket{le="0.001",method="POST",path="ingest/UserActivity"} 0
latency_bucket{le="0.01",method="POST",path="ingest/UserActivity"} 0
latency_bucket{le="0.02",method="POST",path="ingest/UserActivity"} 1
latency_bucket{le="0.05",method="POST",path="ingest/UserActivity"} 1
latency_bucket{le="0.1",method="POST",path="ingest/UserActivity"} 1
latency_bucket{le="0.25",method="POST",path="ingest/UserActivity"} 1
latency_bucket{le="0.5",method="POST",path="ingest/UserActivity"} 1
latency_bucket{le="1.0",method="POST",path="ingest/UserActivity"} 1
latency_bucket{le="5.0",method="POST",path="ingest/UserActivity"} 1
latency_bucket{le="10.0",method="POST",path="ingest/UserActivity"} 1
latency_bucket{le="30.0",method="POST",path="ingest/UserActivity"} 1
latency_bucket{le="60.0",method="POST",path="ingest/UserActivity"} 1
latency_bucket{le="120.0",method="POST",path="ingest/UserActivity"} 1
latency_bucket{le="240.0",method="POST",path="ingest/UserActivity"} 1
latency_bucket{le="+Inf",method="POST",path="ingest/UserActivity"} 1
```
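The average latency for an endpoint falls out of the histogram by dividing `latency_sum` by `latency_count` — a quick sketch using the values from the example output above:

```python
# Average latency = latency_sum / latency_count, using the sample
# values from the example metrics output above.
latency_sum = 0.025   # total seconds spent across all requests
latency_count = 2     # number of requests observed

avg_latency_ms = (latency_sum / latency_count) * 1000
print(avg_latency_ms)  # → 12.5
```

In PromQL, the equivalent rolling average over a 5-minute window is `rate(latency_sum[5m]) / rate(latency_count[5m])`.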
You can scrape these metrics using a Prometheus server or any compatible monitoring system.
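A minimal Prometheus scrape job for a Moose instance might look like the following sketch; the target host and port are assumptions to replace with wherever your application is reachable:

```yaml
# prometheus.yml fragment — illustrative scrape job.
# "moose-app:4000" is a placeholder target, not a Moose default to rely on.
scrape_configs:
  - job_name: moose
    metrics_path: /metrics
    scrape_interval: 15s
    static_configs:
      - targets: ["moose-app:4000"]
```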
OpenTelemetry Integration
In production deployments, Moose can export telemetry data using OpenTelemetry. Enable via environment variables:
```shell
MOOSE_TELEMETRY__ENABLED=true
MOOSE_TELEMETRY__EXPORT_METRICS=true
```
When running in Kubernetes with an OpenTelemetry operator, you can configure automatic sidecar injection by adding annotations to your deployment:
```yaml
metadata:
  annotations:
    "sidecar.opentelemetry.io/inject": "true"
```
Logging
Configure structured logging via environment variables:
```shell
MOOSE_LOGGER__LEVEL=Info
MOOSE_LOGGER__STDOUT=true
MOOSE_LOGGER__FORMAT=Json
```
The JSON format is ideal for log aggregation systems (ELK Stack, Graylog, Loki, or cloud logging solutions).
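In a containerized deployment, these variables might be set in a compose file; this is an illustrative fragment in which the service name and image are placeholders:

```yaml
# docker-compose.yml fragment — service name and image are placeholders.
services:
  moose:
    image: your-moose-image:latest
    environment:
      MOOSE_LOGGER__LEVEL: Info
      MOOSE_LOGGER__STDOUT: "true"
      MOOSE_LOGGER__FORMAT: Json
```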
Production Monitoring Stack
Recommended components:
- Metrics Collection: Prometheus or cloud-native monitoring services
- Log Aggregation: ELK Stack, Loki, or cloud logging solutions
- Distributed Tracing: Jaeger or other OpenTelemetry-compatible backends
- Alerting: Alertmanager or cloud provider alerting
Error Tracking
Integrate with systems like Sentry via environment variables:
```shell
SENTRY_DSN=https://your-sentry-dsn
RUST_BACKTRACE=1
```
Managed Moose in Production
Want this managed in production for you? Check out Boreal Cloud (from the makers of the Moose Stack).