This page consolidates Moose observability for both local development and production environments.
Moose provides a console to view live metrics from your Moose application. To launch the console, run:

```bash
moose metrics
```

Use the arrow keys to move up and down the rows of the endpoint table, and press Enter to view more details about that endpoint.
Aggregated metrics for all endpoints:

| Metric | Description |
|---|---|
| AVERAGE LATENCY | Average time in milliseconds to process a request, across all endpoints |
| TOTAL # OF REQUESTS | Total number of requests received across all endpoints |
| REQUESTS PER SECOND | Average number of requests received per second across all endpoints |
| DATA IN | Average number of bytes of data sent to all /ingest endpoints per second |
| DATA OUT | Average number of bytes of data returned by all /api endpoints per second |
Individual endpoint metrics:

| Metric | Description |
|---|---|
| LATENCY | Average time in milliseconds it takes for a request to be processed by the endpoint |
| # OF REQUESTS RECEIVED | Total number of requests made to the endpoint |
| # OF MESSAGES SENT TO KAFKA | Total number of messages sent to the Kafka topic |
For each Kafka topic:

| Metric | Description |
|---|---|
| MSG READ | Total number of messages sent from the /ingest API endpoint to the Kafka topic |
| LAG | Number of messages written to the topic that the consumer has not yet read |
| MSG/SEC | Average number of messages sent from the /ingest API endpoint to the Kafka topic per second |
| BYTES/SEC | Average number of bytes of data received by the ClickHouse consumer from the Kafka topic per second |
For each streaming transformation:

| Metric | Description |
|---|---|
| MSG IN | Total number of messages passed into the streaming function |
| MSG IN/SEC | Average number of messages passed into the streaming function per second |
| MSG OUT | Total number of messages returned by the streaming function |
| MSG OUT/SEC | Average number of messages returned by the streaming function per second |
| BYTES/SEC | Average number of bytes of data returned by the streaming function per second |
Moose applications expose a health check endpoint at /health that returns a 200 OK response when the application is operational. This endpoint is used by container orchestration systems like Kubernetes to determine the health of your application.
In production environments, we recommend configuring all three probe types: startup, liveness, and readiness.
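As an illustration, the probes might all target the `/health` endpoint described above. This is a minimal sketch, assuming the Moose HTTP server listens on port 4000 and that `/health` is appropriate for all three probe types; the image name is a placeholder:

```yaml
# Sketch of a container spec fragment with all three probe types.
# Assumes the Moose HTTP server listens on port 4000 and that /health
# is suitable for startup, liveness, and readiness checks.
containers:
  - name: moose
    image: your-registry/your-moose-app:latest   # placeholder image
    ports:
      - containerPort: 4000
    startupProbe:
      httpGet:
        path: /health
        port: 4000
      failureThreshold: 30
      periodSeconds: 2
    livenessProbe:
      httpGet:
        path: /health
        port: 4000
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /health
        port: 4000
      periodSeconds: 5
```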
Moose applications expose metrics in Prometheus format at the `/metrics` endpoint. These include HTTP request latency histograms (sum, count, and buckets) labeled by method and path.
Example metrics output:
```
# HELP latency Latency of HTTP requests.
# TYPE latency histogram
latency_sum{method="POST",path="ingest/UserActivity"} 0.025
latency_count{method="POST",path="ingest/UserActivity"} 2
latency_bucket{le="0.001",method="POST",path="ingest/UserActivity"} 0
latency_bucket{le="0.01",method="POST",path="ingest/UserActivity"} 0
latency_bucket{le="0.02",method="POST",path="ingest/UserActivity"} 1
latency_bucket{le="0.05",method="POST",path="ingest/UserActivity"} 1
latency_bucket{le="0.1",method="POST",path="ingest/UserActivity"} 1
latency_bucket{le="0.25",method="POST",path="ingest/UserActivity"} 1
latency_bucket{le="0.5",method="POST",path="ingest/UserActivity"} 1
latency_bucket{le="1.0",method="POST",path="ingest/UserActivity"} 1
latency_bucket{le="5.0",method="POST",path="ingest/UserActivity"} 1
latency_bucket{le="10.0",method="POST",path="ingest/UserActivity"} 1
latency_bucket{le="30.0",method="POST",path="ingest/UserActivity"} 1
latency_bucket{le="60.0",method="POST",path="ingest/UserActivity"} 1
latency_bucket{le="120.0",method="POST",path="ingest/UserActivity"} 1
latency_bucket{le="240.0",method="POST",path="ingest/UserActivity"} 1
latency_bucket{le="+Inf",method="POST",path="ingest/UserActivity"} 1
```
You can scrape these metrics using a Prometheus server or any compatible monitoring system.
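For example, a minimal Prometheus scrape configuration might look like the following sketch; the job name, target host, and port 4000 are assumptions to adapt to your deployment:

```yaml
# prometheus.yml (sketch): scrape the Moose /metrics endpoint.
scrape_configs:
  - job_name: "moose"                  # arbitrary job name
    metrics_path: /metrics
    scrape_interval: 15s
    static_configs:
      - targets: ["moose-app:4000"]    # assumed host:port of your Moose service
```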
In production deployments, Moose can export telemetry data using OpenTelemetry. Enable via environment variables:
```bash
MOOSE_TELEMETRY__ENABLED=true
MOOSE_TELEMETRY__EXPORT_METRICS=true
```
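In a containerized deployment, these variables are typically set on the container spec; a minimal Kubernetes sketch, with placeholder container and image names:

```yaml
# Deployment pod template fragment (sketch): enable OpenTelemetry export.
containers:
  - name: moose
    image: your-registry/your-moose-app:latest   # placeholder image
    env:
      - name: MOOSE_TELEMETRY__ENABLED
        value: "true"
      - name: MOOSE_TELEMETRY__EXPORT_METRICS
        value: "true"
```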
When running in Kubernetes with an OpenTelemetry operator, you can configure automatic sidecar injection by adding annotations to your deployment:
```yaml
metadata:
  annotations:
    "sidecar.opentelemetry.io/inject": "true"
```
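With the OpenTelemetry Operator, this annotation typically goes on the Deployment's pod template so the collector sidecar is injected into each pod; a sketch with placeholder names:

```yaml
# Deployment fragment (sketch): sidecar injection annotation on the pod template.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: moose-app
spec:
  selector:
    matchLabels:
      app: moose
  template:
    metadata:
      labels:
        app: moose
      annotations:
        "sidecar.opentelemetry.io/inject": "true"
    spec:
      containers:
        - name: moose
          image: your-registry/your-moose-app:latest   # placeholder image
```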
Configure structured logging via environment variables:

```bash
MOOSE_LOGGER__LEVEL=Info
MOOSE_LOGGER__STDOUT=true
MOOSE_LOGGER__FORMAT=Json
```
The JSON format is ideal for log aggregation systems (ELK Stack, Graylog, Loki, or cloud logging solutions).
For error monitoring, we recommend integrating with a system like Sentry, configured via environment variables:
```bash
SENTRY_DSN=https://your-sentry-dsn
RUST_BACKTRACE=1
```
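Because the DSN is a credential, one option in Kubernetes is to store it in a Secret and reference it from the container environment; a sketch with placeholder names:

```yaml
# Sketch: keep the Sentry DSN in a Secret and expose it as SENTRY_DSN.
apiVersion: v1
kind: Secret
metadata:
  name: sentry-credentials
type: Opaque
stringData:
  SENTRY_DSN: "https://your-sentry-dsn"    # placeholder DSN
---
# Container fragment referencing the Secret.
containers:
  - name: moose
    image: your-registry/your-moose-app:latest   # placeholder image
    env:
      - name: SENTRY_DSN
        valueFrom:
          secretKeyRef:
            name: sentry-credentials
            key: SENTRY_DSN
      - name: RUST_BACKTRACE
        value: "1"
```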