Deploying a Moose application with all its dependencies can be challenging and time-consuming. You need to properly configure multiple services, ensure they communicate with each other, and manage their lifecycle.
Docker Compose solves this problem by allowing you to deploy your entire stack with a single command.
This guide shows you how to set up a production-ready Moose environment on a single server using Docker Compose, with proper security, monitoring, and maintenance practices.
This guide describes a single-server deployment; high availability (HA) deployments require additional infrastructure and configuration that are beyond the scope of this guide. We also offer Boreal, a managed HA deployment option for Moose.
Before you begin, you'll need an Ubuntu server with SSH and sudo access, and either an existing Moose application or a plan to create one with moose init as shown below.
The Moose stack consists of your Moose application, ClickHouse and Redis (both required), and, optionally, Redpanda for event streaming and Temporal for workflow orchestration.
First, install Docker on your Ubuntu server:
# Update the apt package index
sudo apt-get update
# Install packages to allow apt to use a repository over HTTPS
sudo apt-get install -y \
apt-transport-https \
ca-certificates \
curl \
gnupg \
lsb-release
# Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
# Set up the stable repository
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Update apt package index again
sudo apt-get update
# Install Docker Engine
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
Next, install Node.js or Python, depending on your Moose application:
# For Node.js applications
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt-get install -y nodejs
# OR for Python applications
sudo apt-get install -y python3.12 python3-pip
To prevent Docker logs from filling up your disk space, configure log rotation:
sudo mkdir -p /etc/docker
sudo vim /etc/docker/daemon.json
Add the following configuration:
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  }
}
Restart Docker to apply the changes:
sudo systemctl restart docker
To run Docker commands without sudo:
# Add your user to the docker group
sudo usermod -aG docker $USER
# Apply the changes (log out and back in, or run this)
newgrp docker
If you want to set up CI/CD automation, you can install a GitHub Actions runner:
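The download and registration commands, including the current runner version and a registration token, come from your repository's Settings → Actions → Runners page on GitHub. The steps look roughly like this sketch, where VERSION, OWNER/REPO, and TOKEN are placeholders rather than real values:
# Placeholders only - copy the exact commands from GitHub's "New self-hosted runner" page
mkdir actions-runner && cd actions-runner
curl -o actions-runner-linux-x64.tar.gz -L https://github.com/actions/runner/releases/download/vVERSION/actions-runner-linux-x64-VERSION.tar.gz
tar xzf actions-runner-linux-x64.tar.gz
./config.sh --url https://github.com/OWNER/REPO --token TOKEN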
To configure the runner as a service (to run automatically):
cd actions-runner
sudo ./svc.sh install
sudo ./svc.sh start
If you already have a Moose application, you can skip this section.
Copy your Moose project to the server, then build the application there with the --docker flag so that the built image is available on the server.
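For example, assuming your project lives in ./my-moose-app locally and the server is reachable as your-server (both names are illustrative), rsync can copy the project over:
# Illustrative names - adjust the source path, user, host, and destination
rsync -avz --exclude node_modules --exclude .git ./my-moose-app/ user@your-server:/opt/my-moose-app/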
bash -i <(curl -fsSL https://fiveonefour.com/install.sh) moose
Please follow the initialization instructions for your language.
moose init test-ts typescript
cd test-ts
npm install
or
moose init test-py python
cd test-py
pip install -r requirements.txt
Then build the Docker image for your server's architecture:
moose build --docker --amd64
or
moose build --docker --arm64
Check that the image was created:
docker images
First, create a file called .env in your project directory to specify component versions:
# Create and open the .env file
vim .env
Add the following content to the .env file:
# Version configuration for components
POSTGRESQL_VERSION=14.0
TEMPORAL_VERSION=1.22.0
TEMPORAL_UI_VERSION=2.20.0
REDIS_VERSION=7
CLICKHOUSE_VERSION=25.4
REDPANDA_VERSION=v24.3.13
REDPANDA_CONSOLE_VERSION=v3.1.0
Additionally, create a .env.prod file for your Moose application-specific secrets and configuration:
# Create and open the .env.prod file
vim .env.prod
Add your application-specific environment variables:
# Application-specific environment variables
APP_SECRET=your_app_secret
# Add other application variables here
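Both files contain credentials, so it is worth tightening their permissions; this is a general precaution rather than a Moose requirement:
# Restrict the environment files to the owning user
chmod 600 .env .env.prod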
Create a file called docker-compose.yml in the same directory:
# Create and open the docker-compose.yml file
vim docker-compose.yml
Add the following content to the file:
name: moose-stack

volumes:
  # Required volumes
  clickhouse-0-data: null
  clickhouse-0-logs: null
  redis-0: null
  # Optional volumes
  redpanda-0: null
  postgresql-data: null

configs:
  temporal-config:
    # Using the "content" property to inline the config
    content: |
      limit.maxIDLength:
        - value: 255
          constraints: {}
      system.forceSearchAttributesCacheRefreshOnRead:
        - value: true # Dev setup only. Please don't turn this on in production.
          constraints: {}

services:
  # REQUIRED SERVICES

  # ClickHouse - Required analytics database
  clickhouse-0:
    container_name: clickhouse-0
    restart: always
    image: clickhouse/clickhouse-server:${CLICKHOUSE_VERSION}
    volumes:
      - clickhouse-0-data:/var/lib/clickhouse/
      - clickhouse-0-logs:/var/log/clickhouse-server/
    environment:
      # Enable SQL-driven access control and user management
      CLICKHOUSE_ALLOW_INTROSPECTION_FUNCTIONS: 1
      # Default admin credentials
      CLICKHOUSE_USER: admin
      CLICKHOUSE_PASSWORD: adminpassword
      # Disable default user
      CLICKHOUSE_DEFAULT_ACCESS_MANAGEMENT: 1
      # Database setup
      CLICKHOUSE_DB: moose
    # Uncomment this if you want to access clickhouse from outside the docker network
    # ports:
    #   - 8123:8123
    #   - 9000:9000
    healthcheck:
      test: wget --no-verbose --tries=1 --spider http://localhost:8123/ping || exit 1
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 30s
    ulimits:
      nofile:
        soft: 262144
        hard: 262144
    networks:
      - moose-network

  # Redis - Required for caching and pub/sub
  redis-0:
    restart: always
    image: redis:${REDIS_VERSION}
    volumes:
      - redis-0:/data
    command: redis-server --save 20 1 --loglevel warning
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - moose-network

  # OPTIONAL SERVICES

  # --- BEGIN REDPANDA SERVICES (OPTIONAL) ---
  # Remove this section if you don't need event streaming
  redpanda-0:
    restart: always
    command:
      - redpanda
      - start
      - --kafka-addr internal://0.0.0.0:9092,external://0.0.0.0:19092
      # Address the broker advertises to clients that connect to the Kafka API.
      # Use the internal addresses to connect to the Redpanda brokers
      # from inside the same Docker network.
      # Use the external addresses to connect to the Redpanda brokers
      # from outside the Docker network.
      - --advertise-kafka-addr internal://redpanda-0:9092,external://localhost:19092
      - --pandaproxy-addr internal://0.0.0.0:8082,external://0.0.0.0:18082
      # Address the broker advertises to clients that connect to the HTTP Proxy.
      - --advertise-pandaproxy-addr internal://redpanda-0:8082,external://localhost:18082
      - --schema-registry-addr internal://0.0.0.0:8081,external://0.0.0.0:18081
      # Redpanda brokers use the RPC API to communicate with each other internally.
      - --rpc-addr redpanda-0:33145
      - --advertise-rpc-addr redpanda-0:33145
      # Mode dev-container uses well-known configuration properties for development in containers.
      - --mode dev-container
      # Tells Seastar (the framework Redpanda uses under the hood) to use 1 core on the system.
      - --smp 1
      - --default-log-level=info
    image: docker.redpanda.com/redpandadata/redpanda:${REDPANDA_VERSION}
    container_name: redpanda-0
    volumes:
      - redpanda-0:/var/lib/redpanda/data
    networks:
      - moose-network
    healthcheck:
      test: ["CMD-SHELL", "rpk cluster health | grep -q 'Healthy:.*true'"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 30s

  # Optional Redpanda Console for visualizing the cluster
  redpanda-console:
    restart: always
    container_name: redpanda-console
    image: docker.redpanda.com/redpandadata/console:${REDPANDA_CONSOLE_VERSION}
    entrypoint: /bin/sh
    command: -c 'echo "$$CONSOLE_CONFIG_FILE" > /tmp/config.yml; /app/console'
    environment:
      CONFIG_FILEPATH: /tmp/config.yml
      CONSOLE_CONFIG_FILE: |
        kafka:
          brokers: ["redpanda-0:9092"]
        # Schema registry config moved outside of kafka section
        schemaRegistry:
          enabled: true
          urls: ["http://redpanda-0:8081"]
        redpanda:
          adminApi:
            enabled: true
            urls: ["http://redpanda-0:9644"]
    ports:
      - 8080:8080
    depends_on:
      - redpanda-0
    healthcheck:
      test: ["CMD", "wget", "--spider", "--quiet", "http://localhost:8080/admin/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 10s
    networks:
      - moose-network
  # --- END REDPANDA SERVICES ---

  # --- BEGIN TEMPORAL SERVICES (OPTIONAL) ---
  # Remove this section if you don't need workflow orchestration

  # Temporal PostgreSQL database
  postgresql:
    container_name: temporal-postgresql
    environment:
      POSTGRES_PASSWORD: temporal
      POSTGRES_USER: temporal
    image: postgres:${POSTGRESQL_VERSION}
    restart: always
    volumes:
      - postgresql-data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U temporal"]
      interval: 10s
      timeout: 5s
      retries: 3
    networks:
      - moose-network

  # Temporal server
  # For initial setup, use temporalio/auto-setup image
  # For production, switch to temporalio/server after first run
  temporal:
    container_name: temporal
    depends_on:
      postgresql:
        condition: service_healthy
    environment:
      # Database configuration
      - DB=postgres12
      - DB_PORT=5432
      - POSTGRES_USER=temporal
      - POSTGRES_PWD=temporal
      - POSTGRES_SEEDS=postgresql
      # Namespace configuration
      - DEFAULT_NAMESPACE=moose-workflows
      - DEFAULT_NAMESPACE_RETENTION=72h
      # Auto-setup options - set to false after initial setup
      - AUTO_SETUP=true
      - SKIP_SCHEMA_SETUP=false
      # Service configuration - all services by default
      # For high-scale deployments, run these as separate containers
      # - SERVICES=history,matching,frontend,worker
      # Logging and metrics
      - LOG_LEVEL=info
      # Addresses
      - TEMPORAL_ADDRESS=temporal:7233
      - DYNAMIC_CONFIG_FILE_PATH=/etc/temporal/config/dynamicconfig/development-sql.yaml
    # For initial deployment, use the auto-setup image
    image: temporalio/auto-setup:${TEMPORAL_VERSION}
    # For production, after initial setup, switch to server image:
    # image: temporalio/server:${TEMPORAL_VERSION}
    restart: always
    ports:
      - 7233:7233
    # Volume for dynamic configuration - essential for production
    configs:
      - source: temporal-config
        target: /etc/temporal/config/dynamicconfig/development-sql.yaml
        mode: 0444
    networks:
      - moose-network
    healthcheck:
      test: ["CMD-SHELL", "tctl --ad temporal:7233 cluster health | grep -q SERVING"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 30s

  # Temporal Admin Tools - useful for maintenance and debugging
  temporal-admin-tools:
    container_name: temporal-admin-tools
    depends_on:
      - temporal
    environment:
      - TEMPORAL_ADDRESS=temporal:7233
      - TEMPORAL_CLI_ADDRESS=temporal:7233
    image: temporalio/admin-tools:${TEMPORAL_VERSION}
    restart: "no"
    networks:
      - moose-network
    stdin_open: true
    tty: true

  # Temporal Web UI
  temporal-ui:
    container_name: temporal-ui
    depends_on:
      - temporal
    environment:
      - TEMPORAL_ADDRESS=temporal:7233
      - TEMPORAL_CORS_ORIGINS=http://localhost:3000
    image: temporalio/ui:${TEMPORAL_UI_VERSION}
    restart: always
    ports:
      - 8081:8080
    networks:
      - moose-network
    healthcheck:
      test: ["CMD", "wget", "--spider", "--quiet", "http://localhost:8080/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 10s
  # --- END TEMPORAL SERVICES ---

  # Your Moose application
  moose:
    image: moose-df-deployment-x86_64-unknown-linux-gnu:latest # Update with your image name
    depends_on:
      # Required dependencies
      - clickhouse-0
      - redis-0
      # Optional dependencies - remove if not using
      - redpanda-0
      - temporal
    restart: always
    environment:
      # Logging and debugging
      RUST_BACKTRACE: "1"
      MOOSE_LOGGER__LEVEL: "Info"
      MOOSE_LOGGER__STDOUT: "true"
      # Required services configuration
      # ClickHouse configuration
      MOOSE_CLICKHOUSE_CONFIG__DB_NAME: "moose"
      MOOSE_CLICKHOUSE_CONFIG__USER: "moose"
      MOOSE_CLICKHOUSE_CONFIG__PASSWORD: "your_moose_password"
      MOOSE_CLICKHOUSE_CONFIG__HOST: "clickhouse-0"
      MOOSE_CLICKHOUSE_CONFIG__HOST_PORT: "8123"
      # Redis configuration
      MOOSE_REDIS_CONFIG__URL: "redis://redis-0:6379"
      MOOSE_REDIS_CONFIG__KEY_PREFIX: "moose"
      # Optional services configuration
      # Redpanda configuration (remove if not using Redpanda)
      MOOSE_REDPANDA_CONFIG__BROKER: "redpanda-0:9092"
      MOOSE_REDPANDA_CONFIG__MESSAGE_TIMEOUT_MS: "1000"
      MOOSE_REDPANDA_CONFIG__RETENTION_MS: "30000"
      MOOSE_REDPANDA_CONFIG__NAMESPACE: "moose"
      # Temporal configuration (remove if not using Temporal)
      MOOSE_TEMPORAL_CONFIG__TEMPORAL_HOST: "temporal:7233"
      MOOSE_TEMPORAL_CONFIG__NAMESPACE: "moose-workflows"
      # HTTP Server configuration
      MOOSE_HTTP_SERVER_CONFIG__HOST: 0.0.0.0
    ports:
      - 4000:4000
    env_file:
      - path: ./.env.prod
        required: true
    networks:
      - moose-network
    healthcheck:
      test: ["CMD-SHELL", "curl -s http://localhost:4000/health | grep -q '\"unhealthy\": \\[\\]' && echo 'Healthy'"]
      interval: 30s
      timeout: 5s
      retries: 10
      start_period: 60s

# Define the network for all services
networks:
  moose-network:
    driver: bridge

At this point, don't start the services yet. First, we need to configure the individual services for production use as described in the following sections.
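Before moving on, you can optionally confirm that the file parses and that the versions from .env interpolate correctly; docker compose config validates the configuration without starting anything:
# Validate docker-compose.yml and .env interpolation without starting containers
docker compose config --quiet && echo "docker-compose.yml is valid"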
For production ClickHouse deployment, we'll use environment variables to configure users and access control (as recommended in the official Docker image documentation):
# Start just the ClickHouse container
docker compose up -d clickhouse-0
# Connect to ClickHouse with the admin user
docker exec -it clickhouse-0 clickhouse-client --user admin --password adminpassword
-- Create the moose application user
CREATE USER moose IDENTIFIED BY 'your_moose_password';
GRANT ALL ON moose.* TO moose;

-- Create a read-only user for BI tools (optional)
CREATE USER power_bi IDENTIFIED BY 'your_powerbi_password' SETTINGS PROFILE 'readonly';
GRANT SHOW TABLES, SELECT ON moose.* TO power_bi;

To exit the ClickHouse client, type \q and press Enter.
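While still connected as admin, you can optionally confirm that the users were created with the expected privileges; SHOW GRANTS is standard ClickHouse SQL:
-- Optional verification of the users created above
SHOW GRANTS FOR moose;
SHOW GRANTS FOR power_bi;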
Update your Moose environment variables to use the new moose user:
vim docker-compose.yml
In the moose service, make sure the ClickHouse credentials point to the new moose user:
MOOSE_CLICKHOUSE_CONFIG__USER: "moose"
MOOSE_CLICKHOUSE_CONFIG__PASSWORD: "your_moose_password"
rather than the admin account:
MOOSE_CLICKHOUSE_CONFIG__USER: "admin"
MOOSE_CLICKHOUSE_CONFIG__PASSWORD: "adminpassword"
For additional security in production, consider using Docker secrets for passwords.
Restart the ClickHouse container to apply the changes:
docker compose restart clickhouse-0
# Connect with the new moose user
docker exec -it clickhouse-0 clickhouse-client --user moose --password your_moose_password

-- Test access by listing tables
SHOW TABLES FROM moose;

-- Exit the clickhouse client
\q

If you can connect successfully and run commands with the new user, your ClickHouse configuration is working properly.
For production, it's recommended to restrict external access to Redpanda. In the Compose file above, the redpanda-0 broker publishes no ports to the host, so it is reachable only from inside the Docker network. For this simple deployment we keep it that way, closed to the outside world with no authentication required; if you have added external port mappings, remove them from your Docker Compose file (see the sketch below).
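If external listeners were previously published, removing them looks roughly like this in the redpanda-0 service (a sketch of what to delete, not something to add):
# In the redpanda-0 service, do NOT publish the external listener ports to the host.
# If a ports section like the following exists, remove or comment it out:
# ports:
#   - 19092:19092   # external Kafka API
#   - 18081:18081   # external schema registry
#   - 18082:18082   # external HTTP proxy
# With no published ports, the broker is reachable only on the internal moose-network.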
If your Moose application uses Temporal for workflow orchestration, the configuration above includes all necessary services based on the official Temporal Docker Compose examples.
If you're not using Temporal, simply remove the Temporal-related services (postgresql, temporal, temporal-admin-tools, temporal-ui) and the Temporal environment variables from the docker-compose.yml file.
Deploying Temporal involves a two-phase process: initial setup followed by production operation. Here are step-by-step instructions for each phase:
Start the PostgreSQL database and check its status:
docker compose up -d postgresql
docker compose ps postgresql
Look for healthy in the output before proceeding.
Next, start the Temporal server:
docker compose up -d temporal
During this phase, Temporal's auto-setup will create the required databases and schemas in PostgreSQL and register the default namespace.
Check that the Temporal container is healthy:
docker compose ps temporal
Then start the admin tools and web UI:
docker compose up -d temporal-admin-tools temporal-ui
# Register the moose-workflows namespace with a 3-day retention period
docker compose exec temporal-admin-tools tctl namespace register --retention 72h moose-workflows
Verify that the namespace was created:
# List all namespaces
docker compose exec temporal-admin-tools tctl namespace list
# Describe your namespace
docker compose exec temporal-admin-tools tctl namespace describe moose-workflows
You should see details about the namespace including its retention policy.
After successful initialization, modify your configuration for production use:
First, stop the Temporal services:
docker compose stop temporal temporal-ui temporal-admin-tools
Then edit docker-compose.yml: switch the image from temporalio/auto-setup to temporalio/server and set SKIP_SCHEMA_SETUP=true. Example change:
# From:
image: temporalio/auto-setup:${TEMPORAL_VERSION}
# To:
image: temporalio/server:${TEMPORAL_VERSION}

# And change:
- AUTO_SETUP=true
- SKIP_SCHEMA_SETUP=false
# To:
- AUTO_SETUP=false
- SKIP_SCHEMA_SETUP=true

Restart the Temporal services and confirm they come back up:
docker compose up -d temporal temporal-ui temporal-admin-tools
docker compose ps

Start all services with Docker Compose:
docker compose up -d
For production, create a systemd service to ensure Docker Compose starts automatically on system boot:
sudo vim /etc/systemd/system/moose-stack.service
Add the following unit definition:
[Unit]
Description=Moose Stack
Requires=docker.service
After=docker.service
[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/path/to/your/compose/directory
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down
TimeoutStartSec=0
[Install]
WantedBy=multi-user.target
sudo systemctl enable moose-stack.service
sudo systemctl start moose-stack.service
Together, Docker Compose, the systemd unit, and the optional GitHub Actions runner give you a smooth deployment process.
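If you configured the self-hosted runner earlier, a workflow along these lines can rebuild and redeploy on every push. This is only a sketch: the file path (.github/workflows/deploy.yml), branch name, and compose directory are assumptions to adapt to your repository.
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    # Runs on the self-hosted runner installed on your server
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v4
      - name: Build the Moose Docker image
        run: moose build --docker --amd64
      - name: Redeploy the stack
        run: docker compose -f /path/to/your/compose/directory/docker-compose.yml up -d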
Alternatively, for manual deployment:
moose build --docker
docker compose up -d
To catch unexpected outages or performance issues early, set up proper monitoring:
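A minimal starting point is a small script run from cron that flags any container whose healthcheck reports unhealthy or that has stopped; how you alert on the output (email, Slack, and so on) is up to you.
#!/usr/bin/env bash
# check-stack.sh - minimal monitoring sketch for the Moose stack.
# Lists containers that are unhealthy or have exited and exits non-zero so cron can alert.
unhealthy=$(docker ps --filter "health=unhealthy" --format '{{.Names}}')
stopped=$(docker ps -a --filter "status=exited" --format '{{.Names}}')
if [ -n "$unhealthy$stopped" ]; then
  echo "Unhealthy containers: $unhealthy"
  echo "Stopped containers:   $stopped"
  exit 1
fi
echo "All containers are running and healthy"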