# Moose / Apis Documentation – TypeScript

## Included Files

1. moose/apis/admin-api.mdx
2. moose/apis/analytics-api.mdx
3. moose/apis/auth.mdx
4. moose/apis/ingest-api.mdx
5. moose/apis/openapi-sdk.mdx
6. moose/apis/trigger-api.mdx

## admin-api

Source: moose/apis/admin-api.mdx

# Coming Soon

---

## APIs

Source: moose/apis/analytics-api.mdx
APIs for Moose

# APIs

## Overview

APIs are functions that run on your server and are automatically exposed as HTTP `GET` endpoints. They are designed to read data from your OLAP database. Out of the box, these APIs provide:

- Automatic validation and type conversion for your query parameters (sent in the URL) and response body
- A managed database client connection
- Automatic OpenAPI documentation generation

Common use cases include:

- Powering user-facing analytics, dashboards, and other front-end components
- Enabling AI tools to interact with your data
- Building custom APIs for your internal tools

### Enabling APIs

Analytics APIs are enabled by default. To explicitly control this feature in your `moose.config.toml`:

```toml filename="moose.config.toml" copy
[features]
apis = true
```

### Basic Usage

```ts filename="ExampleApi.ts" copy
import { Api } from "@514labs/moose-lib";
import { SourcePipeline } from "./SourcePipeline";

// Define the query parameters
interface QueryParams {
  filterField: string;
  maxResults: number;
}

// Model the query result type
interface ResultItem {
  id: number;
  name: string;
  value: number;
}

const SourceTable = SourcePipeline.table!; // Use `!` to assert that the table is not null
const cols = SourceTable.columns;

// Define the result type as an array of the result item type
const exampleApi = new Api<QueryParams, ResultItem[]>(
  "example_endpoint",
  async ({ filterField, maxResults }: QueryParams, { client, sql }) => {
    const query = sql`
      SELECT
        ${cols.id},
        ${cols.name},
        ${cols.value}
      FROM ${SourceTable}
      WHERE category = ${filterField}
      LIMIT ${maxResults}`;

    // Set the result type to the type of each row in the result set
    const resultSet = await client.query.execute<ResultItem>(query);

    // Return the result set as an array of the result item type
    return await resultSet.json();
  }
);
```

```ts filename="SourcePipeline.ts" copy
import { IngestPipeline } from "@514labs/moose-lib";

interface SourceSchema {
  id: number;
  name: string;
  value: number;
}

export const SourcePipeline = new IngestPipeline<SourceSchema>("source", {
  ingestApi: true,
  stream: true,
  table: true,
});
```

The `Api` class takes:

- Route name: The URL path to access your API (e.g., `"example_endpoint"`)
- Handler function: Processes requests with typed parameters and returns the result

The generic type parameters specify:

- `QueryParams`: The structure of accepted URL parameters
- `ResponseBody`: The exact shape of your API's response data

You can name these types anything you want. The first type generates validation for query parameters, while the second defines the response structure for OpenAPI documentation.

## Type Validation

Model your query parameters and response body as TypeScript interfaces; Moose uses them to validate and convert incoming URL parameters at runtime and to document the response shape.

### Modeling Query Parameters

Define your API's parameters as a TypeScript interface:

```ts filename="ExampleQueryParams.ts" copy
interface QueryParams {
  filterField: string;
  maxResults: number;
  optionalParam?: string; // Not required for client to provide
}
```

Moose automatically handles:

- Runtime validation
- Clear error messages for invalid parameters
- OpenAPI documentation generation

Complex nested objects and arrays are not supported. Analytics APIs are `GET` endpoints designed to be simple and lightweight.
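For example, once an API like the one above is running under `moose dev`, parameter validation happens before your handler executes. A minimal sketch of the client-side behavior, assuming the local dev defaults used elsewhere in these docs (port 4000, routes served under `/api`):

```ts
// Valid request: parameters are validated and converted to their declared types
const ok = await fetch(
  "http://localhost:4000/api/example_endpoint?filterField=books&maxResults=5"
);
console.log(ok.status); // 200

// Invalid request: maxResults is not a number, so the handler never runs
// and Moose responds with a 400 validation error
const bad = await fetch(
  "http://localhost:4000/api/example_endpoint?filterField=books&maxResults=abc"
);
console.log(bad.status); // 400
```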
### Adding Advanced Type Validation

Moose uses [Typia](https://typia.io/) to extract type definitions and provide runtime validation. Use Typia's tags for more complex validation:

```ts filename="ExampleQueryParams.ts" copy
import { tags } from "typia";

interface QueryParams {
  filterField: string;
  // Ensure maxResults is a positive integer
  maxResults: number & tags.Type<"int64"> & tags.Minimum<1>;
}
```

### Common Validation Options

```ts filename="ValidationExamples.ts" copy
interface QueryParams {
  // Numeric validations
  id: number & tags.Type<"uint32">; // Positive integer (0 to 4,294,967,295)
  age: number & tags.Minimum<18> & tags.Maximum<120>; // Range: 18 <= age <= 120
  price: number & tags.ExclusiveMinimum<0> & tags.ExclusiveMaximum<1000>; // Range: 0 < price < 1000
  discount: number & tags.MultipleOf<0.5>; // Must be a multiple of 0.5

  // String validations
  username: string & tags.MinLength<3> & tags.MaxLength<20>; // Length between 3-20 characters
  email: string & tags.Format<"email">; // Valid email format
  zipCode: string & tags.Pattern<"^[0-9]{5}$">; // 5 digits
  uuid: string & tags.Format<"uuid">; // Valid UUID
  ipAddress: string & tags.Format<"ipv4">; // Valid IPv4 address

  // Date validations
  startDate: string & tags.Format<"date">; // YYYY-MM-DD format

  // Literal validation
  status: "active" | "pending" | "inactive"; // Must be one of these values

  // Optional parameters
  limit?: number & tags.Type<"uint32"> & tags.Maximum<100>; // Optional; if provided: positive integer <= 100

  // Combined validations
  searchTerm?: (string & tags.MinLength<3>) | null; // Either null or a string with >= 3 characters
}
```

Notice these are just regular TypeScript intersection and union types. For a full list of validation options, see the [Typia documentation](https://typia.io/api/tags).

You can derive a safe `orderBy` union from your actual table columns and use it directly in SQL:

```ts filename="ValidationExamples.ts" copy
interface MyTableSchema {
  column1: string;
  column2: number;
  column3: string;
}

const MyTable = new OlapTable<MyTableSchema>("my_table");

interface QueryParams {
  orderByColumn: keyof MyTableSchema; // Validates against the column names of "my_table"
}
```

### Setting Default Values

You can set default values for parameters by assigning them in the API route handler's function signature:

```ts filename="ExampleQueryParams.ts" copy {8-10}
interface QueryParams {
  filterField: string;
  maxResults: number;
  optionalParam?: string; // Not required for client to provide
}

const api = new Api("example_endpoint", async ({
  filterField = "example",
  maxResults = 10,
  optionalParam = "default"
}, { client, sql }) => {
  // Your logic here...
});
```

## Implementing Route Handlers

API route handlers are regular functions, so you can implement whatever logic you want inside them. Most of the time you will use APIs to expose your data to your front-end applications or other tools.

### Connecting to the Database

Moose provides a managed `MooseClient` in your function's execution context. This client provides access to the database and other Moose resources, and handles connection pooling and lifecycle management for you:

```ts filename="ExampleApi.ts" copy {1}
async function handler({ client, sql }: ApiUtil) {
  const query = sql`SELECT * FROM ${UserTable}`;
  const data = await client.query.execute(query);
}
```

Pass the type of the result to the `client.query.execute()` method to ensure type safety.
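For instance, a handler can pin the row type of the result set. A minimal sketch (`UserTable` here stands for the table defined in the next example):

```ts
interface UserRow {
  id: string;
  name: string;
  email: string;
}

async function handler({ client, sql }: ApiUtil) {
  const query = sql`SELECT id, name, email FROM ${UserTable}`;
  // Each row in the result set is now typed as UserRow
  const resultSet = await client.query.execute<UserRow>(query);
  const rows: UserRow[] = await resultSet.json();
  return rows;
}
```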
```ts filename="UserTable.ts" copy interface UserSchema { id: Key name: string email: string } const queryHandler = async ({ sortBy = "id", fields = "id,name" }: QueryParams, { client, sql }) => { // Split the comma-separated string into individual fields const fieldList = fields.split(',').map(f => f.trim()); // Build the query by selecting each column individually const query = sql` SELECT ${fieldList.map(field => sql`${CH.column(field)}`).join(', ')} FROM ${userTable} ORDER BY ${CH.column(sortBy)} `; // MooseClient converts fieldList to valid ClickHouse identifiers return client.query.execute(query); // EXECUTION: `SELECT id, name FROM users ORDER BY id` }; ``` ```ts filename="DynamicTables.ts" copy interface QueryParams { tableName: string; } const queryHandler = async ({ tableName = "users" }: QueryParams, { client, sql }) => { const query = sql` SELECT * FROM ${CH.table(tableName)} `; // MooseClient converts tableName to a valid ClickHouse identifier return client.query.execute(query); // EXECUTION: `SELECT * FROM users` }; ``` #### Conditional `WHERE` Clauses Build `WHERE` clauses based on provided parameters: ```ts filename="ConditionalColumns.ts" copy interface FilterParams { minAge?: number; status?: "active" | "inactive"; searchText?: string; } const buildQuery = ({ minAge, status, searchText }: FilterParams, { sql }) => { let conditions = []; if (minAge !== undefined) { conditions.push(sql`age >= ${minAge}`); } if (status) { conditions.push(sql`status = ${status}`); } if (searchText) { conditions.push(sql`(name ILIKE ${'%' + searchText + '%'} OR email ILIKE ${'%' + searchText + '%'})`); } // Build the full query with conditional WHERE clause let query = sql`SELECT * FROM ${userTable}`; if (conditions.length > 0) { // Join conditions with AND operator let whereClause = conditions.join(' AND '); query = sql`${query} WHERE ${whereClause}`; } query = sql`${query} ORDER BY created_at DESC`; return query; }; ``` ### Adding Authentication Moose supports authentication via JSON web tokens (JWTs). When your client makes a request to your Analytics API, Moose will automatically parse the JWT and pass the **authenticated** payload to your handler function as the `jwt` object: ```typescript async ( { orderBy = "totalRows", limit = 5 }, { client, sql, jwt } ) => { // Use jwt.userId to filter data for the current user const query = sql` SELECT * FROM userReports WHERE user_id = ${jwt.userId} LIMIT ${limit} `; return client.query.execute(query); } ``` Moose validates the JWT signature and ensures the JWT is properly formatted. If the JWT authentication fails, Moose will return a `401 Unauthorized error`. ## Understanding Response Codes Moose automatically provides standard HTTP responses: | Status Code | Meaning | Response Body | |-------------|-------------------------|---------------------------------| | 200 | Success | Your API's result data | | 400 | Validation error | `{ "error": "Detailed message"}`| | 401 | Unauthorized | `{ "error": "Unauthorized"}` | | 500 | Internal server error | `{ "error": "Internal server error"}` | ## Post-Processing Query Results After executing your database query, you can transform the data before returning it to the client. 
## Post-Processing Query Results

After executing your database query, you can transform the data before returning it to the client: renaming fields, deriving new values, or formatting dates:

```ts filename="PostProcessingExample.ts" copy
interface QueryParams {
  category: string;
  maxResults: number;
}

interface ResponseBody {
  itemId: number;
  displayName: string;
  formattedValue: string;
  isHighValue: boolean;
  date: string;
}

const processDataApi = new Api<QueryParams, ResponseBody[]>(
  "process_data_endpoint",
  async ({ category, maxResults = 10 }, { client, sql }) => {
    // 1. Fetch raw data
    const query = sql`
      SELECT id, name, value, timestamp
      FROM data_table
      WHERE category = ${category}
      LIMIT ${maxResults}
    `;
    const resultSet = await client.query.execute<{
      id: number;
      name: string;
      value: number;
      timestamp: string;
    }>(query);
    const rawResults = await resultSet.json();

    // 2. Post-process the results
    return rawResults.map(row => ({
      // Transform field names
      itemId: row.id,
      displayName: row.name.toUpperCase(),

      // Add derived fields
      formattedValue: `$${row.value.toFixed(2)}`,
      isHighValue: row.value > 1000,

      // Format dates
      date: new Date(row.timestamp).toISOString().split('T')[0]
    }));
  }
);
```

### Best Practices

While post-processing gives you flexibility, remember that database operations are typically more efficient for heavy data manipulation. Reserve post-processing for transformations that are difficult to express in SQL or that involve application-specific logic, as in the sketch below.
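For example, pushing an aggregation into ClickHouse keeps the result set that crosses the wire small. A minimal sketch (table and column names are illustrative):

```ts
interface TopCategoriesParams {
  limit?: number;
}

const topCategories = new Api<TopCategoriesParams, { category: string; totalValue: number }[]>(
  "top_categories",
  async ({ limit = 10 }, { client, sql }) => {
    // Aggregate in the database rather than in the handler:
    // only the grouped rows are returned, not the raw table
    const query = sql`
      SELECT category, sum(value) AS totalValue
      FROM data_table
      GROUP BY category
      ORDER BY totalValue DESC
      LIMIT ${limit}
    `;
    const resultSet = await client.query.execute<{ category: string; totalValue: number }>(query);
    return resultSet.json();
  }
);
```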
## Client Integration

By default, all API endpoints are automatically documented with an OpenAPI/Swagger specification. You can use the OpenAPI SDK generator of your choice to produce client libraries for your APIs. See the [OpenAPI](/moose/apis/open-api-sdk) page for details.

---

## API Authentication & Security

Source: moose/apis/auth.mdx
Secure your Moose API endpoints with JWT tokens or API keys

# API Authentication & Security

Moose supports two authentication mechanisms for securing your API endpoints:

- **[API Keys](#api-key-authentication)** - Simple, static authentication for internal applications and getting started
- **[JWT (JSON Web Tokens)](#jwt-authentication)** - Token-based authentication for integration with existing identity providers

Choose the method that fits your use case, or use both together with custom configuration.

## Do you want to use API Keys?

API keys are the simplest way to secure your Moose endpoints. They're ideal for:

- Internal applications and microservices
- Getting started quickly with authentication
- Scenarios where you control both client and server

### How API Keys Work

API keys use PBKDF2 HMAC SHA256 hashing for secure storage. You generate a token pair (plain-text and hashed) using the Moose CLI, store the hashed version in environment variables, and send the plain-text version in your request headers.

### Step 1: Generate API Keys

Generate tokens and hashed keys using the Moose CLI:

```bash
moose generate hash-token
```

**Output:**

- **ENV API Keys**: Hashed key for environment variables (use this in your server configuration)
- **Bearer Token**: Plain-text token for client applications (use this in `Authorization` headers)

Use the **hashed key** for environment variables and `moose.config.toml`. Use the **plain-text token** in your `Authorization: Bearer token` headers.

### Step 2: Configure API Keys with Environment Variables

Set environment variables with the **hashed** API keys you generated:

```bash
# For ingest endpoints
export MOOSE_INGEST_API_KEY='your_pbkdf2_hmac_sha256_hashed_key'

# For analytics endpoints
export MOOSE_CONSUMPTION_API_KEY='your_pbkdf2_hmac_sha256_hashed_key'

# For admin endpoints
export MOOSE_ADMIN_TOKEN='your_plain_text_token'
```

Or set the admin API key in `moose.config.toml`:

```toml filename="moose.config.toml"
[authentication]
admin_api_key = "your_pbkdf2_hmac_sha256_hashed_key"
```

Storing the `admin_api_key` (a PBKDF2 HMAC SHA256 hash) in your `moose.config.toml` file is acceptable even if the file is version-controlled, because the plain-text Bearer token (the actual secret) is never stored. The hash is computationally expensive to reverse, so the secret is not exposed in the codebase.

### Step 3: Make Authenticated Requests

All authenticated requests require the `Authorization` header with the **plain-text token**:

```bash
# Using curl
curl -H "Authorization: Bearer your_plain_text_token_here" \
  https://your-moose-instance.com/ingest/YourDataModel
```

```javascript
// Using JavaScript
fetch('https://your-moose-instance.com/api/endpoint', {
  headers: {
    'Authorization': 'Bearer your_plain_text_token_here'
  }
})
```

## Do you want to use JWTs?

JWT authentication integrates with existing identity providers and follows standard token-based authentication patterns. Use JWTs when:

- You have an existing identity provider (Auth0, Okta, etc.)
- You need user-specific authentication and authorization
- You want standard OAuth 2.0 / OpenID Connect flows

### How JWT Works

Moose validates JWT tokens using the RS256 algorithm and your identity provider's public key. You configure the expected issuer and audience, and Moose handles token verification automatically.

### Step 1: Configure JWT Settings

#### Option A: Configure in `moose.config.toml`

```toml filename="moose.config.toml"
[jwt]
# Your JWT public key (PEM-formatted RSA public key)
secret = """
-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAy...
-----END PUBLIC KEY-----
"""
# Expected JWT issuer
issuer = "https://my-auth-server.com/"
# Expected JWT audience
audience = "my-moose-app"
```

The `secret` field should contain the JWT **public key** used to verify signatures with the RS256 algorithm.

#### Option B: Configure with Environment Variables

You can also set these values as environment variables:

```bash filename=".env" copy
MOOSE_JWT_PUBLIC_KEY=your_jwt_public_key  # PEM-formatted RSA public key (overrides `secret` in `moose.config.toml`)
MOOSE_JWT_ISSUER=your_jwt_issuer          # Expected JWT issuer (overrides `issuer` in `moose.config.toml`)
MOOSE_JWT_AUDIENCE=your_jwt_audience      # Expected JWT audience (overrides `audience` in `moose.config.toml`)
```

### Step 2: Make Authenticated Requests

Send requests with the JWT token in the `Authorization` header:

```bash
# Using curl
curl -H "Authorization: Bearer your_jwt_token_here" \
  https://your-moose-instance.com/ingest/YourDataModel
```

```javascript
// Using JavaScript
fetch('https://your-moose-instance.com/api/endpoint', {
  headers: {
    'Authorization': 'Bearer your_jwt_token_here'
  }
})
```

## Want to use both? Here are the caveats

You can configure both JWT and API Key authentication simultaneously. When both are configured, Moose's authentication behavior depends on the `enforce_on_all_*` flags.
### Understanding Authentication Priority

#### Default Behavior (No Enforcement)

By default, when both JWT and API Keys are configured, Moose tries JWT validation first, then falls back to API Key validation:

```toml filename="moose.config.toml"
[jwt]
# JWT configuration
secret = "..."
issuer = "https://my-auth-server.com/"
audience = "my-moose-app"
# enforce flags default to false
```

```bash filename=".env"
# API Key configuration
MOOSE_INGEST_API_KEY='your_pbkdf2_hmac_sha256_hashed_key'
MOOSE_CONSUMPTION_API_KEY='your_pbkdf2_hmac_sha256_hashed_key'
```

**For Ingest Endpoints (`/ingest/*`)**:

- Attempts JWT validation first (RS256 signature check)
- Falls back to API Key validation (PBKDF2 HMAC SHA256) if JWT fails

**For Analytics Endpoints (`/api/*`)**:

- Same fallback behavior as ingest endpoints

This lets your clients use either authentication method.

#### Enforcing JWT Only

If you want to accept **only** JWT tokens (no API key fallback), set the enforcement flags:

```toml filename="moose.config.toml"
[jwt]
secret = "..."
issuer = "https://my-auth-server.com/"
audience = "my-moose-app"
# Only accept JWT, no API key fallback
enforce_on_all_ingest_apis = true
enforce_on_all_consumptions_apis = true
```

**Result**: When enforcement is enabled, API Key authentication is disabled even if the environment variables are set. Only valid JWT tokens are accepted.

### Common Use Cases

#### Use Case 1: Different Auth for Different Endpoints

Configure JWT for user-facing analytics endpoints and API keys for internal ingestion:

```toml filename="moose.config.toml"
[jwt]
secret = "..."
issuer = "https://my-auth-server.com/"
audience = "my-moose-app"
enforce_on_all_consumptions_apis = true  # JWT only for /api/*
enforce_on_all_ingest_apis = false       # Allow fallback for /ingest/*
```

```bash filename=".env"
MOOSE_INGEST_API_KEY='hashed_key_for_internal_services'
```

#### Use Case 2: Migration from API Keys to JWT

Start with both configured and no enforcement, then gradually migrate clients to JWT. Once migration is complete, enable enforcement:

```toml filename="moose.config.toml"
[jwt]
secret = "..."
issuer = "https://my-auth-server.com/"
audience = "my-moose-app"
# Start with both allowed during migration
enforce_on_all_ingest_apis = false
enforce_on_all_consumptions_apis = false

# Later, enable to complete migration
# enforce_on_all_ingest_apis = true
# enforce_on_all_consumptions_apis = true
```

### Admin Endpoints

Admin endpoints use API key authentication exclusively (configured separately from ingest/analytics endpoints).

**Configuration precedence** (highest to lowest):

1. `--token` CLI parameter (plain-text token)
2. `MOOSE_ADMIN_TOKEN` environment variable (plain-text token)
3. `admin_api_key` in `moose.config.toml` (hashed token)
**Example:**

```bash
# Option 1: CLI parameter
moose remote plan --token your_plain_text_token

# Option 2: Environment variable
export MOOSE_ADMIN_TOKEN='your_plain_text_token'
moose remote plan

# Option 3: Config file
# In moose.config.toml:
# [authentication]
# admin_api_key = "your_pbkdf2_hmac_sha256_hashed_key"
```

## Security Best Practices

- **Never commit plain-text tokens to version control** - Always use hashed keys in configuration files
- **Use environment variables for production** - Keep secrets out of your codebase
- **Generate unique tokens for different environments** - Separate development, staging, and production credentials
- **Rotate tokens regularly** - Especially for long-running production deployments
- **Choose the right method for your use case**:
  - Use **API Keys** for internal services and getting started
  - Use **JWT** when integrating with identity providers or when you need user-level auth
- **Store hashed keys safely** - The PBKDF2 HMAC SHA256 hash in `moose.config.toml` is safe to version control, but the plain-text token should only exist in secure environment variables or secret management systems

---

## Ingestion APIs

Source: moose/apis/ingest-api.mdx
Ingestion APIs for Moose

# Ingestion APIs

## Overview

Moose Ingestion APIs are the entry point for getting data into your Moose application. They provide a fast, reliable, and type-safe way to move data from your sources into streams and tables for analytics and processing.

## When to Use Ingestion APIs

Ingestion APIs are most useful when you want a push-based pattern for getting data from your sources into your streams and tables. Common use cases include:

- Instrumenting external client applications
- Receiving webhooks from third-party services
- Integrating with ETL or data pipeline tools that push data

## Why Use Moose's APIs Over Your Own?

Moose's ingestion APIs are purpose-built for high-throughput data pipelines, offering key advantages over more general-purpose frameworks:

- **Built-in schema validation:** Ensures only valid data enters your pipeline.
- **Direct connection to streams/tables:** Instantly link HTTP endpoints to Moose data infrastructure and route incoming data to your streams and tables without any glue code.
- **Dead Letter Queue (DLQ) support:** Invalid records are automatically captured for review and recovery.
- **OpenAPI auto-generation:** Instantly generate client SDKs and docs for all endpoints, including example data.
- **Rust-powered performance:** Far higher throughput and lower latency than typical Node.js or Python APIs.

## Validation

Moose validates all incoming data against your interface. If a record fails validation, Moose can automatically route it to a Dead Letter Queue (DLQ) for later inspection and recovery.

```typescript filename="ValidationExample.ts" copy
interface ExampleModel {
  id: string;
  userId: string;
  timestamp: Date;
  properties?: {
    device?: string;
    version?: number;
  }
}

const examplePipeline = new IngestPipeline<ExampleModel>("your-api-route", {
  ingestApi: true,
  stream: true,
  table: true,
  deadLetterQueue: true, // Route invalid records to a DLQ
});
```

If your IngestPipeline's schema marks a field as optional but annotates a ClickHouse default, Moose treats:

- API request and stream message: the field is optional (you may omit it)
- ClickHouse table storage: the field is required, with a `DEFAULT` clause

When the API or stream inserts into ClickHouse and the field is missing, ClickHouse sets it to the configured default value. This keeps request payloads simple while avoiding Nullable columns in storage. Example: `field?: number & ClickHouseDefault<"18">` or `WithDefault`.
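A minimal sketch of that pattern, assuming `ClickHouseDefault` and `Key` are imported from `@514labs/moose-lib` (the field names are illustrative):

```ts
import { IngestPipeline, Key, ClickHouseDefault } from "@514labs/moose-lib";

interface SignupEvent {
  id: Key<string>;
  // Optional in the ingest API and on the stream; stored in ClickHouse
  // as a non-Nullable column with DEFAULT 18
  age?: number & ClickHouseDefault<"18">;
}

const signups = new IngestPipeline<SignupEvent>("signups", {
  ingestApi: true,
  stream: true,
  table: true,
});
```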
Send a valid event, and it is routed to the destination stream:

```typescript filename="ValidEvent.ts" copy
fetch("http://localhost:4000/ingest/your-api-route", {
  method: "POST",
  body: JSON.stringify({
    id: "event1",
    userId: "user1",
    timestamp: "2023-05-10T15:30:00Z"
  })
})
// ✅ Accepted and routed to the destination stream
// API returns 200 and { success: true }
```

Send an invalid event (missing a required field), and it is routed to the DLQ:

```typescript filename="InvalidEventMissingField.ts" copy
fetch("http://localhost:4000/ingest/your-api-route", {
  method: "POST",
  body: JSON.stringify({
    id: "event1"
  })
})
// ❌ Routed to the DLQ because it's missing a required field
// API returns a 400 response
```

Send an invalid event (bad date format), and it is routed to the DLQ:

```typescript filename="InvalidEventBadDate.ts" copy
fetch("http://localhost:4000/ingest/your-api-route", {
  method: "POST",
  body: JSON.stringify({
    id: "event1",
    userId: "user1",
    timestamp: "not-a-date"
  })
})
// ❌ Routed to the DLQ because the timestamp is not a valid date
// API returns a 400 response
```

## Creating Ingestion APIs

You can create ingestion APIs in two ways:

- **High-level:** Using the `IngestPipeline` class (recommended for most use cases)
- **Low-level:** Manually configuring the `IngestApi` component for more granular control

### High-level: IngestPipeline (Recommended)

The `IngestPipeline` class provides a convenient way to set up ingestion endpoints, streams, and tables with a single declaration:

```typescript filename="AnalyticsPipeline.ts" copy
interface ExampleModel {
  id: string;
  name: string;
  value: number;
  timestamp: Date;
}

const examplePipeline = new IngestPipeline<ExampleModel>("example-name", {
  ingestApi: true, // Creates a REST API endpoint
  stream: true,    // Connects to a stream
  table: true      // Creates an OLAP table
});
```

### Low-level: Standalone IngestApi

For more granular control, you can manually configure the `IngestApi` component:

```typescript filename="AnalyticsPipelineManual.ts" copy
interface ExampleRecord {
  id: string;
  name: string;
  value: number;
  timestamp: Date;
}

// Create the ClickHouse table
const exampleTable = new OlapTable<ExampleRecord>("example-table-name");

// Create the stream with specific settings
const exampleStream = new Stream<ExampleRecord>("example-stream-name", {
  destination: exampleTable // Connect the stream to the table
});

// Create the ingestion API
const exampleApi = new IngestApi<ExampleRecord>("example-api-route", {
  destination: exampleStream // Connect the API to the stream
});
```

The types of the destination `Stream` and `Table` must match the type of the `IngestApi`.

## Configuration Reference

Configuration options for both high-level and low-level ingestion APIs are provided below; a usage sketch follows.

```typescript filename="IngestPipelineConfig.ts" copy
interface IngestPipelineConfig<T> {
  table?: boolean | OlapConfig<T>;
  stream?: boolean | Omit<StreamConfig<T>, "destination">;
  ingestApi?: boolean | Omit<IngestConfig<T>, "destination">;
  deadLetterQueue?: boolean | Omit<StreamConfig<T>, "destination">;
  version?: string;
  metadata?: {
    description?: string;
  };
  lifeCycle?: LifeCycle;
}
```

```typescript filename="IngestConfig.ts" copy
interface IngestConfig<T> {
  destination: Stream<T>;
  deadLetterQueue?: DeadLetterQueue<T>;
  version?: string;
  metadata?: {
    description?: string;
  };
}
```
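A sketch of an `IngestPipeline` using a few of these options together (the version string and description are illustrative):

```ts
const configuredPipeline = new IngestPipeline<ExampleModel>("example-name", {
  ingestApi: true,
  stream: true,
  table: true,
  deadLetterQueue: true, // Capture records that fail validation
  version: "1.0",
  metadata: {
    description: "Example events ingested from client applications"
  }
});
```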
---

## OpenAPI SDK Generation

Source: moose/apis/openapi-sdk.mdx
Generate type-safe client SDKs from your Moose APIs

# OpenAPI SDK Generation

Moose automatically generates OpenAPI specifications for all your APIs, enabling you to create type-safe client SDKs in any language. This lets you integrate your Moose APIs into any application with full type safety and IntelliSense support.

## Overview

While `moose dev` is running, Moose emits an OpenAPI spec at `.moose/openapi.yaml` covering:

- **Ingestion endpoints** with request/response schemas
- **Analytics APIs** with query parameters and response types

Every time you change your Moose APIs, the OpenAPI spec is updated automatically.

## Generating Typed SDKs from OpenAPI

You can use your preferred generator to create a client from that spec. Below are minimal, tool-agnostic examples you can copy into your project scripts.

### Setup

The following example uses Kubb to generate the SDK. Kubb can be installed into your project without any dependency on the Java runtime (unlike the OpenAPI Generator, which requires Java). Follow the setup instructions for Kubb [here](https://kubb.dev/docs/getting-started/installation).

Then, in your project's `package.json`, add the following script:

```json filename="package.json" copy
{
  "scripts": {
    "generate-sdk": "kubb generate"
  }
}
```

Finally, configure the `on_reload_complete_script` hook in your `moose.config.toml`:

```toml filename="moose.config.toml" copy
[http_server_config]
on_reload_complete_script = "npm run generate-sdk"
```

This triggers the generation command after each reload.

### Hooks for automatic SDK generation

The `on_reload_complete_script` hook is available in your `moose.config.toml` file. It runs after each dev server reload, once code/infra changes have been fully applied. This keeps your SDKs continuously up to date as you make changes to your Moose APIs.

Notes:

- The script runs in your project root using your `$SHELL` (falling back to `/bin/sh`).
- Paths like `.moose/openapi.yaml` and `./generated/...` are relative to the project root.
- You can combine multiple generators with `&&`, or split them into a shell script if preferred.

These hooks only affect local development (`moose dev`). The reload hook runs after Moose finishes applying your changes, ensuring `.moose/openapi.yaml` is fresh before regeneration.

## Integration

Import from the output path your generator writes to (see your chosen example repo). The Moose side is unchanged: the spec lives at `.moose/openapi.yaml` during `moose dev`.
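For example, if your generator writes a fetch-based client to `./generated`, usage might look like the sketch below. The module path and function name are hypothetical; they depend entirely on your generator and its configuration:

```ts
// Hypothetical import: the actual path and names depend on your generator
import { exampleEndpoint } from "./generated";

// Query parameters and the response type are both generated from the spec
const rows = await exampleEndpoint({ filterField: "books", maxResults: 5 });
```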
## Generators

Use any OpenAPI-compatible generator:

### TypeScript projects

- [OpenAPI Generator (typescript-fetch)](https://github.com/OpenAPITools/openapi-generator) — mature, with broad options; generates a Fetch-based client
- [Kubb](https://github.com/kubb-project/kubb) — generates types + a fetch client with simple config
- [Orval](https://orval.dev/) — flexible output (client + schemas), good DX
- [openapi-typescript](https://github.com/openapi-ts/openapi-typescript) — generates types only (pair with your own client)
- [swagger-typescript-api](https://github.com/acacode/swagger-typescript-api) — codegen for TS clients from OpenAPI
- [openapi-typescript-codegen](https://github.com/ferdikoomen/openapi-typescript-codegen) — TS client + models
- [oazapfts](https://github.com/oazapfts/oazapfts) — minimal TS client based on fetch
- [openapi-zod-client](https://github.com/astahmer/openapi-zod-client) — Zod schema-first client generation

### Python projects

- [openapi-python-client](https://pypi.org/project/openapi-python-client/) — modern typed client for OpenAPI 3.0/3.1
- [OpenAPI Generator (python)](https://github.com/OpenAPITools/openapi-generator) — multiple Python generators (python, python-nextgen)

---

## Trigger APIs

Source: moose/apis/trigger-api.mdx
Create APIs that trigger workflows and other processes

# Trigger APIs

## Overview

You can create APIs that initiate workflows, data processing jobs, or other automated processes.

## Basic Usage

```typescript filename="app/apis/trigger_workflow.ts" copy
interface WorkflowParams {
  inputValue: string;
  priority?: string;
}

interface WorkflowResponse {
  workflowId: string;
  status: string;
}

const triggerApi = new Api<WorkflowParams, WorkflowResponse>(
  "trigger-workflow",
  async ({ inputValue, priority = "normal" }: WorkflowParams, { client }) => {
    // Trigger the workflow with input parameters
    const workflowExecution = await client.workflow.execute("data-processing", {
      inputValue,
      priority,
      triggeredAt: new Date().toISOString()
    });

    return {
      workflowId: workflowExecution.id,
      status: "started"
    };
  }
);
```

## Using the Trigger API

Once deployed, you can trigger workflows via HTTP requests:

```bash filename="Terminal" copy
curl "http://localhost:4000/api/trigger-workflow?inputValue=process-user-data&priority=high"
```

Response:

```json
{
  "workflowId": "workflow-12345",
  "status": "started"
}
```
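The same call from TypeScript, typed against the `WorkflowResponse` interface defined above (a minimal sketch):

```ts
const res = await fetch(
  "http://localhost:4000/api/trigger-workflow?inputValue=process-user-data&priority=high"
);

// The response body matches the WorkflowResponse shape returned by the handler
const result: WorkflowResponse = await res.json();
console.log(result.workflowId, result.status); // "workflow-12345" "started"
```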