# Moose / Apis / Ingest Api Documentation – TypeScript

## Included Files

1. moose/apis/ingest-api/ingest-api.mdx

## Ingestion APIs

Source: moose/apis/ingest-api/ingest-api.mdx

Ingestion APIs for Moose

# Ingestion APIs

## Overview

Moose Ingestion APIs are the entry point for getting data into your Moose application. They provide a fast, reliable, and type-safe way to move data from your sources into streams and tables for analytics and processing.

## When to Use Ingestion APIs

Ingestion APIs are most useful when you want a push-based pattern for getting data from your data sources into your streams and tables. Common use cases include:

- Instrumenting external client applications
- Receiving webhooks from third-party services
- Integrating with ETL or data pipeline tools that push data

## Why Use Moose's APIs Over Your Own?

Moose's ingestion APIs are purpose-built for high-throughput data pipelines, offering key advantages over more general-purpose frameworks:

- **Built-in schema validation:** Ensures only valid data enters your pipeline.
- **Direct connection to streams/tables:** Link HTTP endpoints to Moose data infrastructure so incoming data is routed to your streams and tables without any glue code.
- **Dead Letter Queue (DLQ) support:** Invalid records are automatically captured for review and recovery.
- **OpenAPI auto-generation:** Instantly generate client SDKs and docs for all endpoints, including example data.
- **Rust-powered performance:** Far higher throughput and lower latency than typical Node.js or Python APIs.

## Validation

Moose validates all incoming data against your interface. If a record fails validation, Moose can automatically route it to a Dead Letter Queue (DLQ) for later inspection and recovery.

```typescript filename="ValidationExample.ts" copy
import { IngestPipeline } from "@514labs/moose-lib";

interface ExampleModel {
  id: string;
  userId: string;
  timestamp: Date;
  properties?: {
    device?: string;
    version?: number;
  }
}

// Records that fail validation against ExampleModel are captured in the DLQ
const examplePipeline = new IngestPipeline<ExampleModel>("example", {
  ingestApi: true,
  stream: true,
  table: true,
  deadLetterQueue: true,
});
```

If your IngestPipeline's schema marks a field as optional but annotates a ClickHouse default, Moose treats it as follows:

- API request and Stream message: the field is optional (you may omit it)
- ClickHouse table storage: the field is required, with a `DEFAULT` clause

Behavior: when the API or stream inserts into ClickHouse and the field is missing, ClickHouse sets it to the configured default value. This keeps request payloads simple while avoiding `Nullable` columns in storage.
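A minimal sketch of that pattern is shown below. It assumes the standard `@514labs/moose-lib` import path; the `UserEvent` schema, the `age` field, and the pipeline name are illustrative and not taken from the docs above.

```typescript filename="DefaultFieldSketch.ts" copy
import { IngestPipeline, ClickHouseDefault } from "@514labs/moose-lib";

// Illustrative schema (hypothetical names): `age` may be omitted from API
// requests and stream messages, but is stored as a non-Nullable column
// with DEFAULT 18 in ClickHouse.
interface UserEvent {
  id: string;
  userId: string;
  timestamp: Date;
  age?: number & ClickHouseDefault<"18">;
}

const userEvents = new IngestPipeline<UserEvent>("user-events", {
  ingestApi: true,
  stream: true,
  table: true,
});
```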
Written inline, the annotation is `field?: number & ClickHouseDefault<"18">`; the `WithDefault` helper is an equivalent way to declare it.

Send a valid event - routed to the destination stream:

```typescript filename="ValidEvent.ts" copy
fetch("http://localhost:4000/ingest/your-api-route", {
  method: "POST",
  body: JSON.stringify({
    id: "event1",
    userId: "user1",
    timestamp: "2023-05-10T15:30:00Z"
  })
})
// ✅ Accepted and routed to the destination stream
// API returns 200 and { success: true }
```

Send an invalid event (missing required field) - routed to the DLQ:

```typescript filename="InvalidEventMissingField.ts" copy
fetch("http://localhost:4000/ingest/your-api-route", {
  method: "POST",
  body: JSON.stringify({
    id: "event1"
  })
})
// ❌ Routed to DLQ, because it's missing a required field
// API returns a 400 response
```

Send an invalid event (bad date format) - routed to the DLQ:

```typescript filename="InvalidEventBadDate.ts" copy
fetch("http://localhost:4000/ingest/your-api-route", {
  method: "POST",
  body: JSON.stringify({
    id: "event1",
    userId: "user1",
    timestamp: "not-a-date"
  })
})
// ❌ Routed to DLQ, because the timestamp is not a valid date
// API returns a 400 response
```

## Creating Ingestion APIs

You can create ingestion APIs in two ways:

- **High-level:** Using the `IngestPipeline` class (recommended for most use cases)
- **Low-level:** Manually configuring the `IngestApi` component for more granular control

### High-level: IngestPipeline (Recommended)

The `IngestPipeline` class provides a convenient way to set up ingestion endpoints, streams, and tables with a single declaration:

```typescript filename="AnalyticsPipeline.ts" copy
import { IngestPipeline } from "@514labs/moose-lib";

interface ExampleModel {
  id: string;
  name: string;
  value: number;
  timestamp: Date;
}

const examplePipeline = new IngestPipeline<ExampleModel>("example-name", {
  ingestApi: true, // Creates a REST API endpoint
  stream: true,    // Connects to a stream
  table: true      // Creates a ClickHouse table
});
```

### Low-level: Standalone IngestApi

For more granular control, you can manually configure the `IngestApi` component:

```typescript filename="AnalyticsPipelineManual.ts" copy
import { OlapTable, Stream, IngestApi } from "@514labs/moose-lib";

interface ExampleRecord {
  id: string;
  name: string;
  value: number;
  timestamp: Date;
}

// Create the ClickHouse table
const exampleTable = new OlapTable<ExampleRecord>("example-table-name");

// Create the stream with specific settings
const exampleStream = new Stream<ExampleRecord>("example-stream-name", {
  destination: exampleTable // Connect stream to table
});

// Create the ingestion API
const exampleApi = new IngestApi<ExampleRecord>("example-api-route", {
  destination: exampleStream // Connect API to stream
});
```

The types of the destination `Stream` and `Table` must match the type of the `IngestApi`.

## Configuration Reference

Configuration options for both high-level and low-level ingestion APIs are provided below.

```typescript filename="IngestPipelineConfig.ts" copy
interface IngestPipelineConfig<T> {
  table?: boolean | OlapConfig<T>;
  stream?: boolean | Omit<StreamConfig<T>, "destination">;
  ingestApi?: boolean | Omit<IngestConfig<T>, "destination">;
  deadLetterQueue?: boolean | Omit<StreamConfig<T>, "destination">;
  version?: string;
  metadata?: {
    description?: string;
  };
  lifeCycle?: LifeCycle;
}
```

```typescript filename="IngestConfig.ts" copy
interface IngestConfig<T> {
  destination: Stream<T>;
  deadLetterQueue?: DeadLetterQueue<T>;
  version?: string;
  metadata?: {
    description?: string;
  };
}
```
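As a rough sketch of how these options come together (again assuming the `@514labs/moose-lib` import path; the `OrderEvent` schema and the pipeline name are illustrative, not part of the reference above), an `IngestPipeline` can combine the boolean shorthands with the optional configuration fields:

```typescript filename="ConfiguredPipelineSketch.ts" copy
import { IngestPipeline } from "@514labs/moose-lib";

// Illustrative schema; not taken from the configuration reference above.
interface OrderEvent {
  id: string;
  amount: number;
  timestamp: Date;
}

// Sketch: boolean shorthands plus the optional IngestPipelineConfig fields
// (deadLetterQueue, version, metadata) listed in the reference above.
const orderEvents = new IngestPipeline<OrderEvent>("order-events", {
  ingestApi: true,       // Creates a REST API endpoint
  stream: true,          // Connects to a stream
  table: true,           // Creates a ClickHouse table
  deadLetterQueue: true, // Captures records that fail validation
  version: "1.0",
  metadata: { description: "Order events pushed from checkout" },
});
```

The standalone `IngestApi` accepts the same `deadLetterQueue`, `version`, and `metadata` options through `IngestConfig`, with `destination` pointing at an existing stream.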