Consumption APIs

Overview

Consumption APIs make it easy to build type-safe HTTP GET endpoints for surfacing data from your OLAP database. These APIs can help power user-facing analytics, dashboards and other front-end components, or even enable AI tools to interact with your data.

Working with Consumption APIs

1. Model your API parameters: define the names and types of the parameters your API will accept.

2. Write a route handler function: write a simple, strongly-typed function that constructs a SQL query and returns the results.

3. Moose generates validation & docs: Moose automatically generates validation, type conversion, and OpenAPI documentation for your API request parameters and response body.

4. Integrate with your API client: use the generated API spec in your frontend applications, or use other tools to interact with your API.

Creating API Endpoints

ExampleApi.ts
import { ConsumptionApi } from "@514labs/moose-lib";
import { SourcePipeline } from "path/to/SourcePipeline"; 
 
// Define the query parameters
interface QueryParams {
  filterField: string;
  maxResults: number;
}
 
// Model the query result type
interface ResultItem {
  id: number;
  name: string;
  value: number;
}
 
 
const SourceTable = SourcePipeline.table!; // Use `!` to assert that the table is not null
const cols = SourceTable.columns;
 
// Define the result type as an array of the result item type
const exampleApi = new ConsumptionApi<QueryParams, ResultItem[]>("example_endpoint",  
    async ({ filterField, maxResults }: QueryParams, { client, sql }) => {
        const query = sql`
        SELECT 
          ${cols.id},
          ${cols.name},
          ${cols.value}
        FROM ${SourceTable}
        WHERE category = ${filterField}
        LIMIT ${maxResults}`;
        
        // Set the result type to the type of each row in the result set
        const resultSet = await client.query.execute<ResultItem>(query);
 
        // Return the result set as an array of the result item type
        return await resultSet.json();
    });

The ConsumptionApi class takes:

  • Route name: The URL path to access your API (e.g., "example_endpoint")
  • Handler function: Processes requests with typed parameters and returns the result

The generic type parameters specify:

  • QueryParams: The structure of accepted URL parameters
  • ResponseBody: The exact shape of your API’s response data

MooseTip:

You can name these types anything you want. The first type generates validation for query parameters, while the second defines the response structure for OpenAPI documentation.

Moose automatically handles:

  • URL parameter validation and type conversion
  • SQL query interpolation and execution
  • Response formatting
  • Automated OpenAPI documentation
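
Once the dev server is running, you can call the endpoint from any HTTP client. Below is a minimal sketch using fetch; the host, port, and /consumption/<route-name> prefix are assumptions based on a default local Moose dev server, so adjust them to match your setup.

CallExampleApi.ts
// Hypothetical client-side call to the "example_endpoint" API defined above.
// NOTE: the base URL and the /consumption prefix are assumptions; adjust as needed.
const params = new URLSearchParams({ filterField: "books", maxResults: "10" });
const response = await fetch(
  `http://localhost:4000/consumption/example_endpoint?${params}`
);
 
if (!response.ok) {
  // Validation and server errors come back as a JSON body (see Understanding Response Codes below)
  console.error(await response.json());
} else {
  // The body matches the ResultItem[] response type declared on the endpoint
  console.log(await response.json());
}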

Defining Query Parameters

Define your API’s parameters as a TypeScript interface:

ExampleQueryParams.ts
interface QueryParams {
  filterField: string;
  maxResults: number;
  optionalParam?: string; // Not required for client to provide
}

Moose automatically handles:

  • Runtime validation
  • Clear error messages for invalid parameters
  • OpenAPI documentation generation

Warning:

Complex nested objects and arrays are not supported. Consumption APIs are GET endpoints designed to be simple and lightweight.
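
If you need to accept multiple values for a parameter, a common workaround is to accept a single comma-separated string (for example, fields=id,name,email) and split it inside your handler, as shown in the Dynamic Column & Table Selection example below.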

Adding Advanced Type Validation

Moose uses Typia to extract type definitions and provide runtime validation. Use Typia’s tags for more complex validation:

ExampleQueryParams.ts
import { tags } from "typia";
 
interface QueryParams {
  filterField: string;
  // Ensure maxResults is a positive integer
  maxResults: number & tags.Type<"int64"> & tags.Minimum<1>;
}
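
With this definition, a request where maxResults is missing, non-numeric, or less than 1 is rejected before your handler runs, and the client receives a 400 validation error (see Understanding Response Codes below).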

Common Validation Options

ValidationExamples.ts
import { tags } from "typia";
 
interface QueryParams {
  // Numeric validations
  id: number & tags.Type<"uint32">;  // Positive integer (0 to 4,294,967,295)
  age: number & tags.Minimum<18> & tags.Maximum<120>;  // Range: 18 <= age <= 120
  price: number & tags.ExclusiveMinimum<0> & tags.ExclusiveMaximum<1000>;  // Range: 0 < price < 1000
  discount: number & tags.MultipleOf<0.5>;  // Must be multiple of 0.5
  
  // String validations
  username: string & tags.MinLength<3> & tags.MaxLength<20>;  // Length between 3-20 characters
  email: string & tags.Format<"email">;  // Valid email format
  zipCode: string & tags.Pattern<"^[0-9]{5}$">;  // 5 digits
  uuid: string & tags.Format<"uuid">;  // Valid UUID
  ipAddress: string & tags.Format<"ipv4">; // Valid IPv4 address
  
  // Date validations
  startDate: string & tags.Format<"date">;  // YYYY-MM-DD format
  
  // Enum validation
  status: "active" | "pending" | "inactive";  // Must be one of these literal values
  
  // Optional parameters
  limit?: number & tags.Type<"uint32"> & tags.Maximum<100>;  // Optional, if provided: positive integer <= 100
  
  // Combined validations
  searchTerm?: (string & tags.MinLength<3>) | null;  // Either null or string with ≥3 characters
}

Notice these are just regular TypeScript union and intersection types. For a full list of validation options, see the Typia documentation.

Setting Default Values

You can set default values for parameters by destructuring them with defaults in the API route handler function signature:

ExampleQueryParams.ts
import { ConsumptionApi } from "@514labs/moose-lib";
 
interface QueryParams {
  filterField: string;
  maxResults: number;
  optionalParam?: string; // Not required for client to provide
}
 
const api = new ConsumptionApi<QueryParams, ResponseBody>("example_endpoint",
  async ({ filterField = "example", maxResults = 10, optionalParam = "default" }, { client, sql }) => {
    // Your logic here...
  }
);

Implementing Route Handler

Consumption API route handlers are regular functions, so you can implement arbitrary logic inside them. The primary purpose of the handler is to construct a SQL query and return the results to the client:

/apis/getBar.ts
import { ConsumptionApi } from "@514labs/moose-lib";
import { BarAggregatedMV } from "../views/barAggregated";
import { tags } from "typia";
 
// This file is where you can define your APIs to consume your data
interface QueryParams {
  orderBy?: "totalRows" | "rowsWithText" | "maxTextLength" | "totalTextLength";
  limit?: number;
  startDay?: number & tags.Type<"int32">;
  endDay?: number & tags.Type<"int32">;
}
 
interface ResponseData {
  dayOfMonth: number;
  totalRows?: number;
  rowsWithText?: number;
  maxTextLength?: number;
  totalTextLength?: number;
}
 
// Utility to get the columns of the target table
const cols = BarAggregatedMV.targetTable.columns;
 
export const BarApi = new ConsumptionApi<QueryParams, ResponseData[]>(
  "bar",
  async (
    { orderBy = "totalRows", limit = 5, startDay = 1, endDay = 31 },
    { client, sql },
  ) => {
    const query = sql`
        SELECT 
          ${cols.dayOfMonth},
          ${cols[orderBy]}
        FROM ${BarAggregatedMV.targetTable}
        WHERE 
          dayOfMonth >= ${startDay} 
          AND dayOfMonth <= ${endDay}
        ORDER BY ${cols[orderBy]} DESC
        LIMIT ${limit}
      `;
 
    const data = await client.query.execute<ResponseData>(query);
    return await data.json();
  },
);
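
With the defaults above, calling the endpoint with no query parameters returns the top five days ranked by totalRows. Assuming the default /consumption route prefix on the local dev server, a request like GET /consumption/bar?orderBy=rowsWithText&limit=10&startDay=1&endDay=15 overrides those defaults.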
 

Connecting to the Database

Moose provides an instance of a MooseClient to your route handler function. This client provides access to the database and other Moose resources:

ExampleApi.ts
import { ConsumptionUtil } from "@514labs/moose-lib";
import { UserTable, UserSchema } from "./UserTable";
 
async function handler({ client, sql }: ConsumptionUtil) {
  const query = sql`SELECT * FROM ${UserTable}`;
  const data = await client.query.execute<UserSchema>(query);
}

Pass the type of the result to the client.query.execute<T>() method to ensure type safety.
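
The UserTable and UserSchema referenced above live in their own file. Here is a minimal, hypothetical sketch of what that file might contain, assuming you define the table with the OlapTable class and Key type exported by @514labs/moose-lib; your actual schema and table configuration will differ.

UserTable.ts
// Hypothetical table definition backing the handler example above.
import { OlapTable, Key } from "@514labs/moose-lib";
 
export interface UserSchema {
  id: Key<string>;   // primary key column
  name: string;
  email: string;
  age: number;
  role: string;
  isActive: boolean;
}
 
// Maps to a ClickHouse table named "users"
export const UserTable = new OlapTable<UserSchema>("users");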

Constructing Safe SQL Queries

The sql template literal in Moose provides type-safe query construction with protection against SQL injection. Below are some examples of common patterns for building safe queries:

Basic Query Parameter Interpolation

BasicInterpolation.ts
import { UserTable } from "./UserTable";
 
const minAge = 18;
const userRole = "admin";
 
const query = sql`
  SELECT * FROM ${UserTable}
  WHERE age > ${minAge} 
  AND role = ${userRole}
`;
 
// MooseClient handles type conversion and escaping
const data = await client.query.execute<UserSchema>(query);
 
// EXECUTION: SELECT * FROM users WHERE age > 18 AND role = 'admin'

Table and Column References

Reference tables and columns directly from your Moose objects as variables in your sql template literals:

ExampleTableColumnQuery.ts
import { UserTable } from "../tables/userTable";
 
const query = sql`
  SELECT 
    ${UserTable.columns.id},
    ${UserTable.columns.name},
    ${UserTable.columns.email}
  FROM ${UserTable}
  WHERE ${UserTable.columns.isActive} = true
`;
 
// EXECUTION: SELECT id, name, email FROM users WHERE is_active = true

Type Safety

Static type checking ensures you only reference columns that actually exist.
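
Because table and column references are typed from your schema, referencing a column that does not exist fails at compile time. A minimal sketch, assuming the hypothetical UserTable from earlier and the sql tag provided to your handler:

TypeSafetyExample.ts
// Compiles: `email` is a column defined on UserSchema
const ok = sql`SELECT ${UserTable.columns.email} FROM ${UserTable}`;
 
// Does not compile: `emial` is not a column on UserSchema
// @ts-expect-error - property 'emial' does not exist on the columns object
const bad = sql`SELECT ${UserTable.columns.emial} FROM ${UserTable}`;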

Advanced Query Patterns

Dynamic Column & Table Selection

Use ConsumptionHelpers to handle dynamic column and table references in your queries:

DynamicColumns.ts
import { ConsumptionHelpers as CH } from "@514labs/moose-lib";
import { userTable } from "../tables/userTable";
 
interface QueryParams {
  sortBy: string;  // Column to sort by
  fields: string;  // Comma-separated list of columns to select (e.g., "id,name,email")
}
 
const queryHandler = async ({ sortBy = "id", fields = "id,name" }: QueryParams, { client, sql }) => {
  // Split the comma-separated string into individual fields
  const fieldList = fields.split(',').map(f => f.trim());
  
  // Build the query by selecting each column individually
  const query = sql`
    SELECT 
      ${fieldList.map(field => sql`${CH.column(field)}`).join(', ')}
    FROM ${userTable}
    ORDER BY ${CH.column(sortBy)}
  `;
  
  // MooseClient converts fieldList to valid ClickHouse identifiers
  return client.query.execute(query);
 
  // EXECUTION: `SELECT id, name FROM users ORDER BY id`
};
DynamicTables.ts
import { ConsumptionHelpers as CH } from "@514labs/moose-lib";
 
interface QueryParams {
  tableName: string;
}
 
const queryHandler = async ({ tableName = "users" }: QueryParams, { client, sql }) => {
  const query = sql`
    SELECT * FROM ${CH.table(tableName)}
  `;
 
  // MooseClient converts tableName to a valid ClickHouse identifier
  return client.query.execute(query);
 
  // EXECUTION: `SELECT * FROM users`
};

Conditional WHERE Clauses

Build WHERE clauses based on provided parameters:

ConditionalColumns.ts
interface FilterParams {
  minAge?: number;
  status?: "active" | "inactive";
  searchText?: string;
}
 
const buildQuery = ({ minAge, status, searchText }: FilterParams, { sql }) => {
  let conditions = [];
  
  if (minAge !== undefined) {
    conditions.push(sql`age >= ${minAge}`);
  }
  
  if (status) {
    conditions.push(sql`status = ${status}`);
  }
  
  if (searchText) {
    conditions.push(sql`(name ILIKE ${'%' + searchText + '%'} OR email ILIKE ${'%' + searchText + '%'})`);
  }
  
  // Build the full query with conditional WHERE clause
  let query = sql`SELECT * FROM ${userTable}`;
  
  if (conditions.length > 0) {
    // Join conditions with AND operator
    let whereClause = conditions.join(' AND ');
    query = sql`${query} WHERE ${whereClause}`;
  }
  
  query = sql`${query} ORDER BY created_at DESC`;
  return query;
};
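
To expose this as an endpoint, pass the builder the same utilities your handler receives. Continuing in the same file as buildQuery, here is a minimal sketch (not part of the original example); the route name and response typing are illustrative assumptions:

ConditionalColumns.ts
import { ConsumptionApi } from "@514labs/moose-lib";
 
// Hypothetical endpoint that reuses buildQuery from above
export const filteredUsersApi = new ConsumptionApi<FilterParams, any[]>(
  "filtered_users", // illustrative route name
  async (params, utils) => {
    const query = buildQuery(params, utils); // utils provides { client, sql }
    const result = await utils.client.query.execute(query);
    return await result.json();
  }
);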

Adding Authentication

Moose supports authentication via JSON web tokens (JWTs). When your client makes a request to your Consumption API, Moose will automatically parse the JWT and pass the authenticated payload to your handler function as the jwt object:

async (
  { orderBy = "totalRows", limit = 5 },
  { client, sql, jwt }
) => {
  // Use jwt.userId to filter data for the current user
  const query = sql`
    SELECT * FROM userReports 
    WHERE user_id = ${jwt.userId}
    LIMIT ${limit}
  `;
  
  return client.query.execute(query);
}

JWT Error Handling

Moose validates the JWT signature and ensures the JWT is properly formatted. If the JWT authentication fails, Moose will return a 401 Unauthorized error.

Understanding Response Codes

Moose automatically provides standard HTTP responses:

Status Code | Meaning                | Response Body
200         | Success                | Your API’s result data
400         | Validation error       | { "error": "Detailed message" }
401         | Unauthorized           | { "error": "Unauthorized" }
500         | Internal server error  | { "error": "Internal server error" }

Post-Processing Query Results

After executing your database query, you can transform the data before returning it to the client. Common post-processing operations include:

  • Transform field names or data formats
  • Calculate derived values
  • Filter or sort results
  • Aggregate or group data
  • Apply business logic

PostProcessingExample.ts
import { ConsumptionApi } from "@514labs/moose-lib";
 
interface QueryParams {
  category: string;
  maxResults: number;
}
 
interface ResponseBody {
  itemId: number;
  displayName: string;
  formattedValue: string;
  isHighValue: boolean;
  date: string;
}
 
const processDataApi = new ConsumptionApi<QueryParams, ResponseBody[]>(
  "process_data_endpoint",
  async ({ category, maxResults = 10 }, { client, sql }) => {
    // 1. Fetch raw data
    const query = sql`
      SELECT id, name, value, timestamp
      FROM data_table
      WHERE category = ${category}
      LIMIT ${maxResults}
    `;
    
    const rawResults = await client.query.execute<{
      id: number;
      name: string;
      value: number;
      timestamp: string;
    }>(query);
    
    // 2. Post-process the results
    const rows = await rawResults.json();
    return rows.map(row => ({
      // Transform field names
      itemId: row.id,
      displayName: row.name.toUpperCase(),
      
      // Add derived fields
      formattedValue: `$${row.value.toFixed(2)}`,
      isHighValue: row.value > 1000,
      
      // Format dates
      date: new Date(row.timestamp).toISOString().split('T')[0]
    }));
  }
);

Best Practices

Post-Processing Best Practices

  • Prefer database processing for large datasets: when working with large amounts of data, perform as much filtering, grouping, and aggregation as possible in your SQL query.

  • Keep response size reasonable: post-process to reduce response size when needed, especially for user-facing APIs.

  • Format dates and numbers consistently: ensure consistent formatting for dates, currencies, and other values in your responses.

  • Handle sensitive data appropriately: use post-processing to remove or mask sensitive information before returning data to clients.

  • Add clear error handling: include appropriate error handling in your post-processing logic.

MooseTip:

While post-processing gives you flexibility, remember that database operations are typically more efficient for heavy data manipulation. Reserve post-processing for transformations that are difficult to express in SQL or that involve application-specific logic.

Next Steps: Frontend Integration Examples

Ready to integrate your Consumption APIs with frontend applications? Check out our Templates page for examples:

Explore Templates

  • Moose + Next.js: GitHub Analytics Dashboard. Trending repository analysis with Next.js charts and data tables.
  • Moose + Streamlit: Live Heart Rate Monitoring. Continuous data ingestion from IoT sensors with real-time charts.