APIs are functions that run on your server and are automatically exposed as HTTP GET endpoints.
They are designed to read data from your OLAP database. Out of the box, these APIs provide:
Common use cases include:
Analytics APIs are enabled by default. To explicitly control this feature in your moose.config.toml:
```toml
[features]
apis = true
```

```typescript
import { Api } from "@514labs/moose-lib";
import { SourcePipeline } from "path/to/SourcePipeline";

// Define the query parameters
interface QueryParams {
  filterField: string;
  maxResults: number;
}

// Model the query result type
interface ResultItem {
  id: number;
  name: string;
  value: number;
}

const SourceTable = SourcePipeline.table!; // Use `!` to assert that the table is not null
const cols = SourceTable.columns;

// Define the result type as an array of the result item type
export const exampleApi = new Api<QueryParams, ResultItem[]>(
  "example_endpoint",
  async ({ filterField, maxResults }: QueryParams, { client, sql }) => {
    const query = sql`
      SELECT ${cols.id}, ${cols.name}, ${cols.value}
      FROM ${SourceTable}
      WHERE category = ${filterField}
      LIMIT ${maxResults}`;

    // Set the result type to the type of each row in the result set
    const resultSet = await client.query.execute<ResultItem>(query);

    // Return the result set as an array of the result item type
    return await resultSet.json();
  }
);
```

The Api class takes:
- A route name (e.g. `"example_endpoint"`)
- An async handler function that receives the validated query parameters and a context containing `client` and `sql`

The generic type parameters specify:
- `QueryParams`: The structure of accepted URL parameters
- `ResponseBody`: The exact shape of your API's response data

You can name these types anything you want. The first type generates validation for query parameters, while the second defines the response structure for OpenAPI documentation.
You can also model the query parameters and response body as interfaces (TypeScript) or Pydantic models (Python). Moose uses these definitions to automatically validate and convert the query parameters sent in the URL, and to type the response body.
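As a minimal sketch of the conversion described above (plain Pydantic, outside Moose): query-string values always arrive as strings, and the model coerces them into the declared types or rejects the request.

```python
from pydantic import BaseModel, ValidationError

class QueryParams(BaseModel):
    filterField: str
    maxResults: int

# Query-string values arrive as strings; Pydantic coerces "10" to the int 10
params = QueryParams(filterField="name", maxResults="10")

# A value that cannot be converted fails validation (surfaced as a 400 error)
try:
    QueryParams(filterField="name", maxResults="ten")
except ValidationError:
    pass  # rejected before your handler runs
```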
Define your API's parameters as a Pydantic model:
```python
from pydantic import BaseModel, Field
from typing import Optional

class QueryParams(BaseModel):
    filterField: str = Field(..., description="The field to filter by")
    maxResults: int = Field(..., description="The maximum number of results to return")
    optionalParam: Optional[str] = Field(None, description="An optional parameter")
```

Moose automatically handles:
Complex nested objects and arrays are not supported. Analytics APIs are GET endpoints designed to be simple and lightweight.
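Since nested objects are not supported, one common workaround (a hypothetical illustration, not a Moose convention) is to flatten each nested field into its own scalar query parameter:

```python
# Not supported: GET /api/bar?filter={"field":"name","op":"eq","value":"Jo"}
# Supported:     GET /api/bar?filterField=name&filterOp=eq&filterValue=Jo
from urllib.parse import urlencode

flat_params = {"filterField": "name", "filterOp": "eq", "filterValue": "Jo"}
query_string = urlencode(flat_params)  # each field becomes a flat scalar parameter
```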
Moose uses Pydantic for runtime validation. Use Pydantic's Field class for more complex validation:
```python
from pydantic import BaseModel, Field

class QueryParams(BaseModel):
    filterField: str = Field(pattern=r"^(id|name|email)$", description="The field to filter by")  # Only allow valid column names from the UserTable
    maxResults: int = Field(gt=0, description="The maximum number of results to return")  # Positive integer
```

```python
from pydantic import BaseModel, Field

class QueryParams(BaseModel):
    # Numeric validations
    id: int = Field(..., gt=0)
    age: int = Field(..., gt=0, lt=120)
    price: float = Field(..., gt=0, lt=1000)
    discount: float = Field(..., gt=0, multiple_of=0.5)

    # String validations
    username: str = Field(..., min_length=3, max_length=20)
    email: str = Field(..., format="email")
    zipCode: str = Field(..., pattern=r"^[0-9]{5}$")
    uuid: str = Field(..., format="uuid")
    ipAddress: str = Field(..., format="ipv4")

    # Date validations
    startDate: str = Field(..., format="date")

    # Enum validation
    status: str = Field(..., enum=["active", "pending", "inactive"])

    # Optional parameters
    limit: int = Field(None, gt=0, lt=100)
```

For a full list of validation options, see the Pydantic documentation.
You can set default values for parameters by setting values for each parameter in your Pydantic model:
```python
from pydantic import BaseModel

class QueryParams(BaseModel):
    filterField: str = "example"
    maxResults: int = 10
    optionalParam: str | None = "default"
```

API route handlers are regular functions, so you can implement whatever arbitrary logic you want inside them. Most of the time you will use APIs to expose your data to your front-end applications or other tools:
Moose provides a managed MooseClient to your function execution context. This client provides access to the database and other Moose resources, and handles connection pooling/lifecycle management for you:
```python
from moose_lib import Api, MooseClient
from pydantic import BaseModel, Field, constr
from typing import Literal, Optional
from enum import Enum
from app.UserTable import UserTable

class QueryParams(BaseModel):
    # When using f-strings, we need extremely strict validation
    column: str = Field(pattern=r"^(id|name|email)$", description="Uses a regex pattern to only allow valid column names")
    search_term: str = Field(
        pattern=r'^[\w\s\'-]{1,50}$',  # Allows letters, numbers, spaces, hyphens, apostrophes; does not allow special characters that could be used in SQL injection
        strip_whitespace=True,
        min_length=1,
        max_length=50
    )
    limit: int = Field(
        default=10,
        ge=1,
        le=100,
        description="Number of results to return"
    )

def run(client: MooseClient, params: QueryParams):
    query = """
    SELECT {column}
    FROM {table}
    WHERE name ILIKE '%{search_term}%'
    LIMIT {limit}
    """
    return client.query.execute(query, {
        "column": UserTable.cols[params.column],
        "table": UserTable,
        "search_term": params.search_term,
        "limit": params.limit,
    })
```

```python
from moose_lib import Api, MooseClient
from pydantic import BaseModel, Field
from typing import Optional
from app.UserTable import UserTable

class QueryParams(BaseModel):
    colName: str = Field(pattern=r"^(id|name|email)$", description="Uses a regex pattern to only allow valid column names from the UserTable")

class QueryResult(BaseModel):
    id: Optional[int]
    name: Optional[str]
    email: Optional[str]

def run(client: MooseClient, params: QueryParams):
    # Put column and table in the dict for variables
    query = "SELECT {column} FROM {table}"
    return client.query.execute(query, {"column": UserTable.cols[params.colName], "table": UserTable})

# Create the API
bar = Api[QueryParams, QueryResult](name="bar", query_function=run)

# Call the API
# HTTP Request: GET http://localhost:4000/api/bar?colName=id
# EXECUTED QUERY: SELECT id FROM users
```

Build WHERE clauses based on provided parameters:
```python
from moose_lib import Api, MooseClient
from pydantic import BaseModel, Field
from typing import Optional

class FilterParams(BaseModel):
    min_age: Optional[int]
    status: Optional[str] = Field(pattern=r"^(active|inactive)$")
    search_text: Optional[str] = Field(pattern=r"^[a-zA-Z0-9\s]+$", description="Alphanumeric search text without special characters to prevent SQL injection")

class QueryResult(BaseModel):
    id: int
    name: str
    email: str

def build_query(client: MooseClient, params: FilterParams) -> QueryResult:
    # Build conditions from the validated parameters
    conditions = []
    parameters = {}
    if params.min_age:
        conditions.append("age >= {min_age}")
        parameters["min_age"] = params.min_age
    if params.status:
        conditions.append("status = {status}")
        parameters["status"] = params.status
    if params.search_text:
        conditions.append("(name ILIKE {search_text} OR email ILIKE {search_text})")
        parameters["search_text"] = params.search_text

    where_clause = f" WHERE {' AND '.join(conditions)}" if conditions else ""
    query = f"""SELECT * FROM users{where_clause} ORDER BY created_at DESC"""
    return client.query.execute(query, parameters)

# Create the API
bar = Api[FilterParams, QueryResult](name="bar", query_function=build_query)

# Call the API
# HTTP Request: GET http://localhost:4000/api/bar?min_age=20&status=active&search_text=John
# EXECUTED QUERY: SELECT * FROM users WHERE age >= 20 AND status = 'active' AND (name ILIKE '%John%' OR email ILIKE '%John%') ORDER BY created_at DESC
```

Moose supports authentication via JSON Web Tokens (JWTs). When your client makes a request to your Analytics API, Moose automatically parses the JWT and passes the authenticated payload to your handler function as the `jwt` object:
```python
def run(client: MooseClient, params: QueryParams, jwt: dict):
    # Use parameter binding with JWT data
    query = """SELECT * FROM userReports WHERE user_id = {user_id} LIMIT 5"""
    return client.query.execute(query, {"user_id": jwt["userId"]})
```

Moose validates the JWT signature and ensures the JWT is properly formatted. If JWT authentication fails, Moose returns a 401 Unauthorized error.
Moose automatically provides standard HTTP responses:
| Status Code | Meaning | Response Body |
|---|---|---|
| 200 | Success | Your API's result data |
| 400 | Validation error | { "error": "Detailed message"} |
| 401 | Unauthorized | { "error": "Unauthorized"} |
| 500 | Internal server error | { "error": "Internal server error"} |
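A minimal client-side sketch of consuming these standard responses (the shapes are taken from the table above; the helper name is hypothetical):

```python
import json

def handle_response(status: int, body: str):
    # 200 carries your API's result data; 400/401/500 carry {"error": "..."}
    payload = json.loads(body)
    if status == 200:
        return payload
    raise RuntimeError(f"API error {status}: {payload.get('error', 'unknown error')}")

rows = handle_response(200, '[{"id": 1, "name": "Alice"}]')
```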
After executing your database query, you can transform the data before returning it to the client. This allows you to:
```python
from datetime import datetime
from moose_lib import Api, MooseClient
from pydantic import BaseModel

class QueryParams(BaseModel):
    category: str
    max_results: int = 10

class ResponseItem(BaseModel):
    itemId: int
    displayName: str
    formattedValue: str
    isHighValue: bool
    date: str

def run(client: MooseClient, params: QueryParams):
    # 1. Fetch raw data using parameter binding
    query = """
    SELECT id, name, value, timestamp
    FROM data_table
    WHERE category = {category}
    LIMIT {limit}
    """
    raw_results = client.query.execute(query, {"category": params.category, "limit": params.max_results})

    # 2. Post-process the results
    processed_results = []
    for row in raw_results:
        processed_results.append(ResponseItem(
            # Transform field names
            itemId=row['id'],
            displayName=row['name'].upper(),
            # Add derived fields
            formattedValue=f"${row['value']:.2f}",
            isHighValue=row['value'] > 1000,
            # Format dates
            date=datetime.fromisoformat(row['timestamp']).date().isoformat()
        ))
    return processed_results

# Create the API
process_data_api = Api[QueryParams, ResponseItem](name="process_data_endpoint", query_function=run)
```

- When working with large amounts of data, perform as much filtering, grouping, and aggregation as possible in your SQL query
- Post-process to reduce response size when needed, especially for user-facing APIs
- Ensure consistent formatting for dates, currencies, and other values in your responses
- Use post-processing to remove or mask sensitive information before returning data to clients
- Include appropriate error handling in your post-processing logic
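The masking tip above can be sketched with a small helper (a hypothetical example, not part of Moose) that hides most of an email's local part before rows are returned to the client:

```python
def mask_email(email: str) -> str:
    # Keep the first character of the local part, mask the rest
    local, _, domain = email.partition("@")
    return (local[0] if local else "") + "***@" + domain

masked = mask_email("alice@example.com")  # "a***@example.com"
```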
While post-processing gives you flexibility, remember that database operations are typically more efficient for heavy data manipulation. Reserve post-processing for transformations that are difficult to express in SQL or that involve application-specific logic.
By default, all API endpoints are automatically integrated with OpenAPI/Swagger documentation. You can integrate your OpenAPI SDK generator of choice to generate client libraries for your APIs.
Please refer to the OpenAPI page for more information on how to integrate your APIs with OpenAPI.