What You Can Do With Lamina
Lamina lets you run packaged AI workflows, called apps, over a simple HTTP API. With the public Apps API you can:
- discover apps available to your workspace
- inspect each app’s input parameters
- execute apps asynchronously
- receive results via webhook or poll for outputs
- get images, videos, or text as output
These capabilities make Lamina a good fit for:
- backend automations
- internal tools
- agentic systems
- custom product integrations
How The API Works
Every integration follows the same basic lifecycle:

Authentication
All public endpoints use a workspace-scoped API key.
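The documentation above does not specify the header name or base URL, so the following helper is only a sketch: it assumes a bearer-token scheme and a hypothetical `https://api.example.com/v1` base URL. It builds an authenticated request object without sending it.

```python
import urllib.request
from typing import Optional

BASE_URL = "https://api.example.com/v1"  # hypothetical; substitute your workspace's base URL
API_KEY = "lmn_your_api_key"             # your workspace-scoped API key

def authed_request(path: str, method: str = "GET",
                   body: Optional[bytes] = None) -> urllib.request.Request:
    """Build a request that carries the workspace API key (bearer scheme assumed)."""
    return urllib.request.Request(
        f"{BASE_URL}{path}",
        data=body,
        method=method,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
```

Every call in the lifecycle, from listing apps to polling for results, would reuse this same header setup.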
Parameter Types
When providing inputs to an app, each parameter has a type:

| Type | What to send | Example |
|---|---|---|
| text | A string value | "A product on white background" |
| options | One of the listed options | "Bright" (from the options list) |
| url | A publicly accessible URL | "https://example.com/photo.jpg" |
For options parameters, send the label, not an internal ID or hidden value.
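Putting the three types together, a run payload might look like the following. The parameter names and the `inputs` wrapper are illustrative assumptions, not taken from the actual spec; check each app's parameter list for the real names.

```python
import json

# Hypothetical inputs for an app; parameter names and JSON shape are illustrative.
payload = {
    "inputs": {
        "prompt": "A product on white background",          # text: a plain string
        "lighting": "Bright",                               # options: send the label itself
        "reference_image": "https://example.com/photo.jpg", # url: must be publicly accessible
    }
}

body = json.dumps(payload).encode("utf-8")  # JSON body for the run request
```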
Asynchronous By Design
Apps are executed asynchronously. That means:
- the run endpoint returns quickly with an executionId
- the execution may take seconds or many minutes depending on the workflow
- your integration should poll for the final result
An execution moves through a fixed set of statuses: queued, running, then completed or failed.
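The polling side of this lifecycle can be sketched as a loop that waits for a terminal status. The status names come from the list above; the `fetch_status` callable is injected so the loop stays transport-agnostic, since the real status endpoint and response shape are assumptions here.

```python
import time
from typing import Callable

TERMINAL_STATUSES = {"completed", "failed"}

def wait_for_result(execution_id: str,
                    fetch_status: Callable[[str], dict],
                    interval: float = 2.0,
                    timeout: float = 600.0) -> dict:
    """Poll until the execution reaches a terminal status or the timeout expires.

    fetch_status(execution_id) is assumed to return a dict such as
    {"status": "running"} or {"status": "completed", "output": ...}.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = fetch_status(execution_id)
        if result.get("status") in TERMINAL_STATUSES:
            return result
        time.sleep(interval)  # executions may take seconds to many minutes
    raise TimeoutError(f"execution {execution_id} did not finish within {timeout}s")
```

If you receive results via webhook instead, you can skip this loop entirely and treat the webhook delivery as the terminal event.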
For AI Agents
If you’re building with Claude Code, Cursor, or other AI-powered tools, fetch the machine-readable API spec at /llms.txt. This gives your agent everything it needs to discover apps, run them, and handle results, in a format optimized for LLMs.