
Integration Guides

Connect Standard Compute to your automation stack in minutes. Our OpenAI-compatible API works out of the box with n8n, Make, Zapier, and any tool that supports the OpenAI format.

Why Standard Compute Works with Any Tool

Standard Compute exposes an OpenAI-compatible API, which means any platform that can talk to the OpenAI API can talk to Standard Compute. You only need to change two things: the base URL and the API key. Everything else — request format, response structure, streaming — stays the same.

Below you will find step-by-step integration guides for the three most popular automation platforms. Each guide assumes you already have a Standard Compute account and an API key. If you do not, see the Getting Started page first.

n8n Integration

n8n has a built-in OpenAI node that makes integration straightforward. Open your n8n workflow and add (or edit) an OpenAI node. In the node's credential settings, find the Base URL field and replace the default OpenAI URL with https://api.stdcmpt.com/v1. Paste your Standard Compute API key into the API Key field.

That is the entire setup. Every existing workflow that uses the OpenAI node will now route through Standard Compute. You can use the same model aliases ("fast", "standard", "premium") in the Model field, or leave your existing model names — the API will map them to the appropriate tier automatically.

If you use n8n's AI Agent or LangChain sub-nodes, the same credential swap applies. Any node that references your OpenAI credentials will pick up the new base URL.

A ready-to-import n8n workflow template is available in your Dashboard under Quick Start Guides. It includes a sample chat completion node with the correct configuration already applied.

Make (Integromat) Integration

Make does not have a native Standard Compute module, but the HTTP module works perfectly. Create a new scenario (or edit an existing one) and add an HTTP "Make a request" module.

Set the URL to https://api.stdcmpt.com/v1/chat/completions. Set the Method to POST. Under Headers, add two entries: Authorization with the value Bearer sk-your-api-key, and Content-Type with the value application/json.

In the Request Body, switch to RAW and enter your JSON payload. A minimal example: {"model": "standard", "messages": [{"role": "user", "content": "Summarize this text: {{1.text}}"}]}. Replace {{1.text}} with a mapped variable from a previous module in your scenario.

The response will be parsed as JSON automatically. Map choices[0].message.content to downstream modules to use the model's output in the rest of your scenario. You can find a detailed walkthrough with screenshots in the Dashboard under Quick Start Guides.
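Outside of Make, the same request body and response mapping can be exercised in a few lines of Python, which is useful for checking your payload before wiring it into a scenario. The sample response below is illustrative of the OpenAI response format, not captured from the API:

```python
import json

# Body equivalent to the RAW payload entered in Make's HTTP module.
payload = {
    "model": "standard",
    "messages": [{"role": "user", "content": "Summarize this text: ..."}],
}
body = json.dumps(payload)

# Illustrative response in the OpenAI format; real responses carry more fields
# (id, usage, finish_reason, etc.).
sample_response = json.loads(
    '{"choices": [{"message": {"role": "assistant", "content": "A short summary."}}]}'
)

# The path you map to downstream Make modules: choices[0].message.content
output = sample_response["choices"][0]["message"]["content"]
print(output)
```

The `choices[0].message.content` path shown here is exactly the field Make exposes for mapping once the response is parsed.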

Zapier Integration

In Zapier, use the "Webhooks by Zapier" action (available on paid Zapier plans) to call the Standard Compute API. Select "Custom Request" as the action event.

Set the Method to POST and the URL to https://api.stdcmpt.com/v1/chat/completions. Under Headers, add Authorization: Bearer sk-your-api-key and Content-Type: application/json.

In the Data field, build your JSON body. You can reference data from trigger steps using Zapier's field mapping. For example: {"model": "standard", "messages": [{"role": "user", "content": "Draft a reply to this email: [Email Body from trigger]"}]}.
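One caveat when building the Data field by hand: if the mapped trigger data contains quotation marks or newlines (common in email bodies), the raw JSON becomes invalid. A serializer handles the escaping for you; the sketch below uses Python to illustrate, with a made-up email body standing in for the mapped trigger field:

```python
import json

# Illustrative trigger data; the embedded quotes and newline would break
# hand-assembled raw JSON.
email_body = 'Hi team,\nCan you send the "Q3 report" today?'

payload = json.dumps({
    "model": "standard",
    "messages": [
        {"role": "user", "content": f"Draft a reply to this email: {email_body}"}
    ],
})

# The serialized body is valid JSON even with the embedded quotes and newline.
print(payload)
```

Within Zapier itself, a Code step (or careful use of the Data field) can perform the same escaping before the Webhooks action sends the request.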

After the Zap runs, the response data is available as fields you can map into subsequent actions — send an email, update a CRM record, post to Slack, or anything else in your workflow. A step-by-step guide with screenshots is available in the Dashboard under Quick Start Guides.

Other HTTP Clients and SDKs

Any HTTP client or SDK that supports the OpenAI API format will work with Standard Compute. For the official OpenAI Python SDK, initialize with: client = OpenAI(base_url="https://api.stdcmpt.com/v1", api_key="sk-your-api-key"). For the Node SDK, pass baseURL: "https://api.stdcmpt.com/v1" to the constructor.

For LangChain, set the openai_api_base parameter (Python) or the OPENAI_API_BASE environment variable to https://api.stdcmpt.com/v1 and provide your Standard Compute key. LangChain's ChatOpenAI class will work without any other changes.

For direct HTTP calls (curl, fetch, Axios, or any other client), send a POST request to https://api.stdcmpt.com/v1/chat/completions with the Authorization and Content-Type headers, and a JSON body containing model and messages.

Troubleshooting

If you receive a 401 Unauthorized error, double-check that your API key is correct and that the Authorization header is formatted as Bearer sk-your-api-key (with a space between Bearer and the key).

If you receive a 403 Forbidden error, the model tier you requested may not be included in your plan. Try using the "fast" alias or upgrade your plan from the Dashboard.

If you experience timeouts, make sure your HTTP client or automation platform allows enough time for the response. Top-tier models can take up to 5 seconds for longer completions. Set your timeout to at least 30 seconds to be safe.

For any other issues, contact us at contact@standardcompute.com with your request ID (returned in the x-request-id response header) and we will investigate.