If you're running AI-powered workflows in n8n, you're probably paying per token directly to OpenAI or Anthropic. Switching to Standard Compute takes about 90 seconds and requires zero code changes.
Here's the process: open your n8n workflow, find any OpenAI or HTTP Request node that calls an LLM, and update two fields: set the base URL to https://api.stdcmpt.com/v1 and the API key to your Standard Compute key.
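In HTTP Request node terms, that change amounts to swapping the URL and the Authorization header while the body stays exactly as it was. A minimal sketch of the resulting request (the API key value is a placeholder, and the model shown is just an example of an existing setting you would leave untouched):

```python
import json

API_KEY = "sc-your-key-here"  # placeholder: substitute your Standard Compute key
BASE_URL = "https://api.stdcmpt.com/v1"  # the only URL change needed

# Same headers and body an n8n HTTP Request node would send to OpenAI;
# only the base URL and the Authorization header value differ.
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
body = json.dumps({
    "model": "gpt-4o",  # your existing model setting carries over unchanged
    "messages": [{"role": "user", "content": "Summarize this ticket."}],
})

endpoint = BASE_URL + "/chat/completions"
print(endpoint)
```

Everything below the URL and key, including the prompt and any sampling parameters, is passed through untouched.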
That's it. Your existing prompts, parameters, and output parsing all work identically because our API is fully OpenAI-compatible. The /v1/chat/completions endpoint accepts the same request format and returns the same response structure.
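Because the response shape matches OpenAI's, any downstream parsing in your workflow keeps working. A sketch, using a hand-written example of the standard chat-completions response structure (not a captured API reply):

```python
# Illustrative response in the standard OpenAI chat-completions shape.
response = {
    "id": "chatcmpl-abc123",
    "object": "chat.completion",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Done."},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 12, "completion_tokens": 2, "total_tokens": 14},
}

# The same extraction an n8n expression or Code node already performs.
text = response["choices"][0]["message"]["content"]
print(text)
```

If your workflow reads `choices[0].message.content` today, it reads the same path tomorrow.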
Set the model name to "standardcompute" and our intelligent routing handles model selection automatically; if you prefer a specific model, you can still specify it by name.
Once connected, every AI call in your workflow is covered by your flat monthly plan. No more token counters, no more budget alerts, no more choosing between quality and cost.
