A record of what we have shipped, changed, and improved. We update this page whenever we make a meaningful change to the Standard Compute platform.
We have updated our fair-use policy to provide clearer guidance on acceptable usage patterns. The policy now specifies per-minute rate limits by plan tier: 60 requests per minute on Starter, 200 on SMB, and 500 on Agency.
We also added rate limit response headers (X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset) to every API response so you can build smarter retry logic into your workflows. These headers are now live on all endpoints.
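These headers make client-side backoff straightforward. Here is a minimal sketch of a helper that decides how long to wait before retrying; the header names match those listed above, but the helper itself (and the assumption that X-RateLimit-Reset is a Unix timestamp) is our illustration rather than an official SDK:

```python
import time

def backoff_delay(headers: dict) -> float:
    """Return seconds to wait before the next request, based on the
    X-RateLimit-* response headers. Returns 0 when capacity remains."""
    remaining = int(headers.get("X-RateLimit-Remaining", 1))
    if remaining > 0:
        return 0.0
    # X-RateLimit-Reset is assumed here to be a Unix timestamp;
    # check the policy page for the exact format returned on your plan.
    reset = float(headers.get("X-RateLimit-Reset", 0))
    return max(0.0, reset - time.time())
```

In a workflow, you would call `backoff_delay(response.headers)` after each request and sleep for the returned duration before the next call.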
The full updated policy is available on the Terms and Conditions page. If you have questions about how the policy applies to your specific use case, reach out to contact@standardcompute.com.
We are excited to introduce two new plan tiers designed for growing teams and agencies. The SMB plan adds access to Mid tier models (GPT-4o-mini class) with higher rate limits, perfect for teams running dozens of automated workflows daily.
The Agency plan unlocks our full model catalog, including Top tier models (GPT-4o and Claude Sonnet class) with priority routing and the highest rate limits. It is designed for agencies managing AI-powered workflows on behalf of multiple clients.
Existing Starter plan subscribers can upgrade at any time from the Dashboard. Your API keys and workflows will continue to work without changes — you will simply gain access to additional model tiers.
Standard Compute now officially supports Make (formerly Integromat) and Zapier alongside our existing n8n integration. While the API has always been compatible with any HTTP client, we have added dedicated quick start guides, ready-to-use templates, and troubleshooting documentation for both platforms.
The Make guide walks you through setting up the HTTP module with the correct headers and JSON body. The Zapier guide covers the Webhooks by Zapier action with field mapping for dynamic prompts. Both guides are available in your Dashboard under Quick Start Guides.
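Both platforms ultimately send the same OpenAI-compatible JSON body. As a rough sketch of what the guides configure — the `chat_body` helper is illustrative, not part of any template, and the exact fields for your plan are in the quick start guides:

```python
import json

def chat_body(prompt: str, model: str = "standard") -> str:
    """Build the JSON body used by the Make HTTP module or the
    Webhooks by Zapier action, following the chat completions format."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
```

In Make, this JSON goes into the HTTP module's request content field; in Zapier, the same fields are mapped individually in the Webhooks action.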
We also published an Integrations page on the website with detailed step-by-step instructions for all three platforms.
We have rolled out intelligent model routing across all tiers. You can now use simple aliases — "fast", "standard", and "premium" — instead of specifying exact model names. The system will automatically route your request to the best available model within that tier based on current load and provider availability.
This also means automatic failover. If one upstream provider experiences an outage, your requests are seamlessly rerouted to an alternative model at the same quality level. The actual model used is always indicated in the model field of the API response.
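Because the alias you request is resolved server-side, the only reliable way to see which model actually served a request is the response's model field. A small sketch of reading it (the example model name in the usage note is hypothetical):

```python
import json

def served_model(response_body: str) -> str:
    """Extract the concrete model that served the request. You send an
    alias ("fast", "standard", "premium"); the response's model field
    reports what the router actually selected."""
    return json.loads(response_body)["model"]
```

For example, a request for the "premium" alias might come back with a model field such as "gpt-4o", depending on current load and provider availability.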
The Dashboard now includes a Quick Start section with interactive guides for getting your first API call working. Each guide includes copy-pasteable code snippets, a test button that sends a real request using your API key, and links to platform-specific templates.
We have also added a usage overview panel that shows your recent API activity, including request counts, average response times, and the model tiers used. This data is refreshed every few minutes and is intended to help you monitor your integration health at a glance.
Standard Compute is live. We are launching with an OpenAI-compatible API that provides unlimited AI completions for a flat monthly fee. The initial release supports the chat completions endpoint with Base tier models, targeting automation teams that use n8n and similar workflow tools.
At launch, the service includes API key management with encryption and rotation, the Starter plan with Base tier access, and a free trial period for new accounts. The Dashboard is available for key management, usage monitoring, and account settings.
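For a first call against the chat completions endpoint, a request looks like any OpenAI-compatible client call. A minimal sketch using only the standard library — note that the base URL below is a placeholder (use the endpoint shown in your Dashboard), and the model name is likewise illustrative:

```python
import json
import urllib.request

# Placeholder base URL; substitute the endpoint from your Dashboard.
BASE_URL = "https://api.example.com/v1/chat/completions"

def build_request(api_key: str, prompt: str) -> urllib.request.Request:
    """Assemble an OpenAI-compatible chat completions request.
    Sending it with urllib.request.urlopen requires a valid API key."""
    body = json.dumps({
        "model": "base",  # illustrative; exact model names are in the Dashboard
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        BASE_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Any HTTP client (n8n's HTTP Request node included) can send the same method, headers, and body.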
We built Standard Compute because we saw automation teams getting crushed by unpredictable per-token API bills. Our goal is simple: let you run as many AI-powered workflows as you need without worrying about cost. Welcome aboard.