Changelog

A record of what we have shipped, changed, and improved. We update this page whenever we make a meaningful change to the Standard Compute platform.

March 8, 2026 — OpenClaw Integration

Standard Compute now supports OpenClaw, the open-source AI agent framework. A dedicated setup guide is available in the Dashboard with one-command installation for macOS, Linux, and Windows.

Set your model to "standardcompute" and our routing algorithm handles the rest — your agents get access to top-tier models from OpenAI, Anthropic, and xAI automatically.
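For example, here is a minimal sketch using the official OpenAI Python SDK pointed at Standard Compute. The base URL is illustrative; the Dashboard setup guide has the exact value for your account.

    # Minimal sketch using the official OpenAI Python SDK.
    # The base URL is illustrative; copy the exact one from the Dashboard.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.standardcompute.com/v1",  # assumed for illustration
        api_key="YOUR_STANDARD_COMPUTE_KEY",
    )

    # "standardcompute" tells the router to pick the best model per request.
    response = client.responses.create(
        model="standardcompute",
        input="Summarize the open issues in this repository.",
    )
    print(response.output_text)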

February 20, 2026 — New Plan Structure

Four new tiers: Starter ($9/mo), Standard ($39/mo), Fast ($99/mo), and Turbo ($399/mo). Every plan includes unlimited LLM compute. Higher tiers get faster execution and priority scheduling.

All plans include a 3-day free trial. Existing subscribers have been migrated automatically — no changes needed.

February 14, 2026 — Grok 4 Now Available

xAI's Grok 4 has been added to the routing pool alongside GPT-5.1 Codex and Claude Opus 4. Strong at tasks that benefit from real-time data and multi-agent reasoning. Selected automatically when it's the best fit.

February 3, 2026 — Fair Use Policy Published

We have published a fair use policy explaining how we keep unlimited compute sustainable — intelligent batching, LLM routing, prompt compaction, and adaptive throttling. Full details on the Fair Use page.

January 20, 2026 — Claude Opus 4 Now Available

Anthropic's Claude Opus 4 is now in the routing pool. Excels at deep reasoning, multi-step planning, and long-running agentic tasks. Automatically selected for requests that benefit from extended context.

January 13, 2026 — Faster Response Times on Higher Tiers

Improved priority scheduling for Fast and Turbo plans. Adaptive throttling is now more granular — graduated slowdowns instead of hard cutoffs, resulting in smoother performance during peak demand.

January 6, 2026 — Make and Zapier Support

Standard Compute now officially supports Make and Zapier alongside n8n. All platforms work through the same OpenAI-compatible endpoints. Integration guides available on the Integrations page.

December 16, 2025 — Routing Algorithm Upgrade

Major upgrade to our LLM routing. The system now considers request complexity, token budget, and current provider load to pick the optimal model. Cross-provider failover is automatic — if one provider has issues, your requests are seamlessly rerouted.
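For intuition, the selection step can be pictured as a scoring pass over candidate models. This Python sketch is illustrative only, not our production code; the candidate names, weights, and load figures are invented:

    # Illustrative sketch of load-aware routing, not production code.
    from dataclasses import dataclass

    @dataclass
    class Candidate:
        name: str
        quality: float   # fitness for this request's complexity, 0 to 1
        max_tokens: int  # provider's context budget
        load: float      # current provider load, 0 (idle) to 1 (saturated)

    def pick_model(candidates: list[Candidate], needed_tokens: int) -> Candidate:
        # Drop models whose token budget can't fit the request, then
        # prefer quality while penalizing heavily loaded providers.
        viable = [c for c in candidates if c.max_tokens >= needed_tokens]
        return max(viable, key=lambda c: c.quality * (1.0 - 0.5 * c.load))

    choice = pick_model(
        [
            Candidate("provider-a/large", quality=0.95, max_tokens=200_000, load=0.9),
            Candidate("provider-b/large", quality=0.90, max_tokens=128_000, load=0.2),
        ],
        needed_tokens=8_000,
    )
    print(choice.name)  # provider-b/large: nearly as strong, far less loaded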

December 9, 2025 — Smarter Infrastructure

Deployed intelligent batching and smart prompt compaction across the platform. Batching groups requests to improve GPU utilization. Compaction strips redundant tokens before execution — 12–18% savings on average, often over 20% for templated automation workflows.
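As a toy illustration of the compaction idea (the production system is more sophisticated than this), redundant tokens can be stripped by collapsing whitespace and dropping exact-duplicate boilerplate lines before a prompt is sent:

    # Toy illustration of prompt compaction; conveys the flavor only.
    import re

    def compact(prompt: str) -> str:
        seen: set[str] = set()
        out: list[str] = []
        for line in prompt.splitlines():
            line = re.sub(r"\s+", " ", line).strip()  # collapse runs of whitespace
            if not line or line in seen:              # skip blanks and exact repeats
                continue
            seen.add(line)
            out.append(line)
        return "\n".join(out)

    raw = "Respond in JSON.\n\nRespond   in JSON.\nSummarize: 42 open tickets."
    print(compact(raw))  # the duplicate instruction and blank line are gone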

These systems, together with LLM routing, are the core of how we deliver unlimited compute at a flat price.

November 18, 2025 — Service Launch

Standard Compute is live. Unlimited AI compute for a flat monthly price. OpenAI-compatible API with /v1/completions and /v1/responses endpoints.
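A raw call to the completions endpoint looks like this in Python; the host is illustrative, so use the base URL from your Dashboard:

    # Minimal sketch of a raw /v1/completions call. The host is assumed
    # for illustration; your Dashboard shows the exact base URL.
    import requests

    resp = requests.post(
        "https://api.standardcompute.com/v1/completions",
        headers={"Authorization": "Bearer YOUR_API_KEY"},
        json={
            "model": "standardcompute",
            "prompt": "Write a one-line status update.",
            "max_tokens": 64,
        },
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["text"])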

We built this because automation teams were getting crushed by unpredictable per-token bills. Run as many AI-powered workflows as you need without worrying about cost. Welcome aboard.

November 4, 2025 — Open Beta Ends

The open beta period has concluded. All beta accounts have been migrated to the general availability platform with no downtime. API keys issued during beta remain valid — no action required.

Thank you to the 2,400+ teams who participated. Your feedback directly shaped the routing algorithm, throttling behavior, and pricing structure we shipped at launch.

October 21, 2025 — GPT-5 Added to Routing Pool

OpenAI's GPT-5 is now available through Standard Compute. The routing algorithm selects it automatically for tasks where it outperforms alternatives — particularly multi-modal reasoning and structured data extraction.

October 8, 2025 — Dashboard Preview

Early preview of the Standard Compute dashboard. Manage API keys, view request logs, and monitor usage — all from a single interface. The dashboard is available to all beta users at standardcompute.com/dashboard.

This is a first version. Billing management, team controls, and usage analytics are coming in later releases.

September 22, 2025 — Rate Limit Overhaul

Replaced the fixed rate-limit system with adaptive throttling. Instead of hard request caps, the system now gradually adjusts throughput based on real-time cluster load. During normal conditions, there are effectively no limits. During peak demand, lower-tier plans experience gentle slowdowns rather than hard rejections.

This eliminates the most common complaint from beta users — unexpected 429 errors during traffic spikes.
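To make the graduated-slowdown idea concrete, here is a toy Python sketch. The threshold and curve are invented for illustration; the real scheduler is tier-aware and far more granular:

    # Toy sketch of graduated throttling: above a load threshold, add a
    # delay that grows smoothly instead of rejecting requests with a 429.
    # The threshold and curve are invented for illustration.
    def throttle_delay(load: float) -> float:
        """Seconds to wait before scheduling, given cluster load in [0, 1]."""
        if load < 0.7:
            return 0.0                   # normal conditions: effectively no limits
        return (load - 0.7) ** 2 * 20.0  # quadratic ramp: gentle, then firmer

    for load in (0.5, 0.8, 0.95):
        print(f"load={load:.2f} -> delay={throttle_delay(load):.2f}s")
    # load=0.50 -> delay=0.00s, load=0.80 -> delay=0.20s, load=0.95 -> delay=1.25s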

September 5, 2025 — n8n Integration Guide

Published a step-by-step guide for connecting Standard Compute to n8n workflows. Point any OpenAI-compatible node at our endpoint, paste your API key, and you're up and running. No custom modules or plugins needed.

August 19, 2025 — Cross-Provider Failover

Added automatic cross-provider failover. If a request to one LLM provider fails or times out, the system retries against a different provider transparently. No changes needed on your end — failed requests are rerouted in under 200ms.

This was the number one reliability request during beta. In internal testing it reduced user-visible error rates by over 60%.
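For a client-side picture of the same retry shape (the real failover happens server-side, so you never write this yourself; the provider list and the send() stub are illustrative):

    # Illustrative sketch of cross-provider failover. Standard Compute does
    # this server-side; the send() stub exists only to show the retry shape.
    def send(provider: str, request: dict) -> dict:
        raise TimeoutError(f"{provider} timed out")  # stub: every call fails here

    def with_failover(request: dict, providers: list[str]) -> dict:
        last_error: Exception | None = None
        for provider in providers:       # try each provider in turn
            try:
                return send(provider, request)
            except (TimeoutError, ConnectionError) as exc:
                last_error = exc         # remember the failure, move on
        raise RuntimeError("all providers failed") from last_error

    try:
        with_failover({"prompt": "hi"}, ["provider-a", "provider-b", "provider-c"])
    except RuntimeError as err:
        print(err)  # all providers failed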

August 4, 2025 — Claude 3.5 Sonnet Support

Anthropic's Claude 3.5 Sonnet is now in the routing pool. Particularly strong at code generation, technical writing, and instruction-following. The router favors it for automation prompts that require precise output formatting.

July 14, 2025 — Open Beta Launch

Standard Compute is now in open beta. Anyone can sign up, get an API key, and start sending requests. The beta is free — no credit card required. We're looking for feedback on latency, output quality, and the developer experience.

The core idea is simple: unlimited AI compute for a flat price. We handle model selection, infrastructure scaling, and cost optimization so you can focus on building.

June 30, 2025 — Closed Alpha Wrap-Up

The closed alpha has concluded after 12 weeks with 47 teams. Key outcomes: the routing algorithm now handles 14 model variants across 3 providers, average response latency is under 1.2 seconds, and the prompt compaction system is reducing token usage by 12–18% without measurable quality loss.

Every major issue surfaced during alpha has been addressed. We're preparing for open beta.

April 7, 2025 — Closed Alpha Begins

Standard Compute enters closed alpha with a small group of early partners. The initial release supports GPT-4o, Claude 3 Opus, and Gemini 1.5 Pro through a single OpenAI-compatible endpoint.

The goal for this phase is to validate the core routing and compaction systems under real workloads before opening access more broadly.