How Standard Compute protects your API keys, data, and requests across every layer of the infrastructure.
Every API key generated through Standard Compute is encrypted at rest using AES-256-GCM, an authenticated encryption algorithm that provides both confidentiality and integrity guarantees. The encryption key is derived per-account and stored separately from the encrypted key material.
Plaintext API keys are shown to the user exactly once — at the moment of creation. After that, only an encrypted blob, an HMAC-SHA256 verification hash, and the last four characters (for identification purposes) are retained. There is no mechanism for us or any administrator to recover the full plaintext key after the creation screen is dismissed.
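The create-once flow above can be sketched as follows. This is an illustrative standard-library sketch, not the actual Standard Compute implementation; the key prefix, record fields, and per-account secret are all assumptions.

```python
import hashlib
import hmac
import secrets

# Hypothetical per-account secret; in the description above it is derived
# per-account and stored separately from the encrypted key material.
ACCOUNT_HMAC_KEY = secrets.token_bytes(32)

def create_api_key() -> tuple[str, dict]:
    """Generate a key; return the plaintext once plus the record we retain."""
    plaintext = "sc_" + secrets.token_urlsafe(32)  # "sc_" prefix is illustrative
    record = {
        # HMAC-SHA256 of the key, used later to authenticate requests
        "hmac": hmac.new(ACCOUNT_HMAC_KEY, plaintext.encode(),
                         hashlib.sha256).hexdigest(),
        "last4": plaintext[-4:],  # shown in dashboards for identification
        # the AES-256-GCM encrypted blob would also live in this record
    }
    return plaintext, record

key, record = create_api_key()
# `key` is displayed exactly once; only `record` is ever persisted.
```

Nothing in `record` allows reconstruction of the plaintext, which is why the key cannot be recovered after the creation screen is dismissed.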
When an API request arrives, we validate it by computing the HMAC-SHA256 of the submitted key and comparing it against the stored verification hash for the account. This allows us to authenticate requests without ever decrypting the stored key during normal operation.
The HMAC comparison uses a constant-time equality check to prevent timing-based side-channel attacks. Failed authentication attempts are rate-limited to mitigate brute-force key guessing.
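A minimal sketch of this validation path, assuming a simple in-memory failure counter for the rate limit (the real thresholds and storage are not documented here):

```python
import hashlib
import hmac
import time
from collections import defaultdict

ACCOUNT_HMAC_KEY = b"\x00" * 32  # placeholder; a per-account secret in practice
STORED_HMAC = hmac.new(ACCOUNT_HMAC_KEY, b"sc_example_key",
                       hashlib.sha256).digest()

_failures: dict[str, list[float]] = defaultdict(list)
MAX_FAILURES, WINDOW_SECONDS = 5, 60.0  # illustrative limits only

def authenticate(account: str, submitted_key: bytes) -> bool:
    now = time.monotonic()
    # Keep only failures inside the sliding window.
    recent = [t for t in _failures[account] if now - t < WINDOW_SECONDS]
    _failures[account] = recent
    if len(recent) >= MAX_FAILURES:
        return False  # rate-limited: too many recent failed attempts
    candidate = hmac.new(ACCOUNT_HMAC_KEY, submitted_key,
                         hashlib.sha256).digest()
    # compare_digest runs in constant time, defeating timing side channels.
    if hmac.compare_digest(candidate, STORED_HMAC):
        return True
    _failures[account].append(now)
    return False
```

The important details are the two defenses working together: `hmac.compare_digest` prevents an attacker from learning anything from response timing, and the failure counter caps how many guesses are possible per window.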
All communication between your automation platform and the Standard Compute API occurs over TLS 1.2 or higher at https://api.stdcmpt.com/v1. Plaintext HTTP connections are rejected — we do not downgrade or redirect; the connection is simply refused.
Our TLS configuration follows Mozilla's "Intermediate" compatibility profile, supporting modern cipher suites while maintaining compatibility with current versions of n8n, Make, Zapier, and all major HTTP client libraries.
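If you are writing a client by hand rather than using an automation platform, you can mirror this floor on your side. A standard-library sketch (the endpoint hostname is from the docs above; everything else is ordinary `ssl` configuration):

```python
import ssl

# Enforce TLS 1.2+ on the client, matching the server-side requirement.
ctx = ssl.create_default_context()  # secure defaults: cert + hostname checks
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse anything older

# Usable with any stdlib HTTPS client, e.g.:
#   conn = http.client.HTTPSConnection("api.stdcmpt.com", context=ctx)
#   conn.request("GET", "/v1/...", headers={"Authorization": "Bearer sc_..."})
```

Modern versions of `requests`, `httpx`, and the HTTP clients bundled with n8n, Make, and Zapier already negotiate TLS 1.2+ by default, so this is usually only needed for unusual embedded environments.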
Our database layer enforces row-level security (RLS) policies so that every query is scoped to the authenticated user's own data. Even if an application-level bug were to construct an overly broad query, the database engine itself would filter results to only the rows belonging to the requesting account.
RLS policies are defined declaratively in the database schema, reviewed during every migration, and tested with automated integration tests that verify cross-account data isolation.
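The shape of those isolation tests can be sketched in miniature. SQLite has no row-level security, so the scoped helper below only emulates the effect of a Postgres-style RLS policy; the schema and account names are hypothetical:

```python
import sqlite3

# Toy dataset: usage rows for two accounts.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE requests (account TEXT, prompt_tokens INT)")
db.executemany("INSERT INTO requests VALUES (?, ?)",
               [("alice", 120), ("alice", 80), ("bob", 300)])

def scoped_query(account: str, sql: str) -> list:
    """Emulates RLS: every result is filtered to the requesting account,
    regardless of how broad the submitted query is."""
    rows = db.execute(sql).fetchall()
    return [r for r in rows if r[0] == account]

# An integration test submits a deliberately over-broad query and asserts
# that only the caller's own rows come back.
leaky_sql = "SELECT account, prompt_tokens FROM requests"
```

In production this filtering happens inside the database engine (e.g. a Postgres `CREATE POLICY ... USING (account_id = current_account())` clause), which is exactly why an application-level bug cannot widen the result set.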
Standard Compute does not log, store, or inspect the content of prompts you send or responses you receive through the API. Request metadata — such as timestamp, model identifier, and token count — is recorded for usage tracking and fair-use enforcement, but the actual payload is never written to disk or retained in memory beyond the lifetime of the request.
This applies to all plans. There is no analytics or debugging mode that enables prompt logging. If you need to debug prompt content, you should capture it on your side before it reaches our API.
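One way to picture the metadata-only guarantee: the usage record has no field that could carry a prompt or response. A hypothetical sketch of such a record (field names are illustrative, not our actual schema):

```python
import time
from dataclasses import dataclass

@dataclass(frozen=True)
class UsageRecord:
    """Everything retained about a request: metadata only."""
    timestamp: float
    model: str
    prompt_tokens: int
    completion_tokens: int

def record_usage(model: str, prompt_tokens: int,
                 completion_tokens: int) -> UsageRecord:
    # Note what is absent: the prompt and response bodies are never passed
    # in, so they cannot appear in any log line or database row.
    return UsageRecord(time.time(), model, prompt_tokens, completion_tokens)
```

Because the record type is frozen and payload-free by construction, adding prompt logging would require a schema change, not a configuration flag — which is the point of there being no debugging mode that enables it.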
Requests routed to upstream LLM providers (such as OpenAI, Anthropic, and Google) are transmitted over provider-specific TLS connections using dedicated service credentials. Each provider connection is isolated — credentials for one provider cannot be used to access another.
We evaluate the security posture of each upstream provider before integration, including their data handling policies, SOC 2 compliance status, and whether they use customer data for model training. We only integrate providers that commit to not training on API traffic by default.
If you discover a security vulnerability in Standard Compute, we encourage you to report it responsibly. Please email contact@standardcompute.com with a detailed description of the vulnerability, steps to reproduce it, and any supporting evidence.
We commit to acknowledging receipt within 2 business days, providing an initial assessment within 7 business days, and keeping you informed of remediation progress. We will not take legal action against researchers who report vulnerabilities in good faith and do not access other users' data or disrupt the service.