Unified model & compute settlement platform
StellarComputerX unifies model access, GPU compute, routing, and Compute Credit settlement in one product platform. One API key. One base URL. Platform-hosted open-source routes that can graduate into reserved capacity.
Unified API
OpenAI-compatible access layer
One integration surface for model traffic.
Hosted supply
Approved open-model deployments
Capacity is operated, not just listed.
Credit ledger
Usage and reservations reconciled
Settlement stays attached to traffic.
Unified runtime contract
One endpoint. Multiple hosted routes.
Base URL
api.stellarcomputerx.com/v1
Auth
One API key across all hosted models
Routing
Alias-first, provider-agnostic, failover-ready
Settlement
Compute Credit attached to usage and reservations
Live request
`scx/llama-3.3-70b-instruct` routed to warm hosted capacity
Observed output
Latency, tokens, and Credit posted to the usage ledger
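The contract above can be sketched as a request builder. This is a minimal illustration, not SDK documentation: it assumes the standard OpenAI chat-completions request shape, and the API key shown is a placeholder.

```python
# Illustrative sketch of the unified runtime contract: one base URL,
# one Bearer key, OpenAI-compatible request body. Nothing is sent here.
BASE_URL = "https://api.stellarcomputerx.com/v1"
API_KEY = "scx-example-key"  # placeholder, not a real credential

def build_chat_request(model: str, prompt: str) -> tuple[str, dict, dict]:
    """Build (url, headers, payload) for an OpenAI-compatible chat call."""
    url = f"{BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {API_KEY}",  # one key across all hosted models
        "Content-Type": "application/json",
    }
    payload = {
        "model": model,  # alias stays stable while supply evolves behind it
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, headers, payload

url, headers, payload = build_chat_request("scx/llama-3.3-70b-instruct", "Hello")
```

Because the shape is OpenAI-compatible, existing OpenAI client libraries should work by pointing their base URL at the endpoint above.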
01
Integrate once
Use the OpenAI-compatible base URL and issue one scoped key per environment.
02
Route by alias
Keep customer-facing model names stable while supply and fallback lanes evolve.
03
Operate supply
Run open models on vetted GPU providers, warm pools, or reserved deployments.
04
Settle usage
Attach token usage, reservations, and invoices to one Compute Credit ledger.
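Step 02 above (route by alias) can be sketched as a routing table that keeps the customer-facing model name fixed while supply lanes change behind it. The lane names and health-check mechanism here are hypothetical, for illustration only.

```python
# Hypothetical routing table: one stable customer-facing alias maps to an
# ordered list of supply lanes (warm pools first, reserved capacity as fallback).
ROUTES = {
    "scx/llama-3.3-70b-instruct": ["warm-pool-a", "warm-pool-b", "reserved-1"],
}

def resolve(alias: str, healthy: set[str]) -> str:
    """Return the first healthy lane for an alias; the alias itself never changes."""
    for lane in ROUTES[alias]:
        if lane in healthy:
            return lane
    raise RuntimeError(f"no healthy lane for {alias}")
```

If `warm-pool-a` is drained, traffic fails over to the next lane with no change to the API call the customer makes.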
Routing, supply, and settlement
Unlike simple API aggregators, StellarComputerX operates model routes, hosted supply, and settlement accounting as one platform layer. That means teams can integrate once while keeping runtime control, billing visibility, and production posture in the same surface.
API contract
One endpoint, one key model, OpenAI-compatible request shape.
Supply posture
Approved model templates run on observable GPU capacity.
Operating control
Routing policy, evaluation evidence, and spend guardrails live together.
Platform control objects
Model registry
Open-source routes, context windows, and deployment posture all surface inside one contract.
Routing engine
Traffic can move between warm hosted supply and dedicated pools without changing the customer-facing API.
Credit ledger
Usage, reservations, and billing are visible as one operating layer instead of separate tools.
Enterprise posture
Procurement, dedicated capacity, and reliability reporting fit the same product story.
Compute Credit
Compute Credits work like an infrastructure budget line, not a token gimmick. Teams can view spend, reserve capacity, and reconcile real usage against a standard operating budget.
Settlement map
Usage → Credits → Invoice
Hosted open-source supply
| Model | Context | Latency profile | Price (Credits / 1K output tokens) | Status |
|---|---|---|---|---|
| scx/deepseek-r1-32b | 128K | Balanced | 1.45 | Live |
| scx/deepseek-r1-70b | 128K | Deep | 1.95 | Live |
| scx/deepseek-v3 | 128K | Fast | 0.82 | Live |
| scx/qwen3-235b-a22b | 128K | Deep | 2.20 | Live |
Production inference
Built for routing policies, reserved capacity, procurement, and clear operating posture instead of vague AI platform promises.
No minimums
Start small, scale seamlessly.
Volume discounts
The more you use, the less you pay.
Reserved capacity
Guaranteed capacity for critical workloads.
Enterprise support
SLA-backed support and architecture guidance.