Unified model & compute settlement platform

One API. Every model. Real infrastructure.

StellarComputerX unifies model access, GPU compute, routing, and Compute Credit settlement in one product platform. One API key. One base URL. Platform-hosted open-source routes that can graduate into reserved capacity.

Unified API

OpenAI-compatible access layer

One integration surface for model traffic.

Hosted supply

Approved open-model deployments

Capacity is operated, not just listed.

Credit ledger

Usage and reservations reconciled

Settlement stays attached to traffic.

Unified runtime contract

One endpoint. Multiple hosted routes.

Production
OpenAI-compatible
One API key

Base URL

api.stellarcomputerx.com/v1

01

Auth

One API key across all hosted models

02

Routing

Alias-first, provider-agnostic, failover ready

03

Settlement

Compute Credit attached to usage and reservations

04

Live request

`scx/llama-3.3-70b-instruct` routed to warm hosted capacity

Observed output

Latency, tokens, and Credit posted to the usage ledger
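The live request above follows the standard OpenAI chat-completions shape against the platform base URL. A minimal Python sketch of the request assembly, assuming only what the docs state (base URL, one bearer key, the `scx/llama-3.3-70b-instruct` alias); the key value is a placeholder:

```python
import json

# OpenAI-compatible request assembly for a hosted route.
# Base URL and model alias come from the docs above; the key is a placeholder.
BASE_URL = "https://api.stellarcomputerx.com/v1"
API_KEY = "scx-..."  # one scoped key per environment

def build_request(model: str, prompt: str) -> tuple[str, dict, bytes]:
    """Return (url, headers, body) for a chat-completions call."""
    url = f"{BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return url, headers, body

url, headers, body = build_request("scx/llama-3.3-70b-instruct", "Hello")
```

Because the request shape is OpenAI-compatible, any existing OpenAI SDK pointed at this base URL would produce the same wire format.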

01

Integrate once

Use the OpenAI-compatible base URL and issue one scoped key per environment.

02

Route by alias

Keep customer-facing model names stable while supply and fallback lanes evolve.

03

Operate supply

Run open models on vetted GPU providers, warm pools, or reserved deployments.

04

Settle usage

Attach token usage, reservations, and invoices to one Compute Credit ledger.
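Steps 02 and 03 above imply an alias-first routing table: the customer-facing model name stays fixed while the supply lanes behind it change. A hypothetical sketch of that idea (lane names and the health set are illustrative, not the platform's actual routing config):

```python
# Alias-first routing: each customer-facing alias maps to an ordered
# list of supply lanes; traffic fails over to the next healthy lane
# without the alias ever changing.
ROUTES = {
    "scx/llama-3.3-70b-instruct": ["warm-pool-a", "warm-pool-b", "reserved-1"],
}

def pick_lane(alias: str, healthy: set[str]) -> str:
    """Return the first healthy lane for an alias, preserving failover order."""
    for lane in ROUTES[alias]:
        if lane in healthy:
            return lane
    raise RuntimeError(f"no healthy lane for {alias}")

# Primary warm pool down: traffic moves, the customer-facing name does not.
lane = pick_lane("scx/llama-3.3-70b-instruct", {"warm-pool-b", "reserved-1"})
# lane == "warm-pool-b"
```

Keeping the alias stable is what lets supply graduate from warm pools into reserved deployments with no customer-side change.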

Routing, supply, and settlement

Routing, supply, and settlement. Unified.

Unlike simple API aggregators, StellarComputerX operates model routes, hosted supply, and settlement accounting as one platform layer. That means teams can integrate once while keeping runtime control, billing visibility, and production posture on the same surface.

API contract

One endpoint, one key model, OpenAI-compatible request shape.

Supply posture

Approved model templates run on observable GPU capacity.

Operating control

Routing policy, evaluation evidence, and spend guardrails live together.

Platform control objects

Model registry

Open-source routes, context windows, and deployment posture all surface inside one contract.

Routing engine

Traffic can move between warm hosted supply and dedicated pools without changing the customer-facing API.

Credit ledger

Usage, reservations, and billing are visible as one operating layer instead of separate tools.

Enterprise posture

Procurement, dedicated capacity, and reliability reporting fit the same product story.

Compute Credit

Settlement that moves at inference speed.

Compute Credits behave like infrastructure finance, not a token gimmick. Teams can view spend, reserve capacity, and reconcile real usage into a standard operating budget.

  • Real-time token settlement across routed traffic
  • Reserved capacity accounting for critical workloads
  • Budget visibility for engineering and procurement
  • One ledger spanning API usage and dedicated deployments
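The bullets above describe one ledger spanning usage and reservations. A minimal sketch of that reconciliation, assuming per-event Credit amounts (event kinds and field names are illustrative):

```python
from collections import defaultdict

# One Compute Credit ledger: usage events and reservation commitments
# post to the same account, then roll up into a single invoice total.
def reconcile(events: list[dict]) -> dict[str, float]:
    totals = defaultdict(float)
    for e in events:
        totals[e["kind"]] += e["credits"]
    totals["invoice"] = totals["usage"] + totals["reservation"]
    return dict(totals)

ledger = reconcile([
    {"kind": "usage", "credits": 41.0},        # routed token traffic
    {"kind": "usage", "credits": 9.5},
    {"kind": "reservation", "credits": 120.0}, # dedicated capacity commitment
])
# ledger["invoice"] == 170.5
```

The point of the single ledger is exactly this shape: API usage and dedicated deployments land in one place, so finance exports and provider payouts draw from the same totals.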

Settlement map

Usage → Credits → Invoice

Control plane
Token usage ledger · Usage events
Reserved capacity · Commitments
Invoice export · Finance records
Provider payout · Supply side

Hosted open-source supply

Hosted open-source supply, ready on demand.

View all models
Model | Context | TTFT | Price | Status
scx/deepseek-r1-32b | 128K | Balanced | 1.45 Credit / 1K output tokens | Live
scx/deepseek-r1-70b | 128K | Deep | 1.95 Credit / 1K output tokens | Live
scx/deepseek-v3 | 128K | Fast | 0.82 Credit / 1K output tokens | Live
scx/qwen3-235b-a22b | 128K | Deep | 2.20 Credit / 1K output tokens | Live
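With per-1K-output-token pricing, per-request cost is a direct multiplication. A quick sketch using the rates listed above (token counts are illustrative):

```python
# Credit price per 1K output tokens, copied from the model table.
PRICES = {
    "scx/deepseek-r1-32b": 1.45,
    "scx/deepseek-r1-70b": 1.95,
    "scx/deepseek-v3": 0.82,
    "scx/qwen3-235b-a22b": 2.20,
}

def credits_for(model: str, output_tokens: int) -> float:
    """Credits charged for a completion's output tokens."""
    return output_tokens / 1000 * PRICES[model]

# 50,000 output tokens on scx/deepseek-v3 at 0.82 Credit / 1K: ~41 Credits.
cost = credits_for("scx/deepseek-v3", 50_000)
```

The same function applied against reserved-capacity rates would feed the usage side of the Credit ledger.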

Production inference

Infrastructure-grade inference, from day one.

Built for routing policies, reserved capacity, procurement, and clear operating posture instead of vague AI platform promises.

No minimums

Start small, scale seamlessly.

Volume discounts

The more you use, the less you pay.

Reserved capacity

Guaranteed capacity for critical workloads.

Enterprise support

SLA-backed support and architecture guidance.