Methodological rigor is the asset. A weekly publication that is right, week after week, becomes citable infrastructure for the industry. The historical data series itself becomes proprietary and increasingly valuable. Below is the procedure that produces the SVCI, at the level of detail needed to evaluate it.
The provider panel.
Nineteen GPU cloud providers across five tiers, selected to represent the most economically significant participants in the public-rate AI compute market.
| Tier | Type | Providers |
|---|---|---|
| Tier 1 | Hyperscale (measured) | AWS, Microsoft Azure, Google Cloud, Oracle Cloud Infrastructure |
| Tier 2 | Specialized neoclouds (measured) | CoreWeave, Lambda Labs, Nebius, Crusoe, Hyperstack |
| Tier 3 | Aggregator and marketplace (measured) | RunPod, Vast.ai, Paperspace, TensorDock |
| Tier 4 | Cost-positioned neoclouds (measured) | GMI Cloud, Cudo Compute, Spheron |
| Tier 5 | Inference-implied (derived) | Together AI, Fireworks AI, Replicate |
The measurement procedure.
Every Sunday evening, the Director of Operations executes data capture across the panel using a documented Provider Extraction Reference. For each provider, the on-demand hourly rate is captured for each tracked hardware configuration, normalized to a per-GPU-per-hour basis, with the source URL and UTC timestamp logged. Data flows into a structured workbook with automated calculation of medians, ranges, and week-over-week deltas.
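To make the capture concrete, here is a minimal Python sketch of what one record and the weekly roll-up could look like. The field names, the `CaptureRecord` class, and the `weekly_stats` function are illustrative assumptions for this sketch, not the actual SVCI workbook schema.

```python
# Illustrative sketch only: field names and structure are assumptions,
# not the actual SVCI workbook schema.
from dataclasses import dataclass
from datetime import datetime
from statistics import median

@dataclass
class CaptureRecord:
    provider: str             # e.g. "CoreWeave"
    config: str               # e.g. "H100 SXM"
    usd_per_gpu_hour: float   # normalized per-GPU-per-hour on-demand rate
    source_url: str           # public rate page the figure was captured from
    captured_at: datetime     # UTC timestamp of capture

def weekly_stats(records: list[CaptureRecord],
                 prior_median: float | None = None) -> dict:
    """Median, range, and week-over-week delta for one hardware configuration."""
    rates = [r.usd_per_gpu_hour for r in records]
    med = median(rates)
    return {
        "median": med,
        "range": (min(rates), max(rates)),
        "wow_delta": None if prior_median is None else med - prior_median,
    }
```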
Hardware configurations tracked: H100 SXM, H100 PCIe, H200, B200, and A100 80GB. Configurations are added to or removed from the panel only with a published methodology change, disclosed in the issue in which it takes effect.
Where providers list only multi-GPU instance types (commonly 8-GPU nodes), the per-GPU-hour rate is computed by dividing the instance rate by the number of GPUs. This is one of the methodological differences between data sources in this market — most public aggregators do not disclose how they handle the conversion. Sorso View does.
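A minimal sketch of that conversion rule, with hypothetical figures (the $23.92 node rate below is an example, not any provider's actual listing):

```python
# When a provider quotes only an N-GPU instance, the per-GPU-hour rate
# is the instance rate divided by the GPU count. Example figures are
# hypothetical.
def per_gpu_hour(instance_rate_usd: float, gpus_per_instance: int) -> float:
    return instance_rate_usd / gpus_per_instance

# A hypothetical 8x H100 node listed at $23.92/hour normalizes
# to $2.99/GPU-hour.
print(per_gpu_hour(23.92, 8))  # 2.99
```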
The inference-implied tier.
For Tier 5 (Together AI, Fireworks AI, Replicate), token prices are captured and converted to a per-GPU-hour equivalent using a published formula:
Implied $/GPU-hour = (token price per million tokens) × (throughput in tokens/sec) × 3,600 ÷ 1,000,000
Reference model: Llama 3.1 70B in FP16 precision. Reference throughput: 1,000 tokens per second on H100 SXM. These assumptions are published in every issue. Updates to either reference are disclosed in the issue they take effect.
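In code, the conversion looks like the sketch below. The default throughput is the published reference; the $0.90-per-million token price is a hypothetical input for illustration, not any provider's actual rate card.

```python
# Published conversion from token pricing to an implied $/GPU-hour.
# Reference throughput: 1,000 tok/s (Llama 3.1 70B, FP16, H100 SXM).
def implied_gpu_hour(usd_per_million_tokens: float,
                     throughput_tokens_per_sec: float = 1_000.0) -> float:
    return usd_per_million_tokens * throughput_tokens_per_sec * 3_600 / 1_000_000

# A hypothetical $0.90 per million tokens implies $3.24/GPU-hour:
# 0.90 * 1,000 * 3,600 / 1,000,000 = 3.24
print(implied_gpu_hour(0.90))  # 3.24
```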
This is a derived figure, not a measured rate. It is labeled distinctly from the measured tiers in every Index issue. Inference economics operate by a different mechanism than rental economics; the conversion gives buyers a comparable reference, not an equivalent.
What the Index does not measure.
Methodological rigor requires honest scope limits. The Sorso View Compute Index does not measure:
- Private contract pricing. Confidential and customer-specific. Often materially different from posted rates.
- Realized enterprise rates. Derivable from public financial filings but not directly observable at weekly cadence.
- Committed-use and reserved pricing. Terms vary across buyers, contract length, prepayment structure, and capacity guarantees in ways that prevent a stable median.
Major operators that do not publish on-demand rates (including Nscale, IREN, IBM Cloud's enterprise tier, and Fluidstack) are acknowledged in editorial coverage but excluded from the Index calculation. The exclusion is methodological, not editorial. When their pricing surfaces in earnings, regulatory filings, or public statements, it is covered. It does not enter the panel until on-demand rates are publicly posted.
The four commitments.
Every issue of the Index — and every editorial piece around it — meets four published standards:
Sourcing. Every factual claim is traceable to a public source. No anonymous tips. No unsourced estimates. Where an estimate is necessary, it is labeled distinctly.
Consistency. Same methodology, same panel, same definitions, every week. Methodology changes are disclosed in the issue in which they take effect, and historical data is recomputed wherever the change allows prior figures to remain comparable.
Distinction. Measured data and derived data are labeled separately. Readers always know which they are reading. The inference-implied tier is the most prominent example, but the same rule applies throughout.
Correction. When errors occur, they are corrected openly in the next issue, with explanation of what was wrong and how the error occurred. The corrections page maintains a permanent public record. Read the corrections →
What this means for readers.
The procedure above is designed to produce a weekly number that is defensible to a board, citable in a procurement memo, useful in a research note, and credible to a regulator. The methodology is published in advance of any issue using it. The panel is published. The hardware list is published. The inference formula is published. There is no hidden methodology.
The Index does not aspire to capture every transaction in this market. It captures the most transparent and continuously measurable layer of it — the on-demand, publicly quoted, single-instance rate — and reports honestly on what surrounds that layer in editorial coverage. Buyers transacting at scale have a reference point. Analysts have a benchmark. Operators have a record.
The first SVCI publishes the week of our launch. Subscribe to receive it →