

Independent GPU cloud pricing, updated daily. 9 providers, 169+ GPUs compared. Free, no signup.

9 GPU Providers · 169 GPU Models · 346 LLM Models · Daily Price Updates

H100 SXM 80GB $1.79/hr (CUDO Compute) | H200 SXM 141GB $2.30/hr (Nebius AI) | RTX A5000 $0.04/hr (Clore.ai) | Blackwell B200 $4.69/hr (Bitdeer AI) | GeForce RTX 3090 $0.03/hr (Clore.ai) | GeForce RTX 4090 $0.07/hr (Clore.ai)

Popular GPUs

All GPUs →
| GPU Model | VRAM | Providers | From |
| --- | --- | --- | --- |
| H100 SXM 80GB | 80GB | 7 | $1.79/hr |
| H200 SXM 141GB | 141GB | 7 | $2.30/hr |
| Blackwell B200 | 192GB | 4 | $4.69/hr |
| L4 | 24GB | 3 | $0.17/hr |
| RTX A5000 | 24GB | 3 | $0.27/hr |
| A100 80GB | 80GB | 2 | $1.29/hr |
| A100 PCIe 80GB | 80GB | 2 | $1.35/hr |
| A40 | 48GB | 2 | $0.39/hr |

GPU Cloud Pricing

All Prices →
Providers tracked: Atlas Cloud · Bitdeer AI · Clore.ai · CUDO Compute · GMI Cloud · Hostrunway · Jarvis Labs · Nebius AI · RunPod

| GPU Model | VRAM | Prices |
| --- | --- | --- |
| H100 SXM 80GB | 80GB | $2.95*, $2.36*, $1.79, $2.10*, $2.99*, $2.00*, $2.69* |
| H200 SXM 141GB | 141GB | $3.50*, $2.51*, $2.50*, $30.40*, $2.30*, $4.31* |
| Blackwell B200 | 192GB | $4.69*, $4.99* |
| L4 | 24GB | $0.17*, $0.99*, $0.39* |
| RTX A5000 | 24GB | $0.04*, $0.35*, $0.49*, $0.27* |
| A100 80GB | 80GB | $1.50*, $1.29* |
| A100 PCIe 80GB | 80GB | $1.35, $1.39* |
| A40 | 48GB | $0.39*, $0.40* |
| RTX A6000 | 48GB | $0.79*, $0.86 |
| RTX 3090 | 24GB | $0.70*, $0.03*, $0.46* |
| RTX 4090 | 24GB | $1.10*, $0.07*, $0.59* |
| H100 PCIe 80GB | 80GB | $2.45*, $2.39* |
| L40S 48GB | 48GB | $0.87, $0.86* |
| RTX 6000 Ada Generation | 48GB | $0.99*, $0.77* |
| RTX A6000 | 48GB | $0.45*, $0.49* |
| Tesla V100 | 32GB | $0.45*, $0.19 |
| AMD Instinct MI250X | 128GB | — |
| A100 40GB | 40GB | $1.10* |
| A30 | 24GB | $1.05* |
| A4000 | 16GB | $0.40 |

All prices are per hour. Spot prices shown where available — prices marked with * are on-demand. The lowest price in each row is highlighted.

Data verified daily · 9 GPU cloud providers · 169 GPU models · Independent pricing

Why Compare GPU Cloud Pricing?

GPU cloud pricing changes daily. Spot prices fluctuate hourly. The same GPU can vary by 2–3x between providers depending on availability and billing type. As of March 2026, nodepedia tracks real pricing across 9 GPU cloud providers and 169 GPU models so you can find the cheapest option for your workload — try the Cost Calculator to estimate your spend.


How Data Is Collected

An AI agent extracts pricing from provider websites daily. No data is self-reported by providers. Every price is pulled directly from the source, validated against historical patterns, and flagged if it looks anomalous. You get the same prices you'd see if you visited each provider yourself — just all in one place.
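The validation step described above can be sketched in a few lines. This is an illustrative example, not nodepedia's actual pipeline: the function name, the rolling-median baseline, and the ±50% threshold are all assumptions.

```python
# Sketch of price validation: flag a freshly scraped price if it deviates
# sharply from its recent history. Threshold and baseline are assumptions,
# not nodepedia's actual rules.
from statistics import median

def is_anomalous(new_price: float, history: list[float], tolerance: float = 0.5) -> bool:
    """Flag if the new price is more than ±50% off the historical median."""
    if not history:
        return False  # no history yet -> nothing to compare against
    baseline = median(history)
    return abs(new_price - baseline) / baseline > tolerance

print(is_anomalous(1.79, [1.85, 1.79, 1.82]))  # False: within normal range
print(is_anomalous(30.4, [2.50, 2.45, 2.55]))  # True: likely a scrape error
```

A flagged price would then be held back for review rather than published, which is one way "validated against historical patterns" can work in practice.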

Who Uses nodepedia?

ML engineers comparing cloud options for training runs. Startups evaluating which GPU provider fits their budget. Researchers who need a specific GPU and want to find the lowest price. Anyone renting cloud GPUs who wants to stop overpaying by checking one site instead of a dozen.

Frequently Asked Questions

What is the cheapest GPU cloud provider?
Pricing changes daily and depends on the GPU model, billing type (spot vs on-demand), and availability. There is no single cheapest provider — it varies by workload. Use the Cost Calculator to compare current pricing across all 9 providers tracked by nodepedia.
How much does it cost to rent an H100 GPU?
H100 pricing varies significantly by provider and billing type. Spot instances are cheaper but can be interrupted, while on-demand instances guarantee availability at a premium. Check the H100 pricing page for current rates across all tracked providers.
What GPU do I need to run an LLM locally?
It depends on the model size, quantization level, and whether you need training or inference. A 7B parameter model at Q4 quantization fits on a 6 GB GPU, while a 70B model may need 40+ GB of VRAM. Use the Workload Recommender to match your model to compatible GPUs.
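The arithmetic behind those numbers can be sketched as follows. This is a rough rule of thumb, not the Workload Recommender's formula: the 1.2x overhead factor for KV cache and activations is an assumption.

```python
# Rough VRAM estimate for LLM inference: weights take params * bits/8 bytes,
# plus headroom for KV cache and activations (the 1.2x factor is an assumption).
def vram_gb(params_billions: float, bits: int = 4, overhead: float = 1.2) -> float:
    weights_gb = params_billions * bits / 8  # billions of params -> GB of weights
    return round(weights_gb * overhead, 1)

print(vram_gb(7))   # 7B at Q4  ~4.2 GB -> fits a 6 GB GPU
print(vram_gb(70))  # 70B at Q4 ~42 GB  -> needs 40+ GB of VRAM
```

Training needs far more memory than this (gradients and optimizer state), which is why the answer above distinguishes training from inference.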
How does nodepedia collect pricing data?
An AI agent visits provider pricing pages daily and extracts current rates automatically. Prices are not self-reported by providers. Every data point is pulled directly from the source, validated against historical patterns, and flagged if anomalous.
Can I compare GPU cloud providers side by side?
Yes. nodepedia offers head-to-head comparisons for every provider pair, covering pricing, GPU availability, and billing options. You can also build custom comparisons with the Comparison Builder tool.
What is the difference between spot and on-demand GPU pricing?
Spot instances use spare GPU capacity at a discount but can be interrupted when demand rises. On-demand instances guarantee availability at a higher price. Spot pricing can be 50–80% cheaper, making it ideal for fault-tolerant workloads like training with checkpoints. The Cost Calculator shows both pricing types for easy comparison.
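The trade-off above can be made concrete with a small calculation. The 65% spot discount and 10% wasted work from interruptions are illustrative assumptions, not measured figures.

```python
# Compare training spend on spot vs on-demand. Discount and interruption
# overhead are illustrative assumptions, not real provider figures.
def training_cost(gpu_hours: float, on_demand_rate: float,
                  spot_discount: float = 0.65, waste: float = 0.10) -> dict:
    on_demand = gpu_hours * on_demand_rate
    # Spot pays the discounted rate, but interruptions redo ~waste of the work,
    # which checkpointing keeps small.
    spot = gpu_hours * (1 + waste) * on_demand_rate * (1 - spot_discount)
    return {"on_demand": round(on_demand, 2), "spot": round(spot, 2)}

print(training_cost(1000, 2.69))  # e.g. 1000 GPU-hours at $2.69/hr on-demand
```

Even after paying for redone work, spot comes out well ahead here, which is why checkpointed training is the canonical spot workload.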
Is nodepedia free to use?
Yes. All pricing data, tools, and guides on nodepedia are completely free with no signup required. All data is independently collected — rankings and pricing are never influenced by providers.