# Compare

Independent GPU cloud pricing, updated daily. 9 providers, 169+ GPUs compared. Free, no signup.

- 9 GPU Providers
- 169 GPU Models
- 346 LLM Models
- Daily Price Updates
| GPU Model | VRAM | Providers | From |
|---|---|---|---|
| H100 SXM 80GB | 80GB | 7 | $1.79 /hr |
| H200 SXM 141GB | 141GB | 7 | $2.30 /hr |
| Blackwell B200 | 192GB | 4 | $4.69 /hr |
| L4 | 24GB | 3 | $0.17 /hr |
| RTX A5000 | 24GB | 3 | $0.27 /hr |
| A100 80GB | 80GB | 2 | $1.29 /hr |
| A100 PCIe 80GB | 80GB | 2 | $1.35 /hr |
| A40 | 48GB | 2 | $0.39 /hr |
| Model | Developer | Params | Min VRAM | Compatible GPUs |
|---|---|---|---|---|
| DeepSeek-R1-0528-Qwen3-8B | deepseek-ai | 8.19B | 5.4GB | 163 |
| gemma-1.1-7b-it | google | 8.54B | 5.63GB | 163 |
| GLM-4.7-Flash-MLX-8bit | lmstudio-community | 8.43B | 5.57GB | 163 |
| Hermes-2-Pro-Llama-3-8B | NousResearch | 8.03B | 5.3GB | 163 |
| Hermes-2-Theta-Llama-3-8B | NousResearch | 8.03B | 5.3GB | 163 |
| Hermes-3-Llama-3.1-8B | NousResearch | 8.03B | 5.3GB | 163 |
| LFM2-8B-A1B | LiquidAI | 8.34B | 5.5GB | 163 |
| Llama 3.1 8B | Meta | 8.00B | 5.5GB | 163 |
| Llama-3.1-8B-Instruct-FP8 | nvidia | 8.03B | 5.3GB | 163 |
| Llama-3.1-Tulu-3-8B-SFT | allenai | 8.03B | 5.3GB | 163 |
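The Min VRAM column tracks parameter count closely: across the 8B-class models above the ratio is a steady ~0.66 GB per billion parameters, consistent with 4-bit quantized weights plus roughly 32% headroom for KV cache and activations. A sketch of such an estimate (nodepedia's exact formula isn't published; the constants below are assumptions fitted to the table):

```python
def min_vram_gb(params_billion: float,
                bits_per_weight: int = 4,
                overhead: float = 1.32) -> float:
    """Rough minimum inference VRAM: quantized weight size plus
    headroom for KV cache and activations. The bit width and
    overhead factor are illustrative assumptions, not an
    official formula."""
    weight_gb = params_billion * bits_per_weight / 8  # 4-bit -> 0.5 GB per B params
    return round(weight_gb * overhead, 1)

# Examples matching rows in the table above
print(min_vram_gb(8.19))  # DeepSeek-R1-0528-Qwen3-8B -> 5.4
print(min_vram_gb(8.03))  # Llama-3.1-8B variants     -> 5.3
```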
| GPU Model | VRAM | Atlas Cloud | Bitdeer AI | Clore.ai | CUDO Compute | GMI Cloud | Hostrunway | Jarvis Labs | Nebius AI | RunPod |
|---|---|---|---|---|---|---|---|---|---|---|
| H100 SXM 80GB | 80GB | $2.95* | $2.36* | — | $1.79 | $2.10* | — | $2.99* | $2.00* | $2.69* |
| H200 SXM 141GB | 141GB | $3.50* | $2.51* | — | — | $2.50* | — | $30.40* | $2.30* | $4.31* |
| Blackwell B200 | 192GB | — | $4.69* | — | — | — | — | — | — | $4.99* |
| L4 | 24GB | — | $0.17* | — | — | — | $0.99* | — | — | $0.39* |
| RTX A5000 | 24GB | — | — | $0.04* | $0.35* | — | — | $0.49* | — | $0.27* |
| A100 80GB | 80GB | — | $1.50* | — | — | — | — | $1.29* | — | — |
| A100 PCIe 80GB | 80GB | — | — | — | $1.35 | — | — | — | — | $1.39* |
| A40 | 48GB | — | — | — | $0.39* | — | — | — | — | $0.40* |
| RTX A6000 | 48GB | — | — | — | — | — | — | $0.79* | — | $0.86 |
| RTX 3090 | 24GB | — | $0.70* | $0.03* | — | — | — | — | — | $0.46* |
| RTX 4090 | 24GB | — | $1.10* | $0.07* | — | — | — | — | — | $0.59* |
| H100 PCIe 80GB | 80GB | — | — | — | $2.45* | — | — | — | — | $2.39* |
| L40S 48GB | 48GB | — | — | — | $0.87 | — | — | — | — | $0.86* |
| RTX 6000 Ada Generation | 48GB | — | — | — | — | — | — | $0.99* | — | $0.77* |
| RTX A6000 | 48GB | — | — | — | $0.45* | — | — | — | — | $0.49* |
| Tesla V100 | 32GB | — | $0.45* | — | $0.19 | — | — | — | — | — |
| AMD Instinct MI250X | 128GB | — | — | — | — | — | — | — | — | — |
| A100 40GB | 40GB | — | $1.10* | — | — | — | — | — | — | — |
| A30 | 24GB | — | — | — | — | — | $1.05* | — | — | — |
| A4000 | 16GB | — | — | — | — | — | — | — | — | $0.40 |
All prices are per hour (USD). Spot prices are shown where available; prices marked with * are on-demand. The lowest price in each row is highlighted.
GPU cloud pricing changes daily, and spot prices fluctuate hourly. The same GPU can vary by 2–3x between providers depending on availability and billing type. As of March 2026, nodepedia tracks real pricing across 9 GPU cloud providers and 169 GPU models so you can find the cheapest option for your workload; try the Cost Calculator to estimate your spend.
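The 2–3x gap compounds quickly on multi-GPU training runs. A minimal version of the comparison a cost calculator automates, using the on-demand H100 SXM 80GB rates from the table above (the helper function here is illustrative, not nodepedia's actual tool):

```python
# Hourly on-demand H100 SXM 80GB rates from the table above (USD/hr)
h100_rates = {
    "CUDO Compute": 1.79,
    "Nebius AI": 2.00,
    "GMI Cloud": 2.10,
    "Bitdeer AI": 2.36,
    "RunPod": 2.69,
}

def training_cost(rates: dict, gpus: int, hours: float) -> dict:
    """Total spend per provider for a multi-GPU run, cheapest first."""
    costs = {p: round(r * gpus * hours, 2) for p, r in rates.items()}
    return dict(sorted(costs.items(), key=lambda kv: kv[1]))

# 8x H100 for a 72-hour run: the cheapest-vs-priciest gap is real money
for provider, cost in training_cost(h100_rates, gpus=8, hours=72).items():
    print(f"{provider}: ${cost:,.2f}")
```

For this run, the spread between the cheapest and most expensive listed provider is over $500 on a ~$1,000 job.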
An AI agent extracts pricing from provider websites daily; no data is self-reported by providers. Every price is pulled directly from the source, validated against historical patterns, and flagged if it looks anomalous. You get the same prices you'd see if you visited each provider yourself, just all in one place.
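One simple way such validation can work (a sketch under assumed thresholds, not nodepedia's actual pipeline): compare each newly scraped price against the median of that GPU/provider pair's recent history and flag large deviations for review.

```python
from statistics import median

def is_anomalous(new_price: float, history: list[float],
                 tolerance: float = 0.5) -> bool:
    """Flag a scraped price that deviates more than `tolerance`
    (as a fraction) from the median of recent observations.
    The threshold and window size are illustrative assumptions."""
    if len(history) < 3:              # too little data to judge
        return False
    baseline = median(history[-30:])  # last ~30 daily scrapes
    return abs(new_price - baseline) / baseline > tolerance

# A sudden jump from ~$2.30/hr to $30.40/hr gets flagged for review
print(is_anomalous(30.40, [2.30, 2.35, 2.30, 2.28]))  # True
print(is_anomalous(2.45, [2.30, 2.35, 2.30, 2.28]))   # False
```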
ML engineers comparing cloud options for training runs. Startups evaluating which GPU provider fits their budget. Researchers who need a specific GPU and want to find the lowest price. Anyone renting cloud GPUs who wants to stop overpaying by checking one site instead of a dozen.