
CUDO Compute

Active

Sustainable GPU cloud with competitive pricing and global infrastructure

cudocompute.com · Founded 2020 · London, UK · Verified: 2026-03-06
Ratings

  • Overall: 7.5
  • Ease of Use: 7
  • Pricing: 10
  • GPU Variety: 8
  • Enterprise: 5

GPU Pricing

GPU Model          VRAM    Spot $/hr   On-demand $/hr   Availability
H100 PCIe          80GB    n/a         $2.45            In Stock
H100 SXM           80GB    $1.79       $1.79            In Stock
A100 PCIe          80GB    $1.35       $1.35            In Stock
L40S               48GB    $0.87       $0.87            In Stock
A800 PCIe          80GB    $0.80       $0.80            In Stock
RTX A6000          48GB    n/a         $0.45            In Stock
A40                48GB    n/a         $0.39            In Stock
RTX A5000          24GB    n/a         $0.35            In Stock
V100               32GB    $0.19      $0.19             In Stock
HGX B200           n/a     n/a         n/a              Coming Soon
GB200 NVL72        n/a     n/a         n/a              Coming Soon
H200 SXM           141GB   n/a         n/a              Coming Soon
B100               n/a     n/a         n/a              Coming Soon
RTX 4000 SFF Ada   20GB    n/a         n/a              In Stock
RTX A4000          16GB    n/a         n/a              In Stock
MI250/300          128GB   n/a         n/a              Coming Soon
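
Since billing is per GPU-hour, the rates above translate directly into run costs. A quick back-of-the-envelope estimate (rates taken from the pricing table; the node size and run duration in the example are hypothetical):

```python
# Cost estimate using on-demand rates from the pricing table above.
# Node counts and run durations below are hypothetical examples.
RATES = {  # $/hr per GPU, on-demand
    "H100 SXM": 1.79,
    "A100 PCIe": 1.35,
    "L40S": 0.87,
}

def run_cost(gpu: str, num_gpus: int, hours: float) -> float:
    """Total cost of a run billed per GPU-hour."""
    return RATES[gpu] * num_gpus * hours

# e.g. a 3-day (72-hour) training run on an 8x H100 SXM node:
print(f"${run_cost('H100 SXM', 8, 72):,.2f}")  # → $1,031.04
```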

Features

  • API
  • Docker
  • Kubernetes
  • Multi-GPU
  • Persistent Storage
  • Spot Instances

Not offered: Jupyter, Reserved Instances, SOC 2 compliance (see Cons below)

Billing & Payment

Billing granularity: Per hour
Payment methods: Credit card, crypto

CUDO Compute

CUDO Compute is a London-based GPU cloud that launched in 2020 with a sustainability angle: they aim to source compute from underutilized data centers and renewable energy where possible. That pitch has a real audience, but what actually keeps people coming back is the pricing — CUDO consistently ranks among the most competitively priced providers for H100 SXM and A100 workloads, which is no small feat in a crowded market.

The platform covers a solid range of hardware from entry-level RTX A5000s up through H100 SXM and PCIe variants, with AMD MI250/300 and next-gen Blackwell hardware (B100, HGX B200, GB200 NVL72) listed as coming soon. The depth of their current catalog — nine GPU types with live pricing — makes them a genuine option for teams that want flexibility without jumping between five different providers.

Why CUDO Compute stands out

The headline story is value on high-end hardware. The H100 SXM pricing is particularly striking: it's noticeably cheaper than most comparable providers, and spot pricing is available on that SKU too. If you're running training jobs with some tolerance for interruption, that combination is hard to beat. The A100 PCIe, L40S, A800, and V100 tiers also offer spot pricing, which extends the cost advantage down the stack.

CUDO also supports Kubernetes natively, making it one of the less obvious but genuinely useful choices for teams running orchestrated workloads rather than one-off Jupyter sessions.
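
To illustrate what an orchestrated workload looks like, here is a minimal Kubernetes pod manifest requesting a single GPU, built as a plain Python dict. The manifest shape and the nvidia.com/gpu resource name are standard Kubernetes; the container image is a hypothetical placeholder, and any CUDO-specific node labels or runtime settings are not shown:

```python
import json

# Minimal Kubernetes pod spec requesting one NVIDIA GPU.
# "my-registry/train:latest" is a hypothetical image name;
# CUDO-specific scheduling settings (if any) are omitted.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "train-job"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "trainer",
            "image": "my-registry/train:latest",  # hypothetical
            "command": ["python", "train.py"],
            "resources": {
                # Standard GPU resource name exposed by the
                # NVIDIA device plugin on GPU nodes.
                "limits": {"nvidia.com/gpu": "1"},
            },
        }],
    },
}

print(json.dumps(pod, indent=2))
```

Since kubectl accepts JSON as well as YAML, the printed manifest can be piped straight into `kubectl apply -f -` against a cluster whose worker nodes run on CUDO VMs.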

Pros

  • Among the most competitively priced H100 SXM available anywhere
  • Spot pricing on H100 SXM, A100 PCIe, L40S, A800, and V100
  • Kubernetes support for orchestrated workloads
  • Docker and persistent storage included
  • Crypto payment accepted alongside credit card
  • Good GPU variety across price tiers
  • Sustainability-focused infrastructure sourcing

Cons

  • No Jupyter notebook environment out of the box
  • Not SOC 2 compliant — not suitable for regulated workloads
  • No reserved instances for long-term cost predictability
  • Several high-end GPUs (H200, B100, GB200 NVL72) listed but not yet priced or available
  • Ease-of-use scores suggest the platform has rough edges compared to more polished alternatives
  • Enterprise readiness is limited — probably not the right fit for large procurement teams

Getting started

  1. Visit CUDO Compute and create an account — credit card or crypto both work
  2. Browse the GPU marketplace and filter by the hardware tier you need
  3. Deploy a VM with Docker, or connect your Kubernetes cluster via their API
  4. Enable spot instances on supported SKUs (H100 SXM, A100, L40S) to maximize cost efficiency
  5. Set up persistent storage if your workflow requires data to survive instance restarts
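
Steps 4 and 5 work together: spot instances can be reclaimed mid-run, so training code should checkpoint to persistent storage and resume on restart. A minimal sketch of that pattern (the checkpoint filename and step counts are hypothetical, and a real training loop would save model and optimizer state, not just a step counter):

```python
import json
import os

# Hypothetical checkpoint path; in practice this should live on
# the persistent volume from step 5 so it survives a spot reclaim.
CKPT = "checkpoint.json"

def load_step() -> int:
    """Resume from the last saved step, or start from 0."""
    if os.path.exists(CKPT):
        with open(CKPT) as f:
            return json.load(f)["step"]
    return 0

def save_step(step: int) -> None:
    with open(CKPT, "w") as f:
        json.dump({"step": step}, f)

def train(total_steps: int = 1000, ckpt_every: int = 100) -> int:
    """Run (or resume) a training loop that checkpoints periodically."""
    step = load_step()
    while step < total_steps:
        step += 1  # stand-in for one real training step
        if step % ckpt_every == 0:
            save_step(step)  # survives a spot interruption
    return step
```

If the instance is reclaimed, relaunching the same script picks up from the last checkpoint instead of step 0, which is what makes the spot discount usable for long training runs.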

Best for: ML engineers and researchers running training or inference workloads who prioritize cost efficiency over enterprise polish, especially those running Kubernetes or willing to tolerate spot interruptions in exchange for significantly lower H100 pricing.
