
TensorDock

Active

Affordable GPU cloud with bare-metal VMs for ML and rendering workloads

tensordock.com · Founded 2021 · Boston, MA · Verified: 2026-05-10
Overall: 8.25
Ease of Use: 8
Pricing: 8
GPU Variety: 9
Enterprise: 8

GPU Pricing

GPU Model           VRAM    $/hr     Availability
H100 SXM5 80GB      80GB    $52.12   In Stock
A100 SXM4 80GB      80GB    $1.80    In Stock
A100 PCIe 80GB      80GB    $1.50    In Stock
V100 SXM2 16GB      16GB    $1.23    In Stock
L40 48GB            48GB    $1.07    In Stock
RTX 4090            24GB    $0.25    In Stock
RTX 3090            24GB    $0.19    In Stock
RTX 6000 Ada        48GB    $0.70    In Stock
RTX A6000           48GB    $0.40    In Stock
RTX A4000           16GB    $0.07    In Stock
A100                80GB    $0.85    In Stock
A100                40GB    $0.85    In Stock
V100 SXM2 16GB      16GB    $0.50    In Stock
H100 PCIe 80GB      80GB    $1.99    In Stock
Tesla V100          32GB    $0.19    In Stock
Quadro 6000 48GB    48GB    $0.32    In Stock
L4                  24GB    $0.23    In Stock
RTX 5000 Ada                $0.55    In Stock
RTX 5090            32GB    $0.46    In Stock
RTX A5000           24GB    $0.21    In Stock
RTX 4080            16GB    $0.14    In Stock

Features

API
Docker
Jupyter
Kubernetes
Multi-GPU
Persistent Storage
Reserved Instances
SOC 2 Compliant
Spot Instances

Billing & Payment

Billing Granularity

Per-Hour

Payment Methods

Credit card, crypto
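Per-hour granularity usually means partial hours bill as whole hours. A quick sketch of what that implies for short jobs (the round-up behavior is an assumption; verify it against an actual invoice):

```python
import math

def estimate_cost(runtime_minutes: float, hourly_rate: float) -> float:
    """Estimate spend under per-hour billing.

    Assumes partial hours round up to a full billed hour; confirm this
    against a real TensorDock invoice before relying on it.
    """
    billed_hours = math.ceil(runtime_minutes / 60)
    return billed_hours * hourly_rate

# A 75-minute job on an RTX 4090 at the listed $0.25/hr bills as 2 hours:
print(estimate_cost(75, 0.25))  # 0.5
```

The practical takeaway: batch several short experiments into one session rather than spinning up a fresh instance per job.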

TensorDock sits in an interesting corner of the GPU cloud market: it’s among the most competitively priced options you’ll find, which makes it worth knowing about even if the experience isn’t exactly polished.

The platform operates more like a marketplace than a traditional managed cloud. You’re essentially getting access to GPU capacity sourced from distributed hosts, which is a big part of why the pricing can undercut the major players so significantly. If your primary concern is cost and you’re comfortable in an environment that offers less hand-holding, TensorDock deserves a look.

Why TensorDock stands out

The pricing competitiveness is the headline here — TensorDock consistently ranks among the cheapest GPU options available. For researchers, students, or budget-conscious teams running experiments that don’t need enterprise-grade SLAs, that matters a lot. When you’re burning through dozens of training runs, the difference between TensorDock’s rates and a premium provider can add up fast.
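To make that concrete, here is a back-of-the-envelope comparison. The $0.25/hr RTX 4090 rate comes from the table above; the $1.10/hr "premium provider" rate is purely an illustrative assumption, not a quoted price:

```python
def training_bill(runs: int, hours_per_run: float, hourly_rate: float) -> float:
    """Total cost of a batch of training runs at a flat hourly rate."""
    return runs * hours_per_run * hourly_rate

# 50 experimental runs of 8 hours each:
tensordock = training_bill(50, 8, 0.25)  # RTX 4090 rate from the table above
premium = training_bill(50, 8, 1.10)     # hypothetical premium-cloud rate
print(f"TensorDock: ${tensordock:.2f}, "
      f"premium: ${premium:.2f}, "
      f"saved: ${premium - tensordock:.2f}")
```

At these assumed rates the same 400 GPU-hours differ by hundreds of dollars, which is exactly the kind of gap that matters for experiment-heavy work.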

Pros

  • Highly competitive pricing — among the most affordable GPU options in the market
  • Low barrier to entry — no long-term commitment required
  • Good for cost-sensitive workloads — experimental runs, batch jobs, and one-off tasks where uptime guarantees aren’t critical

Cons

  • Limited ease of use: the platform requires more technical comfort than managed alternatives like Lambda Labs or CoreWeave
  • Do-it-yourself operations: Jupyter, Docker, and Kubernetes appear in the feature list above, but you assemble and maintain the stack yourself rather than clicking through a managed workflow
  • Marketplace variability: capacity is sourced from distributed hosts, so hardware consistency and availability can vary between instances
  • Limited support infrastructure: expect less hand-holding than the major clouds provide
  • Per-hour billing only: coarser than the per-minute or per-second metering some competitors offer, so short jobs can cost more than their runtime suggests

Getting started

  1. Visit TensorDock’s website and create an account
  2. Browse available GPU instances and compare configurations for your workload
  3. Provision a VM and SSH in; bring your own setup scripts, since most configuration is up to you
  4. Configure your environment manually (Python, CUDA, PyTorch/TensorFlow, etc.)
  5. Monitor your spend, keeping the per-hour billing granularity in mind
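Step 4 is the part that trips people up on an unmanaged VM. A minimal post-setup sanity check might look like this; the tool names are typical defaults for a CUDA + Python setup, not TensorDock-specific requirements:

```python
import shutil

def check_toolchain(tools=("python3", "nvidia-smi", "nvcc")) -> dict:
    """Map each expected tool to its resolved PATH entry, or None if missing.

    The default tool list reflects a typical CUDA + Python environment;
    adjust it to whatever your own setup scripts install.
    """
    return {tool: shutil.which(tool) for tool in tools}

if __name__ == "__main__":
    for tool, path in check_toolchain().items():
        print(f"{tool:12s} {path or 'MISSING'}")
```

Running this right after provisioning catches a half-finished driver install before you waste billed hours on a job that cannot see the GPU.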

TensorDock’s marketplace model brings rough edges: occasional instability and a UX that prioritizes cost over convenience. That’s not necessarily a dealbreaker; it’s a tradeoff worth making for the right use case.

Best for: Cost-conscious developers and researchers who are comfortable with manual server setup and don’t need managed tooling, persistent storage, or enterprise compliance.
