TensorDock

Active

Affordable GPU cloud with bare-metal VMs for ML and rendering workloads

tensordock.com · Founded 2021 · Boston, MA · Verified: 2026-03-26
Ratings

  • Overall: 5
  • Ease of Use: 6
  • Pricing: 3
  • GPU Variety: 8
  • Enterprise: 3

GPU Pricing

GPU Model       VRAM    $/hr     Availability
H100 SXM5       80GB    $2.25    In Stock
A100 SXM4       80GB    $1.80    In Stock
A100 PCIe       80GB    $1.50    In Stock
V100 SXM2       16GB    $0.95    In Stock
L40             48GB    $0.95    In Stock
RTX 4090        24GB    $0.75    In Stock
RTX 3090        24GB    $0.75    In Stock
RTX 6000 Ada    48GB    $0.75    In Stock
RTX A6000       48GB    $0.75    In Stock
RTX A4000       16GB    $0.10    In Stock
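
One quick way to compare these listings is price per GB of VRAM per hour. A minimal sketch, using the hourly rates copied from the table above (this is just arithmetic on the listed prices, not an official TensorDock API):

```python
# Hourly rates from the pricing table above: model -> (VRAM in GB, $/hr)
rates = {
    "H100 SXM5":    (80, 2.25),
    "A100 SXM4":    (80, 1.80),
    "A100 PCIe":    (80, 1.50),
    "V100 SXM2":    (16, 0.95),
    "L40":          (48, 0.95),
    "RTX 4090":     (24, 0.75),
    "RTX 3090":     (24, 0.75),
    "RTX 6000 Ada": (48, 0.75),
    "RTX A6000":    (48, 0.75),
    "RTX A4000":    (16, 0.10),
}

# Sort by $/hr per GB of VRAM, cheapest first
per_gb = sorted((price / vram, model) for model, (vram, price) in rates.items())
for cost, model in per_gb[:3]:
    print(f"{model}: ${cost:.4f} per GB-hour")
```

On these numbers the RTX A4000 is by far the cheapest per GB of VRAM, which fits the budget-experimentation use case discussed below.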

Features

API
Docker
Jupyter
Kubernetes
Multi-GPU
Persistent Storage
Reserved Instances
SOC 2 Compliant
Spot Instances
Spot Instances

Billing & Payment

Billing Granularity

Per-Hour

Payment Methods

Credit card, crypto

TensorDock sits in an interesting corner of the GPU cloud market: it’s among the most competitively priced options you’ll find, which makes it worth knowing about even if the experience isn’t exactly polished.

The platform operates more like a marketplace than a traditional managed cloud. You’re essentially getting access to GPU capacity sourced from distributed hosts, which is a big part of why the pricing can undercut the major players so significantly. If your primary concern is cost and you’re comfortable in an environment with less hand-holding, TensorDock deserves a look.

Why TensorDock stands out

The pricing competitiveness is the headline here — TensorDock consistently ranks among the cheapest GPU options available. For researchers, students, or budget-conscious teams running experiments that don’t need enterprise-grade SLAs, that matters a lot. When you’re burning through dozens of training runs, the difference between TensorDock’s rates and a premium provider can add up fast.
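
To make that concrete, here is a back-of-envelope comparison. The $2.25/hr H100 rate comes from the pricing table above; the premium-provider rate is an illustrative assumption, not a quoted price:

```python
# Back-of-envelope: 50 training runs of 8 hours each on a single H100.
runs, hours_per_run = 50, 8
tensordock_rate = 2.25  # $/hr, H100 SXM5 rate from the table above
premium_rate = 4.50     # $/hr, hypothetical premium-provider rate (assumption)

td_total = runs * hours_per_run * tensordock_rate       # 400 hrs * $2.25
premium_total = runs * hours_per_run * premium_rate     # 400 hrs * $4.50
print(f"TensorDock: ${td_total:,.2f}, premium: ${premium_total:,.2f}, "
      f"savings: ${premium_total - td_total:,.2f}")
```

Even at a modest 2x rate difference, a batch of experiments like this saves hundreds of dollars.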

Pros

  • Highly competitive pricing — among the most affordable GPU options in the market
  • Low barrier to entry — no long-term commitment required
  • Good for cost-sensitive workloads — experimental runs, batch jobs, and one-off tasks where uptime guarantees aren’t critical

Cons

  • Limited ease of use — the platform requires more technical comfort than managed alternatives like Lambda Labs or CoreWeave
  • No managed features — no Jupyter notebooks, no built-in Docker management, no Kubernetes orchestration out of the box
  • Not enterprise-ready — no SOC2 compliance, no SLA guarantees, limited support infrastructure
  • Sparse GPU variety — fewer GPU model options compared to broader marketplaces
  • No API access — automation and programmatic provisioning aren’t available
  • No persistent storage — you’ll need to handle data persistence yourself
  • Coarse billing granularity — per-hour billing means short sessions round up to a full hour, so budget planning takes more care than with providers that publish per-minute rates

Getting started

  1. Visit TensorDock's website and create an account
  2. Browse available GPU instances and compare configurations for your workload
  3. Provision a VM and SSH in — there’s no managed notebook environment, so bring your own setup scripts
  4. Configure your environment manually (Python, CUDA, PyTorch/TensorFlow, etc.)
  5. Monitor your spend carefully, since per-hour billing rounds short sessions up
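
For step 5, a conservative way to estimate spend is to round each session up to a whole hour before pricing it. A minimal sketch; the rounding-up behavior is an assumption about how per-hour billing is typically applied, not something TensorDock documents:

```python
import math

def estimated_cost(session_minutes, hourly_rate):
    """Estimate spend under per-hour billing.

    Each session is rounded up to a whole hour (assumed billing behavior),
    then priced at the given hourly rate.
    """
    billed_hours = sum(math.ceil(m / 60) for m in session_minutes)
    return billed_hours, billed_hours * hourly_rate

# Three short sessions on an RTX 4090 at $0.75/hr (rate from the table above)
hours, cost = estimated_cost([20, 75, 130], 0.75)
print(f"{hours} billed hours, ${cost:.2f}")  # 20 -> 1h, 75 -> 2h, 130 -> 3h
```

Here 225 minutes of actual use bills as 6 hours ($4.50), which is the kind of overhead worth tracking when running many short jobs.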

TensorDock is in beta status, so expect rough edges, occasional instability, and a UX that prioritizes cost over convenience. That’s not necessarily a dealbreaker — it’s a tradeoff worth making for the right use case.

Best for: Cost-conscious developers and researchers who are comfortable with manual server setup and don’t need managed tooling, persistent storage, or enterprise compliance.
