Verda

Status: Active

Affordable GPU cloud for AI/ML with H100s, A100s, and flexible instances

datacrunch.io · Founded 2020 · Tallinn, Estonia · Verified: 2026-03-09

Ratings

  • Overall: 6
  • Ease of Use: 7
  • Pricing: 5
  • GPU Variety: 7
  • Enterprise: 5

GPU Pricing

GPU Model       VRAM     Spot $/hr   On-demand $/hr   Trend   Available
B300 SXM6       -        -           -                -       In Stock
GB200           -        -           -                -       In Stock
B200 SXM6       192GB    -           -                -       In Stock
H100 SXM5       80GB     -           -                -       In Stock
A100 SXM4       80GB     -           -                -       In Stock
RTX PRO 6000    -        -           -                -       In Stock

Features

API
Docker
Jupyter
Kubernetes
Multi-GPU
Persistent Storage
Reserved Instances
SOC 2 Compliant
Spot Instances

Billing & Payment

Billing Granularity

Per-hour

Payment Methods

Credit card

Verda

Verda (operating at datacrunch.io) is a GPU cloud provider founded in 2020 and based in Tallinn, Estonia. Built squarely for AI/ML practitioners, it focuses on delivering competitive access to high-end accelerators like H100s and A100s without the enterprise pricing of the big hyperscalers. If you’re running training jobs on a budget and don’t need a sprawling feature set, Verda deserves a look.

The provider sits in an interesting spot: European-based with pricing that punches well above its weight class. For researchers, indie ML engineers, and small teams who are cost-sensitive but still need serious hardware, that combination is rare.


Why Verda stands out

Verda’s main differentiator is straightforward: it’s among the most competitively priced options on the market for flagship-tier GPUs. Most affordable GPU clouds sacrifice either hardware quality (older-gen cards, no NVLink) or reliability. Verda bets on offering newer hardware at aggressive rates, which makes it particularly appealing for long training runs, where cost compounds quickly.

It also covers the practical bases — Jupyter notebooks for interactive work, Docker for portable environments, a REST API for programmatic access, and both spot and reserved instances so you can optimize spend across workload types. Persistent storage means you’re not re-uploading datasets every time you spin up an instance.
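To make the spot-versus-on-demand tradeoff concrete, here is a minimal cost sketch. The hourly rates and the interruption overhead below are illustrative assumptions, not Verda’s actual prices; check the live pricing table for real numbers.

```python
# Sketch: compare the total cost of a training run on spot vs. on-demand
# capacity. All rates here are made-up example numbers.

def run_cost(hours: float, rate_per_hr: float, interruption_overhead: float = 0.0) -> float:
    """Total job cost, inflating wall-clock hours by lost-progress overhead
    (e.g. work re-run since the last checkpoint after a spot interruption)."""
    return hours * (1.0 + interruption_overhead) * rate_per_hr

# Assumed example rates in $/hr (not real quotes).
ON_DEMAND = 2.50
SPOT = 1.20

hours = 200  # a long fine-tuning run
on_demand_cost = run_cost(hours, ON_DEMAND)
# Spot: assume 15% of compute is re-done after interruptions.
spot_cost = run_cost(hours, SPOT, interruption_overhead=0.15)

print(f"on-demand: ${on_demand_cost:.2f}")  # 200 * 2.50 = $500.00
print(f"spot:      ${spot_cost:.2f}")       # 200 * 1.15 * 1.20 = $276.00
```

Even with a generous allowance for interruption overhead, spot pricing usually wins for checkpointed training; on-demand makes more sense for interactive or hard-to-resume work.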


Pros

  • Highly competitive pricing — consistently among the cheapest options for H100 and A100 access
  • Spot and reserved instances — flexibility to trade cost for commitment depending on your workflow
  • Multi-GPU support — scale up distributed training jobs without jumping to a different platform
  • API access — automate instance management and integrate into ML pipelines
  • Jupyter + Docker — covers both interactive and containerized workflows out of the box
  • European data residency — useful for GDPR compliance or latency to EU-based teams

Cons

  • Limited GPU variety — the catalog is narrow; if you need something outside the main A100/H100 lineup, you may come up empty
  • Not enterprise-ready — limited enterprise support tiers; a tough fit for regulated industries
  • Ease of use is rough — the UI and onboarding experience lag behind more polished competitors like Lambda or CoreWeave
  • Beta status — some platform roughness should be expected; not a fit for production-critical workloads requiring SLA guarantees
  • Credit card only — no invoicing or purchase orders, which rules it out for many corporate procurement workflows

Getting started

  1. Visit Verda's site and create an account — you’ll need a credit card to activate
  2. Browse available instance types; filter by H100 or A100 depending on your workload
  3. Choose between on-demand (flexible), spot (cheapest, interruptible), or reserved (committed discount) pricing
  4. Launch with your preferred environment — pick a pre-built Jupyter or Docker image, or bring your own
  5. For recurring workloads, connect via the API to automate instance lifecycle management
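As a sketch of step 5, automating the instance lifecycle might look like the following. The base URL, endpoint paths, field names, and auth scheme here are hypothetical placeholders for illustration only; consult the provider's actual API reference before building real tooling.

```python
# Sketch of instance lifecycle automation over a REST API.
# Every endpoint and field name below is a hypothetical placeholder,
# NOT the provider's real schema.
import json

API_BASE = "https://api.example-gpu-cloud.com/v1"  # placeholder base URL

def launch_payload(gpu_model: str, gpu_count: int, image: str, spot: bool = False) -> dict:
    """Build the JSON body for a hypothetical 'create instance' request."""
    return {
        "instance_type": gpu_model,
        "gpu_count": gpu_count,
        "image": image,  # e.g. a pre-built Jupyter or Docker image name
        "pricing": "spot" if spot else "on_demand",
    }

payload = launch_payload("H100-SXM5", 4, "pytorch-jupyter", spot=True)
print(json.dumps(payload, indent=2))

# Actually sending it would look roughly like this (not executed here):
#   import requests
#   r = requests.post(f"{API_BASE}/instances",
#                     headers={"Authorization": "Bearer <token>"},
#                     json=payload)
#   instance_id = r.json()["id"]
#   ... run the training job ...
#   requests.delete(f"{API_BASE}/instances/{instance_id}",
#                   headers={"Authorization": "Bearer <token>"})
```

Wrapping launch/teardown like this makes it easy to tear instances down the moment a job finishes, which is where most accidental GPU spend comes from.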

Best for: Cost-focused ML practitioners and researchers who need flagship GPU access (H100/A100) at competitive rates and can tolerate a rougher platform experience in exchange for savings.
