
Northflank

Active

Developer platform for containerized apps and GPU compute at scale

northflank.com · Founded 2020 · London, UK · Verified: 2026-03-24
Overall: 5.5 · Ease of Use: 8 · Pricing: 1 · GPU Variety: 8 · Enterprise: 5

GPU Pricing

GPU Model           VRAM     Spot $/hr   On-demand $/hr   Availability
NVIDIA L4           24 GB    –           $0.67            In Stock
NVIDIA A100 80GB    80 GB    –           $1.42            In Stock
NVIDIA H100 80GB    80 GB    –           $3.74            In Stock
NVIDIA RTX 4000     20 GB    –           $1.42            In Stock
NVIDIA A4000        16 GB    –           $1.76            In Stock
NVIDIA A6000        48 GB    –           $1.76            In Stock
NVIDIA RTX 4080     16 GB    –           $1.42            In Stock
NVIDIA RTX 3090     24 GB    –           $1.42            In Stock
NVIDIA RTX 4090     24 GB    –           $1.70            In Stock
NVIDIA L40S         48 GB    –           $2.74            In Stock
NVIDIA A40          48 GB    –           $1.43            In Stock
NVIDIA H200         141 GB   –           $3.74            In Stock
NVIDIA A100 40GB    40 GB    –           $1.42            In Stock

Features

API
Docker
Jupyter
Kubernetes
Multi-GPU
Persistent Storage
Reserved Instances
SOC 2 Compliant
Spot Instances

Billing & Payment

Billing Granularity

Per-Second

Payment Methods

Credit card, Invoice
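Per-second granularity means you pay for exact runtime rather than rounded-up hours. A quick sketch of the difference, using the A100 80GB rate from the table and a made-up 37-minute job:

```python
import math

# Illustrative only: the $1.42/hr A100 80GB rate comes from the pricing table;
# the 37-minute job duration is an assumed example, not a benchmark.
HOURLY_RATE = 1.42          # A100 80GB on-demand, $/hr
runtime_seconds = 37 * 60   # a short fine-tuning or batch-inference job

per_second_cost = HOURLY_RATE / 3600 * runtime_seconds
hour_rounded_cost = HOURLY_RATE * math.ceil(runtime_seconds / 3600)

print(f"per-second billing: ${per_second_cost:.2f}")   # $0.88
print(f"hourly rounding:    ${hour_rounded_cost:.2f}") # $1.42
```

The gap matters most for short, bursty workloads; for multi-day training runs the two models converge.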

Northflank

Northflank is a cloud platform that has expanded into GPU compute, offering a range of NVIDIA hardware from the entry-level L4 through to the latest H200 141GB. Currently in beta on the GPU side, the platform sits in an interesting position: it already covers a solid spread of GPU tiers, but the compute offering itself is still maturing.

The GPU catalog here is genuinely broad. You can start small with an L4 for inference work, step up through A100s for mid-range training, and reach all the way to the H200 141GB for the most memory-hungry workloads. That range is a real asset if you want to stay with one provider as your projects scale in complexity.

That said, the platform is early in its GPU journey. Pricing lands in a moderate range: not the cheapest on the market, but not premium either, and with no per-GPU spot rates published there is little room to optimize. Whether that represents good value depends heavily on the use case.

Why Northflank stands out

The headline differentiator is hardware breadth. Offering H200s alongside L4s and consumer RTX cards under one roof means Northflank can in principle serve everything from lightweight inference to frontier model fine-tuning. For teams who want optionality without juggling multiple cloud accounts, that is worth noting.

Pros

  • Wide GPU catalog, including the H200 141GB, one of the largest memory configurations available anywhere
  • Two A100 variants (40GB and 80GB) give you flexibility in matching hardware to budget
  • L4 and consumer RTX options (3090, 4080, 4090) for cost-conscious inference deployments
  • Platform is actively developing, so the feature set is likely to grow

Cons

  • Currently in beta; expect rough edges and potential instability
  • Pricing is by far the weakest score at 1; on-demand rates are unlikely to undercut budget-focused GPU providers
  • No per-GPU spot rates are published, even though spot instances appear on the feature list, limiting cost-optimization strategies
  • Enterprise score of 5 is middling; teams that need a polished compliance and support story should evaluate carefully before committing

Getting started

  1. Visit Northflank's website and create an account
  2. Navigate to the GPU compute section of the platform
  3. Select your hardware tier based on workload requirements: L4 or RTX 4000 for inference, A100/H100 for training
  4. Provision your instance and connect via the available access methods
  5. Monitor the platform’s changelog closely — given the beta status, new features are likely rolling out regularly
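The tier choice in step 3 often comes down to cost per job rather than cost per hour, since faster hardware can finish sooner. A minimal sketch using the on-demand rates from the table and an assumed, workload-dependent H100-over-A100 speedup factor:

```python
# Illustrative only: rates come from the pricing table above; the job length
# and the 2.5x speedup are assumed figures, not measured benchmarks.
A100_RATE, H100_RATE = 1.42, 3.74   # on-demand $/hr from the table
a100_hours = 100.0                  # assumed job duration on an A100 80GB
speedup = 2.5                       # assumed H100 throughput advantage

a100_cost = A100_RATE * a100_hours
h100_cost = H100_RATE * (a100_hours / speedup)

print(f"A100: ${a100_cost:.2f}  H100: ${h100_cost:.2f}")
# With these numbers the H100 run finishes in 40 hours but still costs more
# ($149.60 vs $142.00); below a ~2.63x speedup (3.74 / 1.42) the A100 wins.
```

The crossover point shifts per workload, so it is worth measuring throughput on a short run before committing to a long one.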

Best for: Developers already using Northflank as a deployment platform who want to experiment with GPU workloads without switching providers, or teams that need H200-tier hardware and are willing to trade ecosystem maturity for raw compute access.
