Northflank
Northflank is a cloud platform that has expanded into GPU compute, offering a range of NVIDIA hardware from entry-level T4s through to the latest H200 141GB. Currently in beta for its GPU offerings, the platform sits in an interesting position — it covers a solid spread of GPU tiers but is still building out the tooling that modern ML workflows expect.
The GPU catalog here is genuinely broad. You can start small with a T4 for inference work, step up through A100s for mid-range training, and reach all the way to the H200's 141GB for the most memory-hungry workloads. That range is a real asset if you want to stay with one provider as your projects scale in complexity.
That said, the platform is early in its GPU journey. Pricing lands in a moderate range — not the cheapest on the market, but not premium either. Whether that represents good value depends heavily on the use case.
Why Northflank stands out
The headline differentiator is hardware breadth. Offering H200s alongside T4s under one roof means Northflank can theoretically serve everything from lightweight inference to frontier model fine-tuning. For teams who want optionality without juggling multiple cloud accounts, that’s worth noting.
Pros
- Wide GPU catalog, including H200 141GB — one of the largest memory configurations available anywhere
- Multiple A100 variants (including 40GB PCIe and 80GB configurations) give you flexibility in matching hardware to budget
- T4 and L4 options for cost-conscious inference deployments
- Platform is actively developing, so the feature set is likely to grow
Cons
- Currently in beta — expect rough edges and potential instability
- No Jupyter notebooks, Docker support, API access, or Kubernetes integration at this time
- No spot or reserved instance pricing, limiting cost optimization strategies
- No persistent storage support noted
- Ease-of-use and enterprise-readiness scores are currently low — not recommended for teams needing polished onboarding or compliance features
- No payment methods publicly listed, adding friction to getting started
Getting started
- Visit Northflank's website and create an account
- Navigate to the GPU compute section of the platform
- Select your hardware tier to match the workload: T4 or L4 for inference, A100 for training, H200 for the most memory-intensive jobs
- Provision your instance and connect via the available access methods
- Monitor the platform’s changelog closely — given the beta status, new features are likely rolling out regularly
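Once an instance is up, it is worth confirming it actually exposes the GPU you selected before starting a long job. A minimal sketch, assuming the instance has the standard NVIDIA driver tooling installed; the parsing helper and `query_gpus` wrapper are hypothetical names, while the `nvidia-smi` query flags are standard options:

```python
# Sketch: verify the provisioned GPU via nvidia-smi. Assumes the NVIDIA
# driver stack is present on the instance; helper names are illustrative.
import subprocess

def parse_gpu_query(csv_output: str) -> list[tuple[str, int]]:
    """Parse `nvidia-smi --query-gpu=name,memory.total --format=csv,noheader`
    output (e.g. "NVIDIA H200, 143771 MiB") into (name, memory_MiB) pairs."""
    gpus = []
    for line in csv_output.strip().splitlines():
        name, mem = line.rsplit(",", 1)
        gpus.append((name.strip(), int(mem.strip().split()[0])))
    return gpus

def query_gpus() -> list[tuple[str, int]]:
    """Run the query on the instance itself and parse the result."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_gpu_query(out)
```

A quick `query_gpus()` at the top of a training script is a cheap guard against paying H200 rates for a job that silently landed on the wrong hardware, which matters more than usual on a beta platform.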
Best for: Developers already using Northflank as a deployment platform who want to experiment with GPU workloads without switching providers, or teams that need H200-tier hardware and are willing to trade ecosystem maturity for raw compute access.