<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>nodepedia</title><link>https://nodepedia.com/tags/inference/</link><description>Compare GPU cloud pricing across providers. Daily-updated spot and on-demand prices for H100, A100, RTX 4090, and more. Free tools and guides.</description><language>en-us</language><lastBuildDate>Sun, 19 Apr 2026 08:36:51 +0000</lastBuildDate><atom:link href="https://nodepedia.com/tags/inference/index.xml" rel="self" type="application/rss+xml"/><item><title>RunPod</title><link>https://nodepedia.com/providers/runpod-io/</link><pubDate>Sun, 19 Apr 2026 05:26:21 +0000</pubDate><guid>https://nodepedia.com/providers/runpod-io/</guid><description>GPU cloud for AI inference and training with serverless endpoints.</description></item><item><title>AMD Instinct MI210</title><link>https://nodepedia.com/gpu/amd-mi210/</link><pubDate>Sun, 19 Apr 2026 05:26:21 +0000</pubDate><guid>https://nodepedia.com/gpu/amd-mi210/</guid><description>Where to rent AMD Instinct MI210 GPUs. Compare pricing across cloud providers.</description></item><item><title>AMD Instinct MI250X</title><link>https://nodepedia.com/gpu/amd-instinct-mi250-300/</link><pubDate>Sun, 19 Apr 2026 05:26:21 +0000</pubDate><guid>https://nodepedia.com/gpu/amd-instinct-mi250-300/</guid><description>Where to rent AMD Instinct MI250X GPUs. Compare pricing across cloud providers.</description></item><item><title>AMD Radeon AI PRO R9700</title><link>https://nodepedia.com/gpu/amd-radeon-ai-pro-r9700/</link><pubDate>Sun, 19 Apr 2026 05:26:21 +0000</pubDate><guid>https://nodepedia.com/gpu/amd-radeon-ai-pro-r9700/</guid><description>Where to rent AMD Radeon AI PRO R9700 GPUs. 
Compare pricing across cloud providers.</description></item><item><title>AMD Radeon Pro W6800</title><link>https://nodepedia.com/gpu/amd-radeon-pro-w6800/</link><pubDate>Sun, 19 Apr 2026 05:26:21 +0000</pubDate><guid>https://nodepedia.com/gpu/amd-radeon-pro-w6800/</guid><description>Where to rent AMD Radeon Pro W6800 GPUs. Compare pricing across cloud providers.</description></item><item><title>AMD Radeon Pro W7900</title><link>https://nodepedia.com/gpu/amd-radeon-pro-w7900/</link><pubDate>Sun, 19 Apr 2026 05:26:21 +0000</pubDate><guid>https://nodepedia.com/gpu/amd-radeon-pro-w7900/</guid><description>Where to rent AMD Radeon Pro W7900 GPUs. Compare pricing across cloud providers.</description></item><item><title>AMD Radeon RX 6800</title><link>https://nodepedia.com/gpu/amd-radeon-rx-6800/</link><pubDate>Sun, 19 Apr 2026 05:26:21 +0000</pubDate><guid>https://nodepedia.com/gpu/amd-radeon-rx-6800/</guid><description>Where to rent AMD Radeon RX 6800 GPUs. Compare pricing across cloud providers.</description></item><item><title>AMD Radeon RX 7900 XTX</title><link>https://nodepedia.com/gpu/amd-radeon-rx-7900-xtx/</link><pubDate>Sun, 19 Apr 2026 05:26:21 +0000</pubDate><guid>https://nodepedia.com/gpu/amd-radeon-rx-7900-xtx/</guid><description>Where to rent AMD Radeon RX 7900 XTX GPUs. 
Compare pricing across cloud providers.</description></item><item><title>Apple Mac Mini M4 (16GB)</title><link>https://nodepedia.com/gpu/apple-m4-16gb/</link><pubDate>Sun, 19 Apr 2026 05:26:21 +0000</pubDate><guid>https://nodepedia.com/gpu/apple-m4-16gb/</guid><description>Mac Mini M4 with 16GB unified memory — the most affordable entry point for local AI inference on Apple Silicon.</description></item><item><title>Apple Mac Mini M4 (24GB)</title><link>https://nodepedia.com/gpu/apple-m4-24gb/</link><pubDate>Sun, 19 Apr 2026 05:26:21 +0000</pubDate><guid>https://nodepedia.com/gpu/apple-m4-24gb/</guid><description>Mac Mini M4 with 24GB unified memory — run 14B parameter models locally at Q4 quantization.</description></item><item><title>Apple Mac Mini M4 (32GB)</title><link>https://nodepedia.com/gpu/apple-m4-32gb/</link><pubDate>Sun, 19 Apr 2026 05:26:21 +0000</pubDate><guid>https://nodepedia.com/gpu/apple-m4-32gb/</guid><description>Mac Mini M4 with 32GB unified memory — the sweet spot for running 20B+ parameter models on the base M4 chip.</description></item><item><title>Apple Mac Mini M4 Pro (24GB)</title><link>https://nodepedia.com/gpu/apple-m4-pro-24gb/</link><pubDate>Sun, 19 Apr 2026 05:26:21 +0000</pubDate><guid>https://nodepedia.com/gpu/apple-m4-pro-24gb/</guid><description>Mac Mini M4 Pro with 24GB unified memory — 16-core GPU with 273 GB/s bandwidth for faster local inference.</description></item><item><title>Apple Mac Mini M4 Pro (48GB)</title><link>https://nodepedia.com/gpu/apple-m4-pro-48gb/</link><pubDate>Sun, 19 Apr 2026 05:26:21 +0000</pubDate><guid>https://nodepedia.com/gpu/apple-m4-pro-48gb/</guid><description>Mac Mini M4 Pro with 48GB unified memory — a compact local inference powerhouse. 
Run Llama 3.1 70B Q4 locally.</description></item><item><title>Apple Mac Mini M4 Pro (64GB)</title><link>https://nodepedia.com/gpu/apple-m4-pro-64gb/</link><pubDate>Sun, 19 Apr 2026 05:26:21 +0000</pubDate><guid>https://nodepedia.com/gpu/apple-m4-pro-64gb/</guid><description>Mac Mini M4 Pro with 64GB unified memory — run 45B+ parameter models locally with 273 GB/s bandwidth.</description></item><item><title>Apple Mac Studio M3 Ultra (256GB)</title><link>https://nodepedia.com/gpu/apple-m3-ultra-256gb/</link><pubDate>Sun, 19 Apr 2026 05:26:21 +0000</pubDate><guid>https://nodepedia.com/gpu/apple-m3-ultra-256gb/</guid><description>Mac Studio M3 Ultra with 256GB unified memory — the highest-capacity Apple Silicon machine for running 180B+ parameter models locally.</description></item><item><title>Apple Mac Studio M3 Ultra (96GB)</title><link>https://nodepedia.com/gpu/apple-m3-ultra-96gb/</link><pubDate>Sun, 19 Apr 2026 05:26:21 +0000</pubDate><guid>https://nodepedia.com/gpu/apple-m3-ultra-96gb/</guid><description>Mac Studio M3 Ultra with 96GB unified memory — 60-core GPU with 819 GB/s bandwidth for high-throughput local inference.</description></item><item><title>Apple Mac Studio M4 Max (128GB)</title><link>https://nodepedia.com/gpu/apple-m4-max-studio-128gb/</link><pubDate>Sun, 19 Apr 2026 05:26:21 +0000</pubDate><guid>https://nodepedia.com/gpu/apple-m4-max-studio-128gb/</guid><description>Mac Studio M4 Max with 128GB unified memory and 40-core GPU — run 90B+ parameter models at 546 GB/s bandwidth.</description></item><item><title>Apple Mac Studio M4 Max (36GB)</title><link>https://nodepedia.com/gpu/apple-m4-max-36gb/</link><pubDate>Sun, 19 Apr 2026 05:26:21 +0000</pubDate><guid>https://nodepedia.com/gpu/apple-m4-max-36gb/</guid><description>Mac Studio M4 Max with 36GB unified memory — 30-core GPU with 410 GB/s bandwidth for high-speed local inference.</description></item><item><title>Apple Mac Studio M4 Max (48GB)</title><link>https://nodepedia.com/gpu/apple-m4-max-48gb/</link><pubDate>Sun, 19 Apr 2026 05:26:21 +0000</pubDate><guid>https://nodepedia.com/gpu/apple-m4-max-48gb/</guid><description>Mac Studio M4 Max with 48GB unified memory — run 33B parameter models at high speed with 410 GB/s bandwidth.</description></item><item><title>Apple Mac Studio M4 Max (64GB)</title><link>https://nodepedia.com/gpu/apple-m4-max-64gb/</link><pubDate>Sun, 19 Apr 2026 05:26:21 +0000</pubDate><guid>https://nodepedia.com/gpu/apple-m4-max-64gb/</guid><description>Mac Studio M4 Max with 64GB unified memory — run 45B+ parameter models locally with 410 GB/s bandwidth.</description></item><item><title>Apple MacBook Pro M4 Max (128GB)</title><link>https://nodepedia.com/gpu/apple-m4-max-128gb/</link><pubDate>Sun, 19 Apr 2026 05:26:21 +0000</pubDate><guid>https://nodepedia.com/gpu/apple-m4-max-128gb/</guid><description>MacBook Pro M4 Max with 128GB unified memory — run 70B+ models at 8-bit precision on a laptop.</description></item><item><title>NVIDIA A100 40GB</title><link>https://nodepedia.com/gpu/nvidia-a100-40gb/</link><pubDate>Sun, 19 Apr 2026 05:26:21 +0000</pubDate><guid>https://nodepedia.com/gpu/nvidia-a100-40gb/</guid><description>Where to rent NVIDIA A100 40GB GPUs. Compare pricing across cloud providers.</description></item><item><title>NVIDIA A100 80GB</title><link>https://nodepedia.com/gpu/nvidia-a100-80gb/</link><pubDate>Sun, 19 Apr 2026 05:26:21 +0000</pubDate><guid>https://nodepedia.com/gpu/nvidia-a100-80gb/</guid><description>Where to rent NVIDIA A100 80GB GPUs. Compare pricing across cloud providers.</description></item><item><title>NVIDIA A100 PCIe 80GB</title><link>https://nodepedia.com/gpu/nvidia-a100-pcie/</link><pubDate>Sun, 19 Apr 2026 05:26:21 +0000</pubDate><guid>https://nodepedia.com/gpu/nvidia-a100-pcie/</guid><description>Where to rent NVIDIA A100 PCIe 80GB GPUs. 
Compare pricing across cloud providers.</description></item><item><title>NVIDIA A30</title><link>https://nodepedia.com/gpu/nvidia-a30/</link><pubDate>Sun, 19 Apr 2026 05:26:21 +0000</pubDate><guid>https://nodepedia.com/gpu/nvidia-a30/</guid><description>Where to rent NVIDIA A30 GPUs. Compare pricing across cloud providers.</description></item><item><title>NVIDIA A40</title><link>https://nodepedia.com/gpu/nvidia-a40/</link><pubDate>Sun, 19 Apr 2026 05:26:21 +0000</pubDate><guid>https://nodepedia.com/gpu/nvidia-a40/</guid><description>Where to rent NVIDIA A40 GPUs. Compare pricing across cloud providers.</description></item><item><title>NVIDIA A800 PCIe 80GB</title><link>https://nodepedia.com/gpu/nvidia-a800-pcie/</link><pubDate>Sun, 19 Apr 2026 05:26:21 +0000</pubDate><guid>https://nodepedia.com/gpu/nvidia-a800-pcie/</guid><description>Where to rent NVIDIA A800 PCIe 80GB GPUs. Compare pricing across cloud providers.</description></item><item><title>NVIDIA Blackwell B200</title><link>https://nodepedia.com/gpu/nvidia-b200/</link><pubDate>Sun, 19 Apr 2026 05:26:21 +0000</pubDate><guid>https://nodepedia.com/gpu/nvidia-b200/</guid><description>Where to rent NVIDIA Blackwell B200 GPUs. Compare pricing across cloud providers.</description></item><item><title>NVIDIA CMP 3045 HX</title><link>https://nodepedia.com/gpu/nvidia-cmp-hx/</link><pubDate>Sun, 19 Apr 2026 05:26:21 +0000</pubDate><guid>https://nodepedia.com/gpu/nvidia-cmp-hx/</guid><description>Where to rent NVIDIA CMP 3045 HX GPUs. Compare pricing across cloud providers.</description></item><item><title>NVIDIA CMP 70HX</title><link>https://nodepedia.com/gpu/nvidia-cmp-70hx/</link><pubDate>Sun, 19 Apr 2026 05:26:21 +0000</pubDate><guid>https://nodepedia.com/gpu/nvidia-cmp-70hx/</guid><description>Where to rent NVIDIA CMP 70HX GPUs. Compare pricing across cloud providers.</description></item></channel></rss>