<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>nodepedia</title><link>https://nodepedia.com/tags/apple-silicon/</link><description>Compare GPU cloud pricing across providers. Daily-updated spot and on-demand prices for H100, A100, RTX 4090, and more. Free tools and guides.</description><language>en-us</language><lastBuildDate>Sun, 19 Apr 2026 08:36:51 +0000</lastBuildDate><atom:link href="https://nodepedia.com/tags/apple-silicon/index.xml" rel="self" type="application/rss+xml"/><item><title>Apple Mac Mini M4 (16GB)</title><link>https://nodepedia.com/gpu/apple-m4-16gb/</link><pubDate>Sun, 19 Apr 2026 05:26:21 +0000</pubDate><guid>https://nodepedia.com/gpu/apple-m4-16gb/</guid><description>Mac Mini M4 with 16GB unified memory — the most affordable entry point for local AI inference on Apple Silicon.</description></item><item><title>Apple Mac Mini M4 (24GB)</title><link>https://nodepedia.com/gpu/apple-m4-24gb/</link><pubDate>Sun, 19 Apr 2026 05:26:21 +0000</pubDate><guid>https://nodepedia.com/gpu/apple-m4-24gb/</guid><description>Mac Mini M4 with 24GB unified memory — run 14B parameter models locally at Q4 quantization.</description></item><item><title>Apple Mac Mini M4 (32GB)</title><link>https://nodepedia.com/gpu/apple-m4-32gb/</link><pubDate>Sun, 19 Apr 2026 05:26:21 +0000</pubDate><guid>https://nodepedia.com/gpu/apple-m4-32gb/</guid><description>Mac Mini M4 with 32GB unified memory — the sweet spot for running 20B+ parameter models on the base M4 chip.</description></item><item><title>Apple Mac Mini M4 Pro (24GB)</title><link>https://nodepedia.com/gpu/apple-m4-pro-24gb/</link><pubDate>Sun, 19 Apr 2026 05:26:21 +0000</pubDate><guid>https://nodepedia.com/gpu/apple-m4-pro-24gb/</guid><description>Mac Mini M4 Pro with 24GB unified memory — 16-core GPU with 273 GB/s bandwidth for faster local inference.</description></item><item><title>Apple Mac Mini M4 Pro 
(48GB)</title><link>https://nodepedia.com/gpu/apple-m4-pro-48gb/</link><pubDate>Sun, 19 Apr 2026 05:26:21 +0000</pubDate><guid>https://nodepedia.com/gpu/apple-m4-pro-48gb/</guid><description>Mac Mini M4 Pro with 48GB unified memory — a compact local inference powerhouse. Run Llama 3.1 70B Q4 locally.</description></item><item><title>Apple Mac Mini M4 Pro (64GB)</title><link>https://nodepedia.com/gpu/apple-m4-pro-64gb/</link><pubDate>Sun, 19 Apr 2026 05:26:21 +0000</pubDate><guid>https://nodepedia.com/gpu/apple-m4-pro-64gb/</guid><description>Mac Mini M4 Pro with 64GB unified memory — run 45B+ parameter models locally with 273 GB/s bandwidth.</description></item><item><title>Apple Mac Studio M3 Ultra (256GB)</title><link>https://nodepedia.com/gpu/apple-m3-ultra-256gb/</link><pubDate>Sun, 19 Apr 2026 05:26:21 +0000</pubDate><guid>https://nodepedia.com/gpu/apple-m3-ultra-256gb/</guid><description>Mac Studio M3 Ultra with 256GB unified memory — one of the highest-capacity Apple Silicon configurations for running 180B+ parameter models locally.</description></item><item><title>Apple Mac Studio M3 Ultra (96GB)</title><link>https://nodepedia.com/gpu/apple-m3-ultra-96gb/</link><pubDate>Sun, 19 Apr 2026 05:26:21 +0000</pubDate><guid>https://nodepedia.com/gpu/apple-m3-ultra-96gb/</guid><description>Mac Studio M3 Ultra with 96GB unified memory — 60-core GPU with 819 GB/s bandwidth for high-throughput local inference.</description></item><item><title>Apple Mac Studio M4 Max (128GB)</title><link>https://nodepedia.com/gpu/apple-m4-max-studio-128gb/</link><pubDate>Sun, 19 Apr 2026 05:26:21 +0000</pubDate><guid>https://nodepedia.com/gpu/apple-m4-max-studio-128gb/</guid><description>Mac Studio M4 Max with 128GB unified memory and 40-core GPU — run 90B+ parameter models at 546 GB/s bandwidth.</description></item><item><title>Apple Mac Studio M4 Max (36GB)</title><link>https://nodepedia.com/gpu/apple-m4-max-36gb/</link><pubDate>Sun, 19 Apr 2026 05:26:21 
+0000</pubDate><guid>https://nodepedia.com/gpu/apple-m4-max-36gb/</guid><description>Mac Studio M4 Max with 36GB unified memory — 32-core GPU with 410 GB/s bandwidth for high-speed local inference.</description></item><item><title>Apple Mac Studio M4 Max (48GB)</title><link>https://nodepedia.com/gpu/apple-m4-max-48gb/</link><pubDate>Sun, 19 Apr 2026 05:26:21 +0000</pubDate><guid>https://nodepedia.com/gpu/apple-m4-max-48gb/</guid><description>Mac Studio M4 Max with 48GB unified memory — run 33B parameter models at high speed with 546 GB/s bandwidth.</description></item><item><title>Apple Mac Studio M4 Max (64GB)</title><link>https://nodepedia.com/gpu/apple-m4-max-64gb/</link><pubDate>Sun, 19 Apr 2026 05:26:21 +0000</pubDate><guid>https://nodepedia.com/gpu/apple-m4-max-64gb/</guid><description>Mac Studio M4 Max with 64GB unified memory — run 45B+ parameter models locally with 546 GB/s bandwidth.</description></item><item><title>Apple MacBook Pro M4 Max (128GB)</title><link>https://nodepedia.com/gpu/apple-m4-max-128gb/</link><pubDate>Sun, 19 Apr 2026 05:26:21 +0000</pubDate><guid>https://nodepedia.com/gpu/apple-m4-max-128gb/</guid><description>MacBook Pro M4 Max with 128GB unified memory — run 70B+ parameter models at 8-bit quantization on a laptop.</description></item></channel></rss>