Local GPU Finder
Find which GPU you need to run any LLM locally. Compare VRAM requirements, prices, and performance.
Select a model and quantization level to see which GPUs can run it. Results include both cloud rental prices and retail purchase options.
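The VRAM figures behind results like these can be approximated from a model's parameter count and quantization bit-width. A minimal sketch of that estimate, assuming a simple weights-plus-overhead formula (the 20% overhead factor for KV cache and activations is an assumption, not the tool's exact method):

```python
def estimate_vram_gb(params_billions: float, bits: int, overhead: float = 1.2) -> float:
    """Rough VRAM estimate in GB.

    Weights occupy params * (bits / 8) bytes; the overhead factor
    (assumed ~1.2x) accounts for KV cache and activations.
    """
    weight_gb = params_billions * 1e9 * bits / 8 / 1e9
    return round(weight_gb * overhead, 1)

# Example: a 7B model at 4-bit quantization needs roughly 4.2 GB,
# so it fits on an 8 GB consumer GPU.
print(estimate_vram_gb(7, 4))
```

This is why quantization level matters so much in the results: the same 7B model at 16-bit would need roughly four times the VRAM of its 4-bit variant.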