Meta
CodeLlama-13b-Instruct-hf specs, VRAM requirements, and which GPUs can run it.
CodeLlama-7b-Instruct-hf specs, VRAM requirements, and which GPUs can run it.
Llama 3.1 70B specs, VRAM requirements, and which GPUs can run it. The sweet spot for local reasoning.
Llama 3.1 8B specs, VRAM requirements, and which GPUs can run it. The go-to small model for local inference.
Llama-2-7b-hf specs, VRAM requirements, and which GPUs can run it.
Llama-3.1-405B-Instruct specs, VRAM requirements, and which GPUs can run it.
Llama-3.1-405B-Instruct-FP8 specs, VRAM requirements, and which GPUs can run it.
Llama-3.1-70B-Instruct specs, VRAM requirements, and which GPUs can run it.
Llama-3.2-1B specs, VRAM requirements, and which GPUs can run it.
Llama-3.2-3B specs, VRAM requirements, and which GPUs can run it.
Llama-Guard-3-8B specs, VRAM requirements, and which GPUs can run it.
Llama-Guard-3-8B-INT8 specs, VRAM requirements, and which GPUs can run it.
LlamaGuard-7b specs, VRAM requirements, and which GPUs can run it.
Meta-Llama-3-70B-Instruct specs, VRAM requirements, and which GPUs can run it.
Meta-Llama-3-8B specs, VRAM requirements, and which GPUs can run it.
Meta-Llama-3-8B-Instruct specs, VRAM requirements, and which GPUs can run it.
Meta-Llama-Guard-2-8B specs, VRAM requirements, and which GPUs can run it.
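A common back-of-envelope estimate for the VRAM figures behind these pages: weight memory is parameter count times bytes per parameter, plus some headroom for activations, KV cache, and framework buffers. The sketch below is an assumption-laden heuristic, not the site's actual calculator; the 20% overhead factor is a placeholder that varies with runtime, batch size, and context length.

```python
def estimate_vram_gb(params_billion: float, bits_per_param: int,
                     overhead: float = 1.2) -> float:
    """Rough VRAM (GB) to serve a model's weights.

    params_billion: parameter count in billions (e.g. 8 for Llama 3.1 8B).
    bits_per_param: precision of the weights (16 for FP16, 8 for INT8, 4 for
                    4-bit quantization).
    overhead:       hypothetical multiplier for activations, KV cache, and
                    framework buffers; real overhead depends on the runtime.
    """
    bytes_per_param = bits_per_param / 8
    weights_gb = params_billion * bytes_per_param  # 1B params at 8 bits = 1 GB
    return weights_gb * overhead

# Llama 3.1 8B at FP16: 8 * 2 bytes * 1.2 = 19.2 GB
print(round(estimate_vram_gb(8, 16), 1))
# Llama 3.1 70B quantized to 4-bit: 70 * 0.5 bytes * 1.2 = 42.0 GB
print(round(estimate_vram_gb(70, 4), 1))
```

By this rough rule, an 8B model in FP16 fits on a single 24 GB card, while a 70B model needs either multi-GPU setups at FP16 or aggressive quantization to run on one high-end consumer GPU.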