MACOS GUIDES

Local AI for Mac.

Apple Silicon makes your Mac one of the best local AI machines available. Pick your engine below and follow a step-by-step guide.

All Mac Tutorials

MAC 9 MIN READ

The 8GB Mac Survival Guide for Local AI

Can you run AI on an 8GB M1 or M2? Yes. Here are the best models and settings to avoid swap-memory death.

Read Guide
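Before opening the full guide, a back-of-the-envelope check helps: will a given model even fit in 8GB? A minimal sketch of the usual rule of thumb (the 20% overhead figure and the macOS baseline are assumptions, not numbers from the guide):

```python
# Rough memory-footprint estimate for a quantized model on an 8GB Mac.
# Rule of thumb (an assumption): weights take params * bits / 8 bytes,
# plus ~20% overhead for the KV cache and runtime buffers.

def model_footprint_gb(params_billions: float, bits_per_weight: float,
                       overhead: float = 0.20) -> float:
    """Estimated RAM needed to run a quantized model, in GB."""
    weights_gb = params_billions * bits_per_weight / 8
    return weights_gb * (1 + overhead)

# A 7B model at 4-bit quantization needs roughly 4.2 GB -- tight but
# workable on 8GB, since macOS itself typically claims ~3GB.
print(f"7B @ 4-bit: {model_footprint_gb(7, 4):.1f} GB")  # → 4.2 GB
print(f"3B @ 4-bit: {model_footprint_gb(3, 4):.1f} GB")  # → 1.8 GB
```

By this estimate, 4-bit 7B models sit right at the edge on an 8GB machine, which is why the guide focuses on settings that keep everything out of swap.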
MAC 11 MIN READ

Replace GitHub Copilot: Ollama + Continue.dev

Stop paying $10/month. Set up Ollama and the Continue.dev extension in VS Code on your Mac for completely free, private AI autocomplete.

Read Guide
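The heart of this setup is a small Continue config that points the extension at your local Ollama server. A sketch of what that can look like in `config.json` (model names here are examples, and exact field names vary between Continue versions — the guide covers the current format):

```json
{
  "models": [
    {
      "title": "Llama 3 (local)",
      "provider": "ollama",
      "model": "llama3"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Local autocomplete",
    "provider": "ollama",
    "model": "qwen2.5-coder:1.5b"
  }
}
```

A small, fast model for autocomplete and a larger one for chat is the usual split, since completions fire on every keystroke.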
MAC 8 MIN READ

LM Studio on Mac: The Easiest Offline AI Interface

Install LM Studio on macOS to get a beautiful GUI for downloading and running GGUF models with Metal acceleration.

Read Guide
MAC 7 MIN READ

System-wide Mac AI: Connect Ollama to Raycast

Integrate your local LLMs directly into Raycast. Highlight text anywhere on your Mac and hit a hotkey to summarize or rewrite it for free.

Read Guide
MAC 14 MIN READ

Llama.cpp on Mac: The Power User's Guide

Compile and run llama.cpp from scratch on macOS. Get maximum performance, zero bloat, and total control over your Metal acceleration parameters.

Read Guide
MAC 10 MIN READ

Apple's MLX Framework: Maximum AI Speed

How to use Apple's native MLX framework to run Llama 3 and Mistral at blistering speeds on Apple Silicon.

Read Guide
MAC 12 MIN READ

The Ultimate Guide: Run Ollama on Mac M3

The definitive masterclass on installing, optimizing, and running Ollama on Apple Silicon. Understand Unified Memory, model quantization, and how to get the most out of your M3 chip.

Read Guide