Step 1 · Pick Your Tool
START HERE: Choose Your Engine
The best ways to run AI locally on Mac.
All Mac Tutorials
The 8GB Mac Survival Guide for Local AI
Can you run AI on an 8GB M1 or M2? Yes. Here are the best models and settings to avoid swap-memory death.
Replace GitHub Copilot: Ollama + Continue.dev
Stop paying $10/month. Set up Ollama and the Continue.dev extension in VS Code on your Mac for completely free, private AI autocomplete.
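Under the hood, Continue.dev just talks to Ollama's local HTTP server (port 11434 by default). As a sanity check before wiring up the extension, a quick sketch like this confirms the server is answering; the llama3 model name is only an example and assumes you have already pulled it:

```python
# Sanity-check that Ollama's local server is up before pointing
# Continue.dev at it. Assumes `ollama pull llama3` was run already.
import json
import urllib.request

payload = json.dumps({
    "model": "llama3",          # any model you have pulled locally
    "prompt": "Say hello in five words.",
    "stream": False,            # return one JSON object instead of a stream
}).encode()

req = urllib.request.Request(
    "http://localhost:11434/api/generate",  # Ollama's default endpoint
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```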
LM Studio on Mac: The Easiest Offline AI Interface
Install LM Studio on macOS to get a beautiful GUI for downloading and running GGUF models with Metal acceleration.
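LM Studio can also expose whatever model it has loaded through an OpenAI-compatible local server (default port 1234), so any OpenAI client library works against it. A minimal sketch, assuming the local server is switched on in LM Studio and the openai Python package is installed; the model identifier is a placeholder:

```python
# Talk to LM Studio's local OpenAI-compatible server.
# Assumes the server is started in LM Studio (default http://localhost:1234).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",
    api_key="lm-studio",  # any non-empty string; the local server ignores it
)

reply = client.chat.completions.create(
    model="local-model",  # placeholder; use the identifier LM Studio shows
    messages=[{"role": "user", "content": "One-line summary of GGUF?"}],
)
print(reply.choices[0].message.content)
```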
System-wide Mac AI: Connect Ollama to Raycast
Integrate your local LLMs directly into Raycast. Highlight text anywhere on your Mac and hit a hotkey to summarize or rewrite it for free.
Llama.cpp on Mac: The Power User's Guide
Compile and run llama.cpp from scratch on macOS. Get maximum performance, zero bloat, and total control over your Metal acceleration parameters.
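If you would rather script the engine than drive the CLI, the separate llama-cpp-python bindings wrap the same compiled library. A sketch under that assumption; the GGUF path is a placeholder for any model file you have downloaded:

```python
# Run a local GGUF model through the llama-cpp-python bindings,
# which wrap the compiled llama.cpp library.
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-3-8b-instruct.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=-1,  # offload every layer to Metal on Apple Silicon
    n_ctx=4096,       # context window size
)

out = llm("Q: What is unified memory? A:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"])
```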
Apple's MLX Framework: Maximum AI Speed
How to use Apple's MLX framework to run Llama 3 and Mistral at blistering speeds natively on Apple Silicon.
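The quickest way to try MLX from Python is the companion mlx-lm package. A minimal sketch, assuming pip install mlx-lm; the repo name is just one of the pre-quantized models published under mlx-community on Hugging Face:

```python
# Generate text with a 4-bit Llama 3 model through Apple's MLX stack.
# Requires `pip install mlx-lm`; downloads the model on first run.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Meta-Llama-3-8B-Instruct-4bit")
text = generate(
    model,
    tokenizer,
    prompt="Explain Apple unified memory in one sentence.",
    max_tokens=100,
)
print(text)
```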
The Ultimate Guide: Run Ollama on Mac M3
The definitive masterclass on installing, optimizing, and running Ollama on Apple Silicon. Understand Unified Memory, model quantization, and how to get the most from your M3 chip.
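Beyond the command line, the official ollama Python package drives the same local daemon. A short sketch, assuming pip install ollama and a model already pulled with ollama pull llama3:

```python
# Chat with a locally pulled model via the official ollama package.
# Assumes the Ollama daemon is running and `ollama pull llama3` is done.
import ollama

resp = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Why is an M3 good for local LLMs?"}],
)
print(resp["message"]["content"])
```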