Step 1 · Pick Your Tool
START HERE: Choose Your Engine
The best ways to run AI locally on Linux.
LINUX
7 MIN READ
Run Ollama on Linux: The Definitive Guide
Deploy Ollama as a background systemd service on Ubuntu/Debian. Full setup for NVIDIA CUDA and AMD ROCm.
Read Guide
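For a quick preview of what that guide covers: Ollama's official install script sets up the systemd service for you, after which it's standard `systemctl` management. A minimal sketch (the install URL is Ollama's published one-liner; run the full guide for the CUDA/ROCm specifics):

```shell
# Official install script; on systemd distros it creates and starts
# an "ollama" service automatically
curl -fsSL https://ollama.com/install.sh | sh

# Make sure the service starts on boot and is running now
sudo systemctl enable --now ollama

# Check that the background service is up
systemctl status ollama
```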
LINUX
6 MIN READ
Setup LM Studio on Linux (Ubuntu/Debian)
Install the LM Studio AppImage on Linux to get a beautiful graphical interface for your local AI models.
Read Guide
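The short version of the AppImage workflow: download, mark executable, run. A sketch, assuming a filename like the one LM Studio's site serves (the exact version string will differ):

```shell
# Filename is illustrative; use whatever the LM Studio download page gives you
chmod +x LM-Studio-*.AppImage
./LM-Studio-*.AppImage

# If your distro lacks FUSE, AppImages can be unpacked and run directly instead
./LM-Studio-*.AppImage --appimage-extract
```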
LINUX
15 MIN READ
Local Llama 3 on Linux
Deploy Meta's Llama 3 model locally on Linux using llama.cpp with full CUDA support. This guide covers compilation, quantization, and running the model from the command line.
Read Guide
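The compile-and-run loop from that guide boils down to a CMake build with the CUDA backend enabled, then pointing the CLI at a quantized GGUF file. A sketch, assuming a recent llama.cpp checkout (the model filename is a placeholder; `-ngl 99` offloads all layers to the GPU):

```shell
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# Configure with the CUDA backend and build in Release mode
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release -j

# Run a quantized Llama 3 GGUF from the command line (path is illustrative)
./build/bin/llama-cli -m ./models/llama-3-8b.Q4_K_M.gguf -ngl 99 -p "Hello"
```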
LINUX
12 MIN READ
High-Throughput Serving with vLLM on Ubuntu
For enterprise-grade performance, deploy vLLM on Linux and serve models with PagedAttention for maximum token throughput.
Read Guide
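As a taste of the vLLM guide: a pip install plus one command gives you an OpenAI-compatible HTTP server. A sketch, assuming a CUDA-capable GPU and a Hugging Face model ID (the model name and memory fraction are illustrative):

```shell
pip install vllm

# Launch an OpenAI-compatible server on port 8000;
# --gpu-memory-utilization controls how much VRAM PagedAttention may reserve
vllm serve meta-llama/Meta-Llama-3-8B-Instruct --gpu-memory-utilization 0.9
```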