Intermediate
8 min read
by Alex Rivera • May 14, 2024
Don't want to deal with the terminal? LM Studio provides a native Linux AppImage that gives you a beautiful GUI for downloading models, tweaking GPU layers, and chatting with LLMs.
Introduction
LM Studio wraps the underlying llama.cpp engine in a sleek Electron-based user interface. It makes finding, downloading, and configuring GGUF models on Linux incredibly simple.
Prerequisites
LM Studio for Linux is distributed as an AppImage. For it to run correctly, you must have the FUSE (Filesystem in Userspace) library installed.
On Ubuntu 22.04 and later, run:
Terminal
sudo apt update
sudo apt install libfuse2
Ensure your NVIDIA drivers are also installed and working (nvidia-smi).
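Before downloading anything, you can verify both prerequisites in one go. This is a quick preflight sketch (the library name `libfuse.so.2` corresponds to the `libfuse2` package above):

```shell
# Preflight check: confirm libfuse2 and the NVIDIA driver are present
fuse_ok="missing"; gpu_ok="missing"
ldconfig -p 2>/dev/null | grep -q 'libfuse\.so\.2' && fuse_ok="installed"
command -v nvidia-smi >/dev/null 2>&1 && gpu_ok="installed"
echo "libfuse2: $fuse_ok"
echo "nvidia driver: $gpu_ok"
```

If either line reports `missing`, install the corresponding package before continuing.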
Step 1 Installation
- Open your browser and navigate to lmstudio.ai.
- Click Download for Linux to get the .AppImage file.
- Open your terminal and navigate to your Downloads folder:
Terminal
cd ~/Downloads
- Make the AppImage executable:
Terminal
chmod +x LM_Studio-*.AppImage
- Run the application:
Terminal
./LM_Studio-*.AppImage
(Tip: You can use a tool like AppImageLauncher to automatically integrate it into your desktop application menu).
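The terminal portion of Step 1 can be sketched as a single snippet. The filename pattern and the Downloads location are assumptions; adjust them for your system:

```shell
# Locate the downloaded AppImage, mark it executable, and launch it
dir="$HOME/Downloads"
appimage=$(ls "$dir"/LM_Studio-*.AppImage 2>/dev/null | head -n 1)
if [ -n "$appimage" ]; then
    chmod +x "$appimage"   # make the AppImage executable
    echo "Launching $appimage"
    "$appimage" &          # run LM Studio in the background
else
    echo "No LM Studio AppImage found in $dir"
fi
```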
Step 2 Enabling GPU Acceleration
To get maximum speed, ensure LM Studio detects your NVIDIA GPU.
- Open LM Studio.
- Go to the Settings tab (gear icon in the left sidebar).
- Scroll down to Hardware Settings.
- Check the box for GPU Offload and drag the slider to its maximum (99 layers).
Step 3 Downloading Models
- Click the Magnifying Glass (Search) icon in the left sidebar.
- Search for models like Mistral 7B or Llama 3 8B.
- LM Studio will display multiple quantization options. Choose Q4_K_M or Q5_K_M for the best balance of speed and intelligence.
- Click Download.
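Not sure whether a model will fit on your disk or GPU? A quantized GGUF file's size is roughly the parameter count times the average bits per weight. The bits-per-weight figures below are rough approximations, not values published by LM Studio:

```python
def gguf_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate GGUF file size in GB: parameters x average bits per weight / 8."""
    return params_billion * bits_per_weight / 8

# Approximate average bits per weight for common quantizations (assumed values)
Q4_K_M, Q5_K_M = 4.8, 5.5

print(f"Mistral 7B @ Q4_K_M: ~{gguf_size_gb(7, Q4_K_M):.1f} GB")
print(f"Llama 3 8B @ Q5_K_M: ~{gguf_size_gb(8, Q5_K_M):.1f} GB")
```

This back-of-the-envelope figure should land within a few hundred MB of the download size LM Studio shows in its search results.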
Step 4 Local API Server
LM Studio can host a local API server that mimics OpenAI's API format.
- Click the Local Server icon (<->) in the left sidebar.
- Select your downloaded model from the dropdown.
- Click Start Server.
Your Linux machine is now hosting an API at http://localhost:1234/v1. You can point VS Code extensions (like Continue.dev) or custom Python scripts directly to this endpoint.
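For example, a minimal Python client for that endpoint might look like the sketch below. No API key is needed, and the `model` field is a placeholder, since LM Studio serves whichever model you loaded:

```python
import json
import urllib.request

BASE_URL = "http://localhost:1234/v1"  # LM Studio's default local server address

def build_payload(prompt: str, temperature: float = 0.7) -> dict:
    """Build an OpenAI-style chat completion request body."""
    return {
        "model": "local-model",  # placeholder; LM Studio uses the loaded model
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

def chat(prompt: str) -> str:
    """Send one user message to the local server and return the model's reply."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(chat("Summarize FUSE in one sentence."))
```

Because the request shape matches OpenAI's chat completions format, the official `openai` Python package also works if you point its `base_url` at `http://localhost:1234/v1`.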