AI Setup On Linux
Follow our guide to install AI models on Linux efficiently
This tutorial walks you through everything needed to install and run the Stable Diffusion image-generation model on Ubuntu/Debian. You'll prepare your system, set up CUDA, create an isolated Python environment, install the dependencies, download model weights, launch the web UI, and troubleshoot common issues, with commands for every step.
1. System Prerequisites & GPU Setup
- Ubuntu 20.04+ or Debian 11+
- NVIDIA GPU with CUDA compute capability 6.0 or higher (a quick way to check is shown after this list)
- 8+ GB RAM (16 GB recommended)
- Internet access to download packages and model weights
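Before continuing, you can quickly confirm the hardware and OS requirements from a terminal. lsb_release, free, and lspci are standard tools on Ubuntu/Debian; the compute_cap query needs a reasonably recent NVIDIA driver, so skip that line if the driver is not installed yet:
lsb_release -d                                          # OS release
free -h                                                 # installed RAM
lspci | grep -i nvidia                                  # is an NVIDIA GPU present?
nvidia-smi --query-gpu=name,compute_cap --format=csv    # compute capability (needs driver installed)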
2. Install NVIDIA Driver, CUDA & cuDNN
Install the proprietary driver from the standard Ubuntu repositories, then add NVIDIA's CUDA repository for the toolkit and cuDNN. The repository path below assumes Ubuntu 22.04 (ubuntu2204); adjust it for your release:
sudo apt update
sudo apt install -y nvidia-driver-515
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo apt update
sudo apt install -y cuda-toolkit-11-7 libcudnn8 libcudnn8-dev
After installation, reboot and verify:
nvidia-smi
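nvidia-smi reports the driver version and the highest CUDA version it supports. The toolkit itself installs under /usr/local/cuda-11.7 and is not on your PATH yet (see the troubleshooting section), so call the compiler by its full path to confirm it is present:
/usr/local/cuda-11.7/bin/nvcc --version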
3. Update & Install Base Packages
sudo apt update && sudo apt upgrade -y
sudo apt install -y git wget unzip python3.10 python3.10-venv python3.10-dev build-essential cmake
4. Clone Web UI & Create Virtual Env
Clone the repository first (git refuses to clone into a non-empty directory), then create the virtual environment inside it:
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git ~/stable-diffusion
cd ~/stable-diffusion
python3.10 -m venv .venv
source .venv/bin/activate
pip install --upgrade pip setuptools wheel
5. Install Python Dependencies
pip install -r requirements.txt
This installs PyTorch (with CUDA support), transformers, gradio, and the other libraries the web UI needs.
6. Download & Place Model Weights
mkdir -p models/Stable-diffusion
cd models/Stable-diffusion
wget -O sd-v1-4.ckpt https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt
cd ~/stable-diffusion
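The Hugging Face download may require a logged-in account that has accepted the model license; if wget saves a small HTML file instead of the checkpoint (roughly 4 GB), download it in a browser and copy it into models/Stable-diffusion/. A quick size check:
ls -lh models/Stable-diffusion/sd-v1-4.ckpt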
7. Launch Web UI with GPU
source .venv/bin/activate
python webui.py --precision full --no-half --enable-insecure-extension-access
Access the UI at http://localhost:7860. The --enable-insecure-extension-access flag allows installing third-party extensions from the Extensions tab.
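To confirm the server is reachable without opening a browser, run this from another terminal (wget was installed in step 3):
wget -q -O /dev/null http://localhost:7860 && echo "Web UI is up"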
8. Optional: Install Extensions & Models
# In Web UI > Extensions tab, install via Git URL
# Or manually:
cd extensions
git clone https://github.com/someuser/sd-webui-extension.git
Extensions go in the extensions/ folder; model checkpoints go in models/Stable-diffusion/. After adding an extension, restart the web UI so it is picked up.
9. Troubleshooting
- If PyTorch doesn't detect the GPU, make sure the CUDA paths are set in ~/.bashrc:
export PATH=/usr/local/cuda-11.7/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-11.7/lib64:$LD_LIBRARY_PATH
- If webui.py fails on import, reinstall the dependencies:
pip install --upgrade -r requirements.txt
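After editing ~/.bashrc, reload it and check that PyTorch now sees the GPU (run inside the activated virtual environment):
source ~/.bashrc
python -c "import torch; print(torch.version.cuda, torch.cuda.is_available())"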
10. Performance Tuning
--opt-sdp-attention enables PyTorch's scaled-dot-product attention, which speeds up generation; if you run out of VRAM, add --medvram (or --lowvram on very small cards). Note that the --precision full --no-half flags used earlier increase memory use, so drop them unless your GPU produces black images at half precision:
python webui.py --opt-sdp-attention
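To see the effect of these flags, watch VRAM usage and GPU utilization in a second terminal while generating images:
watch -n 1 nvidia-smi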
11. Automatic Updates
Set up a cron job to update once a week (cron runs commands with /bin/sh, so call the venv's pip directly instead of using source):
crontab -e
0 3 * * 0 cd ~/stable-diffusion && git pull && .venv/bin/pip install --upgrade -r requirements.txt
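To confirm the entry was saved:
crontab -l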
12. Security & Permissions
By default the web UI listens only on localhost, so the safest way to reach it from another machine is an SSH tunnel (see below). If you do expose it, require a login with Gradio's built-in basic auth:
python webui.py --gradio-auth user:pass
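A minimal tunnel example, assuming the UI runs on a remote host you can SSH into as user@server: forward local port 7860 to the server's port 7860, then browse to http://localhost:7860 on your own machine.
ssh -L 7860:localhost:7860 user@server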
13. Cleanup & Disk Management
Remove checkpoints you no longer use and compress old logs. Run find with -print first to preview what would be deleted, then replace it with -delete:
find ~/stable-diffusion/models -type f -mtime +30 -print
find ~/stable-diffusion/models -type f -mtime +30 -delete
gzip ~/stable-diffusion/logs/*.log
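To see which model folders take up the most space before deleting anything:
du -sh ~/stable-diffusion/models/*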
14. Verify Python & PyTorch Installation
python --version
python -c "import torch; print(torch.cuda.is_available())"
15. Run a GPU Stress Test
PyTorch has no built-in stress-test command, but you can keep the GPU busy with a short script; a minimal sketch follows.
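This sketch runs large matrix multiplications for about 30 seconds while you watch temperature and utilization with nvidia-smi in another terminal; the matrix size and duration are arbitrary choices:
python - <<'EOF'
import time
import torch

assert torch.cuda.is_available(), "CUDA GPU not detected"
x = torch.randn(4096, 4096, device="cuda")    # large random matrix on the GPU
start = time.time()
iters = 0
while time.time() - start < 30:               # run for roughly 30 seconds
    x = x @ x                                 # big matmul keeps the GPU busy
    x = x / x.norm()                          # renormalize so values stay finite
    iters += 1
torch.cuda.synchronize()                      # wait for queued GPU work to finish
print(f"{iters} iterations in {time.time() - start:.1f} s")
EOF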
16. Optimize Disk I/O
sudo apt install -y nvme-cli
sudo nvme smart-log /dev/nvme0n1    # check drive health (do not run "nvme format": it erases the disk)
echo 'vm.swappiness=10' | sudo tee -a /etc/sysctl.conf
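To apply the new swappiness value without rebooting and confirm it took effect:
sudo sysctl -p
sysctl vm.swappiness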
17. Automated Startup Script
[Unit]
Description=Stable Diffusion WebUI
After=network.target
[Service]
User=$USER
WorkingDirectory=/home/$USER/stable-diffusion
ExecStart=/home/$USER/stable-diffusion/.venv/bin/python webui.py --precision full --no-half
Restart=on-failure
[Install]
WantedBy=multi-user.target
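To install and start the service (sd-webui.service is just the example name used above):
sudo systemctl daemon-reload
sudo systemctl enable --now sd-webui.service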
Conclusion
You now have a complete, working, and automated Stable Diffusion setup on Linux: from base dependencies through performance tuning, security, and automatic startup.