Setting Up a Local LLM Chat Server with Ollama and OpenWebUI on Alpine Linux


Running your own local Large Language Model (LLM) server can be an excellent way to repurpose older hardware while maintaining control over your AI interactions. This guide will walk you through setting up Ollama and OpenWebUI on Alpine Linux, a lightweight distribution perfect for breathing new life into older machines.
Note: For the initial Alpine Linux installation and SSH setup, you'll need physical access to your computer. If you're using a desktop, connect it to a keyboard and monitor. For laptops, you can use the built-in screen and keyboard. Once SSH is configured, you can continue the rest of the setup remotely.
Why Alpine Linux?
Alpine Linux is an excellent choice for this project due to its minimal resource footprint and security-focused design. It uses musl libc and busybox utilities, resulting in smaller binary sizes and reduced memory usage compared to traditional GNU/Linux distributions. The distribution's simplicity means fewer background processes, leaving more resources available for running your LLM. Additionally, Alpine's package manager (apk) is incredibly fast, and the distribution can even run in a diskless mode with an effectively read-only root filesystem, adding an extra layer of security. These characteristics make it ideal for running server applications like Ollama on older hardware.
Why Docker?
Docker eliminates the complex, time-consuming process of manual setup by packaging everything you need into containers. Instead of wrestling with dependencies, compatibility issues, and system-specific configurations, you can simply pull a pre-built image and run it as a container. Think of it as getting a fully furnished room rather than buying and assembling each piece of furniture yourself. This approach is especially valuable for LLM services, where the traditional setup process can be particularly challenging.
Initial System Setup
For this installation, we'll be using root access throughout the process. This is necessary for system-level operations like enabling repositories and installing Docker. First, boot into Alpine Linux and run the installation wizard to install the system on your SSD or hard drive. By default, the Alpine installation media runs as a live environment from the USB stick, but we want a permanent installation on disk:
setup-alpine
Follow the prompts to configure your system. Make sure to select your SSD or hard drive when asked about the installation destination.
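If you're not sure which device name corresponds to your SSD or hard drive, you can list the attached disks from the live environment first. This is an optional check, and the device name shown in the comment is just an example of what you might see:
fdisk -l # lists disks and their sizes, e.g. /dev/sda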
Configuring SSH Access
To manage your server remotely, we'll need to enable root SSH access. Open the SSH configuration file:
vi /etc/ssh/sshd_config
Find the line containing #PermitRootLogin prohibit-password and change it to:
PermitRootLogin yes
After saving the changes, restart the SSH service to apply the new configuration:
service sshd restart
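With SSH restarted, you should be able to log in from another machine on your local network. As a quick check, replace 192.168.x.y with your server's actual IP address and run:
ssh root@192.168.x.y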
Setting Up Docker
1. Enable the Community Repository
Edit the repository configuration file:
vi /etc/apk/repositories
Locate the line ending in /community and remove the # comment character at the beginning. The file should look similar to this:
#/media/cdrom/apks
http://ftp.halifax.rwth-aachen.de/alpine/v3.13/main
http://ftp.halifax.rwth-aachen.de/alpine/v3.13/community # <-- uncomment this
#http://ftp.halifax.rwth-aachen.de/alpine/edge/main
#http://ftp.halifax.rwth-aachen.de/alpine/edge/community
#http://ftp.halifax.rwth-aachen.de/alpine/edge/testing
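If you'd rather not edit the file by hand, the same change can be made with a one-line sed command. This is just a sketch: adjust v3.13 to match your Alpine release and verify the result afterwards.
sed -i '/v3\.13\/community/s/^#//' /etc/apk/repositories # uncomment the community line
cat /etc/apk/repositories # confirm the change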
2. Install and Configure Docker
Update the package index and install Docker:
apk update
apk add docker
rc-update add docker default # Ensures Docker starts on boot
/etc/init.d/docker start
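Before moving on, it's worth confirming that the Docker daemon came up correctly. This sanity check is optional:
rc-service docker status # should report that docker has been started
docker info # prints daemon details if Docker is reachable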
Installing Ollama and OpenWebUI
Now we'll set up the LLM server components: Ollama and OpenWebUI.
# Pull and run Ollama
docker pull ollama/ollama
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
# Download and run a model (in this case, Llama 3)
docker exec -it ollama ollama run llama3
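# Note: "ollama run" drops you into an interactive chat prompt; type /bye (or press Ctrl+D) to exit.
# Optional check to confirm the model was downloaded:
docker exec -it ollama ollama list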
# Install OpenWebUI
docker run -d -p 3000:8080 \
--add-host=host.docker.internal:host-gateway \
-v open-webui:/app/backend/data \
--name open-webui \
--restart always \
ghcr.io/open-webui/open-webui:main
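At this point both containers should be running. A quick, optional way to verify:
docker ps # should show both the "ollama" and "open-webui" containers with status "Up"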
Accessing Your LLM Server
Once everything is set up, you can access the OpenWebUI interface by opening a web browser and navigating to:
http://192.168.x.y:3000
Replace 192.168.x.y with your server's local IP address. You can find your IP address by running ip addr on the server.
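If the page doesn't load, you can first check that Ollama itself is responding on the server. Note that curl isn't part of Alpine's base install, so add it if needed:
apk add curl
curl http://localhost:11434 # should reply with "Ollama is running"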
Troubleshooting Tips
If a container fails to start, check its logs using docker logs ollama or docker logs open-webui.
Ensure ports 11434 and 3000 are not already in use by other services.
If the web interface is inaccessible, verify your firewall settings with rc-service iptables status.
Memory issues can be checked with free -h to monitor system resources.
Your local LLM chat server is now ready to use! You can start chatting with the AI model through the web interface while maintaining complete control over your data and infrastructure.