Running AI and LLMs Locally with Ollama


Ever wanted to mess around with AI right on your own computer? No cloud nonsense or internet required? Ollama's your buddy here: it's an open-source tool that lets you run Large Language Models (LLMs) like Llama, Gemma, or Granite locally. Super handy for keeping things private or when you're in full offline mode (aka airgapped). I'll keep this short and sweet, with some command-line goodies to get you going. We'll assume you're on Mac/Linux for simplicity, but it works on Windows too.
Quick Install
First things first, get Ollama set up. If you're online:
On Mac (with Homebrew):
~ brew install ollama
On Linux: Curl the install script from ollama.com or grab the binary.
~ curl -fsSL https://ollama.com/install.sh | sh
Windows: Download the exe from their site.
For airgapped setups: Download the installer on a connected machine, copy it over via USB or SCP, and run it on your offline box. No internet needed after that.
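Here's a rough sketch of the SCP route for a Linux box (offline-box is a placeholder hostname, and the tarball name/URL may vary by version and architecture):
~ curl -L https://ollama.com/download/ollama-linux-amd64.tgz -o ollama-linux-amd64.tgz
~ scp ollama-linux-amd64.tgz user@offline-box:~/
Then on the offline machine, extract it into place:
~ sudo tar -C /usr -xzf ollama-linux-amd64.tgz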
Pull Models
Language models are what make the magic happen. If you're connected, pull an LLM with:
~ ollama pull granite3.3:8b
Or another one, like Google's Gemma:
~ ollama pull gemma3
See the full list of models at ollama.com/library.
For airgapped: do the pull on an online machine, then copy the whole ~/.ollama folder (where model data lives) to the same spot on your offline machine.
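For example, with rsync over SSH (offline-box is a placeholder again; heads-up that on Linux, a service-based install keeps models under /usr/share/ollama/.ollama rather than your home directory):
~ rsync -avz ~/.ollama/ user@offline-box:~/.ollama/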
Firing It Up
Once Ollama is installed and your models are in place, run one and chat away:
~ ollama run granite3.3:8b
>>> Send a message (/? for help)
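You can also pass a prompt inline for a quick one-shot answer instead of the interactive chat:
~ ollama run granite3.3:8b "Why is the sky blue?"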
If it doesn’t start up, you may need to run the Ollama server manually in the background:
~ ollama serve
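The server also exposes a REST API on localhost:11434, so you can script against it too. A minimal sketch hitting the /api/generate endpoint (assuming the Granite model from earlier):
~ curl http://localhost:11434/api/generate -d '{"model": "granite3.3:8b", "prompt": "Why is the sky blue?", "stream": false}'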
Finally
To show what models you’ve got downloaded:
~ ollama list
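And to see which models are actually loaded in memory right now (as opposed to just downloaded), there’s also:
~ ollama ps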
I’ve got this running smoothly on my M1 MacBook Pro with 32GB RAM → Granite3.3:8b is zipping along like a champ 🚀. Perfect for my offline AI sessions~ ✌🏻
Check out Ollama’s GitHub for details.
Written by

Bruce L
I’ve been rocking the DevOps journey for a decade, starting with building Cisco’s software-defined datacenters for multi-region OpenStack infrastructures. I then shifted to serverless and container deployments for financial institutions. Now, I’m deep into service meshes like Consul, automating with Ansible and Terraform, and running workloads on Kubernetes and Nomad. Stick around for some new tech and DevOps adventures!