AI-Driven Dockerfile Generator: Leveraging Ollama and Google Gemini LLMs

Introduction
In this tutorial, we'll explore how to leverage Large Language Models (LLMs), both locally and via the cloud, to generate Dockerfiles intelligently using Python. This hybrid project integrates:

- Meta's LLaMA 3.2 via Ollama
- Google Gemini 2.0 Flash via the Google AI Studio API
Project Structure
```
llm-gen-ai-project/
├── local-llm-ollama/
│   ├── dockerfile_generator.py
│   └── requirements.txt
├── hosted-llm-gemini/
│   ├── dockerfile_generator_gemini.py
│   └── requirements.txt
└── README.md
```
Project 1: Local LLM with Ollama

What is Ollama?
Ollama is like Docker, but for local LLMs: it gives you a CLI and a model registry (similar to Docker Hub) from which you can pull and run LLMs on your own machine.
Setup Instructions
```bash
# Install the Ollama CLI
curl -fsSL https://ollama.com/install.sh | sh

# Start the Ollama service
ollama serve

# Pull and run the LLaMA model
ollama pull llama3.2:1b
ollama run llama3.2:1b
```
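With `ollama serve` running, you can sanity-check that the service is reachable before wiring up the script. Here is a minimal check in Python, assuming Ollama's default port 11434 and the `requests` package (the file name `check_ollama.py` is just illustrative):

```python
# check_ollama.py -- confirm the local Ollama service is up
import requests

# GET /api/tags lists the models that have been pulled locally
resp = requests.get("http://localhost:11434/api/tags", timeout=5)
resp.raise_for_status()
models = [m["name"] for m in resp.json().get("models", [])]
print("Available models:", models)
```

If `llama3.2:1b` appears in the output, the generator script will be able to reach it.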
Next, set up the Python environment and trigger the script:

```bash
sudo apt install python3 python3.12-venv
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
python3 dockerfile_generator.py
deactivate
```
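The post doesn't reproduce `dockerfile_generator.py` itself, so here is a minimal sketch of what it could look like, assuming `requirements.txt` includes the official `ollama` Python client; the prompt text and function name are illustrative:

```python
# dockerfile_generator.py -- minimal sketch using the local LLaMA model
import ollama

PROMPT = (
    "Generate an ideal Dockerfile for a {language} application. "
    "Return only the Dockerfile content, following best practices."
)

def generate_dockerfile(language: str) -> str:
    # Chat with the model served by the local Ollama instance
    response = ollama.chat(
        model="llama3.2:1b",
        messages=[{"role": "user", "content": PROMPT.format(language=language)}],
    )
    return response["message"]["content"]

if __name__ == "__main__":
    language = input("Enter the programming language: ")
    print(generate_dockerfile(language))
```

Everything here runs on your machine; no API key or network access beyond the local Ollama service is required.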
Project 2: Hosted LLM with Google Gemini 2.0 Flash

What is Gemini?
Google Gemini 2.0 Flash is a powerful, cloud-hosted LLM with token-based pricing. You can use Google AI Studio to generate a free API key and integrate it into your Python app.
Setup Steps

```bash
python3 -m venv gemini_venv
source gemini_venv/bin/activate
pip install -r requirements.txt
pip install --upgrade google-generativeai
export GOOGLE_API_KEY="your-key-here"
python3 dockerfile_generator_gemini.py
deactivate  # when your task is finished
```
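As with Project 1, the Gemini script isn't shown in full, so here is a sketch of what `dockerfile_generator_gemini.py` could look like using the `google-generativeai` package; the prompt and function name are illustrative:

```python
# dockerfile_generator_gemini.py -- minimal sketch using the hosted Gemini API
import os
import google.generativeai as genai

# Read the key exported earlier; raises KeyError if it's missing
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

PROMPT = (
    "Generate an ideal Dockerfile for a {language} application. "
    "Return only the Dockerfile content, following best practices."
)

def generate_dockerfile(language: str) -> str:
    model = genai.GenerativeModel("gemini-2.0-flash")
    response = model.generate_content(PROMPT.format(language=language))
    return response.text

if __name__ == "__main__":
    language = input("Enter the programming language: ")
    print(generate_dockerfile(language))
```

Note that the structure mirrors the Ollama version: only the client library and model name change, which is what makes the hybrid local/hosted setup easy to maintain.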
Features

- Dynamic Dockerfile generation
- Offline and online LLM integration
- Natural language prompts inside Python
- Secure API usage
- Clean code structure and virtual environments
SEO Keywords

AI Dockerfile generator, Ollama LLM tutorial, Google Gemini Python integration, Dockerfile automation with LLM, Meta LLaMA 3.2 Ollama, Google AI Studio API key, offline AI tools, local LLM CLI, DevOps with AI, Python AI automation
Conclusion

Whether you're working on cloud-native projects or developing offline tools, this project shows how LLMs can automate boilerplate tasks like writing Dockerfiles, intelligently and efficiently.