Local Hosting of LLMs and API Call Techniques in DevOps


Overview
This project demonstrates how to use Generative AI to automate the creation of Dockerfiles for various programming languages.
Project Goal
Develop a Python script that generates a Dockerfile for a specified programming language using Generative AI.
Why Automate Dockerfile Generation?
Manual Generation Is Inefficient: Standard AI chatbots like ChatGPT can generate Dockerfiles, but pasting prompts into a chat window and copying the output back isn't an automated process.
Access Restrictions: Some organizations restrict access to external AI tools.
Better Developer Experience: Using local or hosted LLMs streamlines Dockerfile creation.
Implementation Approaches
Two main methods are used:
1. Local LLMs
Definition: LLMs running on local machines or organization servers.
Examples:
Meta's Llama
DeepSeek
IBM's Granite
Tooling:
- Ollama (similar to Docker for managing local LLMs).
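The analogy is fairly direct: Ollama pulls and runs models much like Docker pulls and runs container images. For example:
ollama pull llama3.2:1b   # like docker pull: download a model
ollama run llama3.2:1b    # like docker run: start an interactive session with the model
ollama list               # like docker images: list downloaded models
ollama rm llama3.2:1b     # like docker rmi: remove a model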
Setup Steps:
- Install Ollama:
# For Linux
curl -fsSL https://ollama.com/install.sh | sh
- Start the Ollama service:
ollama serve
- Pull the Llama 3.2 model (I am using Llama 3.2; feel free to choose any model):
ollama pull llama3.2:1b
- Set up a Python virtual environment:
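A minimal example, assuming Linux/macOS with Python 3 installed:
python3 -m venv venv
source venv/bin/activate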
- Install dependencies:
pip install -r requirements.txt
Here requirements.txt contains the script's single dependency, the ollama Python client:
ollama
- Write a Python script to generate the Dockerfile:
import ollama

# Prompt template; {language} is filled in at call time
PROMPT = """
ONLY Generate an ideal Dockerfile for {language} with best practices. Do not provide any description.
Include:
- Base image
- Installing dependencies
- Setting working directory
- Adding source code
- Running the application
"""

def generate_dockerfile(language):
    # Send a single-turn chat request to the local Ollama server;
    # the model name must match the one pulled with `ollama pull`
    response = ollama.chat(model='llama3.2:1b', messages=[{'role': 'user', 'content': PROMPT.format(language=language)}])
    return response['message']['content']

if __name__ == '__main__':
    language = input("Enter the programming language: ")
    dockerfile = generate_dockerfile(language)
    print("\nGenerated Dockerfile:\n")
    print(dockerfile)
- Run the application:
python3 generate_dockerfile.py
Example Usage
python3 generate_dockerfile.py
Enter the programming language: python
# Generated Dockerfile will be displayed...
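As an aside, the ollama Python client is a thin wrapper over the local Ollama server's REST API (served at http://localhost:11434 by default), so the same request can be made directly with curl:
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2:1b",
  "messages": [{"role": "user", "content": "Generate an ideal Dockerfile for python"}],
  "stream": false
}'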
Pros:
✅ Full control over data privacy and security.
Cons:
❌ Requires infrastructure setup and maintenance (e.g., GPUs for large models).
❌ Scaling to multiple users can be challenging.
2. Hosted LLMs
Definition: Cloud-based AI models accessible via APIs.
Examples:
OpenAI (ChatGPT API)
Google (Gemini API)
Setup Steps:
- Install dependencies:
pip install -r requirements.txt
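Here requirements.txt would list the Gemini client library; judging by the imports in the script below, that is the google-generativeai package:
pip install google-generativeai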
- Obtain an API key from the chosen provider.
- Set up API authentication:
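For example, export the key as an environment variable rather than hardcoding it (the value below is a placeholder):
export GOOGLE_API_KEY="your-api-key"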
- Write a Python script to generate the Dockerfile:
import os
import google.generativeai as genai

# Set your API key here; the key should not be hardcoded in real use.
# Prefer exporting GOOGLE_API_KEY in your shell instead.
os.environ["GOOGLE_API_KEY"] = "xxxxxxxxxxxxxxxxxxxxxxxx"

# Configure the Gemini client and model
genai.configure(api_key=os.getenv("GOOGLE_API_KEY"))
model = genai.GenerativeModel('gemini-1.5-pro')

# Prompt template; {language} is filled in at call time
PROMPT = """
Generate an ideal Dockerfile for {language} with best practices. Just share the Dockerfile without any explanation, between two lines to make copying the Dockerfile easy.
Include:
- Base image
- Installing dependencies
- Setting working directory
- Adding source code
- Running the application
"""

def generate_dockerfile(language):
    response = model.generate_content(PROMPT.format(language=language))
    return response.text

if __name__ == '__main__':
    language = input("Enter the programming language: ")
    dockerfile = generate_dockerfile(language)
    print("\nGenerated Dockerfile:\n")
    print(dockerfile)
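As with the local version, run the script and enter a language at the prompt:
python3 generate_dockerfile.py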
Pros:
✅ No infrastructure management required.
Cons:
❌ Potential data security and privacy concerns.
❌ Costs are based on API usage (tokens consumed).
❌ Rate limits may apply.
Key Takeaways
Prompt Engineering: Clearly defined prompts ensure high-quality Dockerfile generation.
Trade-offs: Local models offer security but require infrastructure, while hosted models provide ease of use but may have privacy and cost concerns.
API Calls: Both approaches involve interacting with LLMs through API requests.