Building Le Petit Explorateur: My Journey Through the 2025 GenAI Bootcamp


As a DevOps engineer looking to expand my horizons, I signed up for Andrew Brown's Free GenAI Cloud Project Bootcamp with cautious optimism. Having spent years automating infrastructure and managing deployment pipelines, I wondered how my skills would transfer to the world of generative AI. Now, after an intensive two-month journey from February to April 2025, I'm excited to share how this bootcamp fundamentally changed my technical trajectory.
The Starting Point: Language Learning Platform Challenge
The bootcamp assigned me to build an AI-powered language learning platform. Coming from a DevOps background, I initially felt out of my depth. I was used to supporting applications, not creating them from scratch. The assignment seemed daunting: create interactive games, implement AI tutors, and make everything work offline-first with robust fallbacks.
But as I soon discovered, my DevOps mindset of building resilient, fault-tolerant systems was exactly what GenAI development needed.
Week-by-Week: Building Skills Across the GenAI Spectrum
Pre-Week 1: Architecting for AI
Before writing a single line of code, we focused on architecture—familiar territory for a DevOps engineer. I designed a system that would become the foundation for all my future projects.
This architecture prioritized resilience—a concept I'd championed in DevOps for years, but now applied to the unpredictable world of AI.
Week 1: The Sentence Constructor Challenge
My first concrete project was building a French Sentence Constructor game. The assignment seemed simple: create a drag-and-drop interface for building French sentences. The reality was far more complex.
I discovered that different AI models had vastly different capabilities when it came to language instruction. I built a comparative analysis framework to evaluate responses from GPT, Claude, Gemini, LLaMA-3, Mistral, Grok, and Deepseek.
The results surprised me:
| Model | Prompt Adherence | Teaching Approach | Overall Effectiveness |
| --- | --- | --- | --- |
| Claude | High | Structured, professional | Excellent |
| Gemini | High | Interactive, progressive | Very Good |
| LLaMA-3 | High | Methodical, example-based | Very Good |
| ChatGPT | Partial | Interactive, encouraging | Good but needed refinement |
| Grok | High | Step-by-step guidance | Good |
| Mistral | Poor | Generic response | Poor |
| Deepseek | Poor | Oversimplified | Poor |
This analysis gave me a nuanced understanding of AI capabilities that would serve me throughout the bootcamp.
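For anyone curious, the comparison itself was not elaborate. Here is a simplified sketch of the harness, assuming each provider is reachable through an OpenAI-compatible chat endpoint; the endpoint URLs, model IDs, and prompt shown here are illustrative rather than my exact setup:

```javascript
// Illustrative harness: send the same tutoring prompt to each model and collect
// the replies for manual scoring. Endpoints, model IDs, and keys are placeholders.
const teachingPrompt =
  "You are a French tutor. Help the student build the sentence " +
  "'Je voudrais un café, s'il vous plaît' without giving away the answer.";

const providers = [
  { name: 'ChatGPT', url: 'https://api.openai.com/v1/chat/completions', model: 'gpt-3.5-turbo', key: process.env.OPENAI_API_KEY },
  { name: 'LLaMA-3 (Groq)', url: 'https://api.groq.com/openai/v1/chat/completions', model: 'llama3-70b-8192', key: process.env.GROQ_API_KEY },
  // ...one entry per model under test
];

async function compareModels() {
  const results = [];
  for (const p of providers) {
    const res = await fetch(p.url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${p.key}` },
      body: JSON.stringify({ model: p.model, messages: [{ role: 'user', content: teachingPrompt }] }),
    });
    const data = await res.json();
    results.push({ model: p.name, reply: data.choices[0].message.content });
  }
  return results; // then scored by hand against the rubric in the table above
}
```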
Week 2: Multi-Modalities - Beyond Text
Week 2 pushed me out of my comfort zone by introducing audio and visual elements. I built two applications:
- Vocab Importer: A Streamlit application that uses Groq LLM to generate vocabulary lists with:
  - Words in target language
  - English translations
  - IPA pronunciation
  - Part of speech
  - Grammatical gender
- Writing Practice App: An application for practicing French handwriting with OCR feedback.
The writing practice app particularly challenged me—I had to work with:
- Canvas manipulation in the browser
- Tesseract OCR integration (sketched below)
- Feedback systems for handwriting
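The OCR step itself ends up being only a few lines once the canvas drawing is in hand. Here is a simplified sketch using tesseract.js with its French language pack; the feedback logic is illustrative, not my exact implementation:

```javascript
import Tesseract from 'tesseract.js';

// Simplified sketch: read the handwriting canvas and compare it to the target word.
async function checkHandwriting(canvas, expectedWord) {
  const image = canvas.toDataURL('image/png');               // snapshot of the drawing
  const { data } = await Tesseract.recognize(image, 'fra');  // 'fra' = French model
  const recognized = data.text.trim().toLowerCase();

  if (recognized === expectedWord.toLowerCase()) {
    return { correct: true, feedback: 'Parfait !' };
  }
  return {
    correct: false,
    feedback: `I read "${recognized}". Try writing "${expectedWord}" again.`,
  };
}
```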
This multi-modal experience showed me that AI wasn't just about text—it was about creating rich, interactive experiences.
Week 3: Listening Comprehension and AWS Integration
Week 3 took us deeper into audio processing with AWS services. I built a French Listening Comprehension application that:
- Used Amazon Polly for text-to-speech (snippet below)
- Extracted transcripts from YouTube videos
- Integrated with Amazon Bedrock for chat functionality
- Implemented question generation and structured data processing
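The Polly call is a thin wrapper. This is roughly what it looks like with the AWS SDK for JavaScript v3; the region and voice are placeholders for whatever you configure:

```javascript
import { PollyClient, SynthesizeSpeechCommand } from '@aws-sdk/client-polly';

const polly = new PollyClient({ region: 'us-east-1' }); // region is a placeholder

// Turn a French sentence into MP3 audio for the listening exercises.
async function synthesizeFrench(text) {
  const { AudioStream } = await polly.send(new SynthesizeSpeechCommand({
    Text: text,
    OutputFormat: 'mp3',
    VoiceId: 'Lea',         // one of Polly's French voices
    LanguageCode: 'fr-FR',
    Engine: 'neural',
  }));
  return AudioStream; // stream it to the browser or write it to a file
}
```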
Working with AWS services felt familiar from my DevOps background, but applying them to language learning was a creative challenge. I particularly enjoyed implementing RAG (Retrieval Augmented Generation) for context-aware responses—a technique I'd later use extensively.
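Conceptually the RAG loop is short: embed the learner's question, pull the most relevant transcript chunks, and let the model answer using only that context. A stripped-down sketch, where `embed`, `transcriptChunks`, and `askModel` are hypothetical stand-ins for the embedding call, the chunk store, and the chat model call:

```javascript
// Minimal RAG sketch. `embed`, `transcriptChunks`, and `askModel` are hypothetical
// stand-ins for the embedding call, the stored chunks, and the chat model call.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

async function answerWithContext(question) {
  const qVec = await embed(question);
  const topChunks = transcriptChunks
    .map(c => ({ ...c, score: cosine(qVec, c.vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, 3);

  const context = topChunks.map(c => c.text).join('\n---\n');
  return askModel(
    `Answer using only this transcript excerpt:\n${context}\n\nQuestion: ${question}`
  );
}
```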
Week 4: Containers & Agents with OPEA
Week 4 brought me back to familiar territory: containerization, but with a GenAI twist. We implemented the OPEA (Open Platform for Enterprise AI) architecture to build:
- Text-to-Image Generator: A containerized service using Stable Diffusion
- Image-to-Video Converter: A service using Stable Video Diffusion
My DevOps expertise shone here as I implemented:
- Docker containerization with proper resource limits (see the compose sketch below)
- Health monitoring endpoints
- API gateways
- Multi-level fallback strategies
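Most of that is plain container hygiene rather than anything AI-specific. A trimmed-down docker-compose sketch along these lines; the service name, image, and limits are illustrative, not the exact OPEA configuration:

```yaml
services:
  text-to-image:
    image: text-to-image:latest      # illustrative image name
    ports:
      - "8080:8080"
    deploy:
      resources:
        limits:
          cpus: "4"
          memory: 8G
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 5s
      retries: 3
```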
I also built a French Learning VisualQnA service that used the OPEA MegaService architecture to:
- Recognize objects in images
- Teach French vocabulary related to those objects
- Create interactive quizzes based on the images
Week 5: Agentic AI with Song-Vocab
Week 5 introduced me to agentic AI—where multiple specialized AI agents collaborate to solve problems. I built a Song-Vocab application that:
- Searches and retrieves lyrics for French songs
- Translates lyrics from French to English
- Extracts key French vocabulary
- Provides definitions and example sentences
This project followed an agent workflow, sketched below, where:
1. A Lyrics Agent retrieves the original French lyrics
2. A Translation Agent translates them to English
3. A Vocabulary Agent extracts key terms with definitions
4. An Agent Manager orchestrates the entire workflow
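The orchestration layer is mostly plumbing. A condensed sketch of the manager, where `lyricsAgent`, `translationAgent`, and `vocabularyAgent` are illustrative names for the individual agents, each wrapping its own prompts and tool calls:

```javascript
// Condensed agent-manager sketch; the agent objects are illustrative stand-ins.
async function songVocabWorkflow(songTitle, artist) {
  // 1. Lyrics Agent: find and return the original French lyrics
  const lyrics = await lyricsAgent.fetchLyrics(songTitle, artist);

  // 2. Translation Agent: produce an English translation
  const translation = await translationAgent.translate(lyrics, { from: 'fr', to: 'en' });

  // 3. Vocabulary Agent: pull out key terms with definitions and examples
  const vocabulary = await vocabularyAgent.extract(lyrics);

  // Agent Manager: assemble the final study sheet
  return { songTitle, artist, lyrics, translation, vocabulary };
}
```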
The concept of specialized agents working together mirrored my DevOps experience with microservices, but with an AI twist.
Week 6 and Beyond: Le Petit Explorateur - Bringing It All Together
My final project, Le Petit Explorateur, combined everything I'd learned into a comprehensive French learning platform with five core games:
- Phrase Constructor: Building French sentences with drag-and-drop words
- French Hangman: Guessing French words letter by letter
- Quiz Challenge: Testing knowledge with timed quizzes
- Daily Quick Learn: Short vocabulary lessons with streak tracking
- AI Language Buddy: Conversational practice with an AI tutor
What made this project special wasn't just the features, but the engineering behind them. I implemented:
- Adaptive Learning: Content that adjusts to the user's skill level
- Personalized Content Generation: AI-created questions and examples
- Smart Fallbacks: Graceful degradation when AI services fail
- Offline Functionality: Browser caching and IndexedDB for offline use (see the sketch below)
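The offline layer is the least glamorous and most useful part. Here is a small sketch of the IndexedDB cache using the raw browser API; the database and store names are illustrative, and in practice a helper library such as idb makes this less verbose:

```javascript
// Minimal IndexedDB cache sketch: store generated lessons so the games keep
// working offline. Database and store names here are illustrative.
function openLessonDB() {
  return new Promise((resolve, reject) => {
    const request = indexedDB.open('petit-explorateur', 1);
    request.onupgradeneeded = () => {
      request.result.createObjectStore('lessons', { keyPath: 'id' });
    };
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}

async function cacheLesson(lesson) {
  const db = await openLessonDB();
  db.transaction('lessons', 'readwrite').objectStore('lessons').put(lesson);
}

async function getCachedLesson(id) {
  const db = await openLessonDB();
  return new Promise((resolve) => {
    const req = db.transaction('lessons').objectStore('lessons').get(id);
    req.onsuccess = () => resolve(req.result); // undefined if not cached
  });
}
```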
The AI Language Buddy taught me the art of prompt engineering—moving from simple prompts to sophisticated instructions that guided the AI to provide appropriate language tutoring.
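To give a flavor of what those instructions looked like, here is a simplified version of the kind of system prompt the buddy converged on; the real prompt is longer and the wording here is illustrative:

```javascript
// Simplified example of the tutoring system prompt; the production prompt is longer.
function buildBuddyPrompt(userLevel) {
  return `You are a friendly French tutor for a ${userLevel} learner.
- Reply in simple French first, then give an English translation in parentheses.
- Keep replies under three sentences.
- If the learner makes a mistake, gently correct it and show the corrected sentence.
- Never introduce more than two new words per reply.
- End every reply with one short follow-up question in French.`;
}
```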
The DevOps-to-GenAI Connection: Key Insights
As I progressed through the bootcamp, I realized my DevOps background gave me unique advantages:
1. Error Handling Expertise
In DevOps, we design for failure. This mindset was invaluable for AI development, where services are inherently inconsistent. I implemented multi-level fallback strategies:
```javascript
async function getHangmanWords(category) {
  try {
    // First try the backend API
    const response = await api.get(`/ai/hangman-words`);
    return response.data;
  } catch (backendError) {
    try {
      // Direct OpenAI call as first fallback
      const result = await openai.post('/chat/completions', {
        model: "gpt-3.5-turbo",
        messages: [/* ... */]
      });
      return JSON.parse(result.data.choices[0].message.content);
    } catch (openaiError) {
      // Final hardcoded fallback
      return {
        words: fallbackVocabulary[category] || fallbackVocabulary.animals
      };
    }
  }
}
```
2. Cost-Aware Architecture
DevOps taught me to optimize resource usage. In GenAI, this translated to:
- Caching responses to reduce API calls
- Implementing tiered model selection (using GPT-3.5 for simple tasks, GPT-4 for complex ones), sketched below
- Building offline functionality to reduce dependency on cloud services
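Both ideas fit in a few lines. A sketch of the cache-then-route pattern, reusing the axios-style `openai` client from the fallback example above; the complexity heuristic is deliberately simplistic and purely illustrative:

```javascript
// Sketch of cost-aware routing: cache responses and send only complex requests
// to the more expensive model. The heuristic below is deliberately simplistic.
const responseCache = new Map();

async function askWithBudget(prompt) {
  if (responseCache.has(prompt)) return responseCache.get(prompt); // no API call at all

  const model = prompt.length > 400 || prompt.includes('explain grammar')
    ? 'gpt-4'          // complex requests get the stronger model
    : 'gpt-3.5-turbo'; // everything else stays on the cheaper one

  const result = await openai.post('/chat/completions', {
    model,
    messages: [{ role: 'user', content: prompt }],
  });

  const answer = result.data.choices[0].message.content;
  responseCache.set(prompt, answer);
  return answer;
}
```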
3. User Experience Under Constraints
DevOps engineers understand system constraints. I applied this to managing AI latency:
- Using skeleton screens instead of spinners
- Prefetching likely content (see the sketch below)
- Making waiting part of the experience
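Prefetching in particular is cheap to add. A small sketch: while the learner is still answering the current question, the next one is already being generated; `generateQuizQuestion` is an illustrative stand-in for whatever AI call produces the content:

```javascript
// Illustrative prefetch: start generating the next quiz question while the
// learner is still answering the current one, so the wait mostly disappears.
let nextQuestionPromise = null;

function prefetchNextQuestion(topic) {
  nextQuestionPromise = generateQuizQuestion(topic); // fire and forget
}

async function showNextQuestion(topic) {
  // Use the prefetched question if one was queued; otherwise fetch it now.
  const question = await (nextQuestionPromise ?? generateQuizQuestion(topic));
  prefetchNextQuestion(topic); // immediately queue the one after
  return question;
}
```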
4. Monitoring and Observability
This implementation is still pending, but it would add comprehensive logging and monitoring (sketched after this list):
- Tracking model loading time
- Measuring inference time
- Monitoring memory usage
- Calculating request success/failure rates
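Even though this isn't built yet, the shape is familiar from DevOps. A possible sketch of the wrapper I have in mind, where the `metrics` object stands in for a real sink such as CloudWatch or Prometheus:

```javascript
// Possible observability wrapper (not yet implemented): time each model call
// and count successes/failures. The `metrics` object is a placeholder sink.
const metrics = { calls: 0, failures: 0, totalLatencyMs: 0 };

async function withInferenceMetrics(label, fn) {
  const start = performance.now();
  metrics.calls += 1;
  try {
    return await fn();
  } catch (err) {
    metrics.failures += 1;
    throw err;
  } finally {
    const latency = performance.now() - start;
    metrics.totalLatencyMs += latency;
    console.log(`[${label}] latency=${latency.toFixed(0)}ms ` +
                `successRate=${((1 - metrics.failures / metrics.calls) * 100).toFixed(1)}%`);
  }
}
```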
Beyond Technical: The Human Side of AI
Perhaps the most surprising lesson was about the human side of AI development. Building language learning tools forced me to think about:
- Psychology of Learning: How streak tracking and rewards motivate users
- Expectation Management: How to set appropriate expectations for AI capabilities
- Progressive Disclosure: Gradually introducing complexity as users become more comfortable
What's Next
This bootcamp transformed me from a DevOps engineer with AI curiosity to a confident GenAI developer. I'm excited to continue developing Le Petit Explorateur with plans for:
- Speech recognition for pronunciation practice
- Spaced repetition algorithms for vocabulary
- Progressive curriculum with achievement unlocks
- Support for additional languages
Try It Yourself!
If you're interested in exploring my final project:
- Demo Video: https://youtu.be/ORwg9qwdXXw
- GitHub Repository: https://github.com/ambekadeshmukh/free-genai-bootcamp-2025
- GenAI Bootcamp Videos: https://www.youtube.com/watch?v=R0z7xSuRK70&list=PLBfufR7vyJJ69c9MNlOKtO2w2KU5VzLJV
For all the DevOps engineers considering the leap into GenAI: your skills are more relevant than you might think. The discipline of building resilient systems transfers beautifully to the world of AI, where unpredictability is the only certainty.
Thank you to Andrew Brown (ExamPro) and all the incredible instructors in this bootcamp for this transformative experience!