What Training an AI Model Means — A Beginner-Friendly Guide

Table of contents
- What Does It Mean to "Train" an AI?
- Different Types of Models (aka, the Tools in the Toolbox)
- How AI Models Are Trained (and the Different Styles)
- Most Models Go Through These 3 Phases:
- Some Examples
- Steps to Train a Model
- 1. Prepare Your Data
- 2. Choose Your Model
- 3. Train It
- 4. Validate It
- 5. Test It
- 6. Deal with the Headaches
- How We Did It at Horus Labs (Without Training a Model from Scratch)
- Final Thoughts

Have you ever wondered how AI models like ChatGPT, Claude, or even that chatbot on your favorite shopping site work? I used to think they just stored information like a giant database, and that when you asked a question, they retrieved an answer, like querying Google Docs.
But recently, while working on one of our products at Horus Labs (we call it Coloniz), we had to integrate an AI assistant, and I realized I was way off. If you’ve ever had that same idea or you’re just curious how AI models are trained, this post is for you.
Disclaimer: I’m not an AI expert, just someone who’s been exploring and learning along the way. So if you spot anything off, feel free to correct me.
What Does It Mean to "Train" an AI?
When we say “train” in real life, we usually mean helping someone (or even a pet) learn a skill. In the AI world, it’s kind of the same; we’re teaching the model to recognize patterns in data. Not facts, patterns.
For example, if you show it a bunch of emails labeled "spam" or "not spam," over time it starts figuring out what spam usually looks like.
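Here's a tiny sketch of that spam idea using scikit-learn. The emails and labels are made up, but the shape of the workflow is the real thing: show the model labeled examples and let it find the patterns.

```python
# A minimal supervised-learning sketch: spam vs. not spam.
# The emails and labels below are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "WIN a FREE prize, click now!!!",
    "Meeting moved to 3pm tomorrow",
    "Cheap meds, limited offer, buy now",
    "Can you review my pull request?",
]
labels = ["spam", "not spam", "spam", "not spam"]

# Turn the text into word counts, then learn which words tend to show up in spam.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["free prize, click here"]))  # likely ['spam']
```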
Different Types of Models (aka, the Tools in the Toolbox)
Here are some of the model types I’ve come across. Each one is suited to a particular kind of problem:
Linear Regression: Predicts numbers (e.g., sales next month).
Logistic Regression: Binary outcomes (yes/no, fraud/not fraud).
Decision Trees: Breaks things into branches to make decisions.
Random Forests: Like decision trees, but many at once for better accuracy.
SVMs (Support Vector Machines): Great for sorting things into clear categories.
Neural Networks: These are the real MVPs behind things like ChatGPT. They’re loosely inspired by the human brain, using layers of “neurons.”
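To make that list a bit more concrete, here's roughly what those tools look like in code with scikit-learn. These are just default instantiations; a real project would tune their settings.

```python
# The "toolbox" from the list above, in scikit-learn form.
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

models = {
    "linear_regression": LinearRegression(),      # predicts numbers
    "logistic_regression": LogisticRegression(),  # yes/no outcomes
    "decision_tree": DecisionTreeClassifier(),    # branching decisions
    "random_forest": RandomForestClassifier(),    # many trees voting together
    "svm": SVC(),                                 # clear-cut categories
    "neural_network": MLPClassifier(),            # layers of "neurons"
}

# Every one of these is trained the same way: model.fit(X, y)
```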
How AI Models Are Trained (and the Different Styles)
Depending on your goal and the data you have, there are different ways to train a model:
Supervised Learning: Think of it like a teacher-student relationship. The model learns from labeled examples. (e.g., “This is a cat. This is not a cat.”)
Unsupervised Learning: No labels here. The model just looks for patterns on its own.
Semi-supervised: A mix of both: some labels and lots of unlabeled data.
Reinforcement Learning: The model learns through trial and error (like playing a game), getting rewarded for doing well.
Most Models Go Through These 3 Phases:
Especially the big ones, like GPT-style models. Here’s a breakdown:
| Phase | Technique | What’s Going On |
| --- | --- | --- |
| Pretraining | Unsupervised / Self-supervised | The model soaks up knowledge from tons of raw data. There are no labels; it just learns language structure, facts, and concepts. |
| Fine-tuning | Supervised Learning | Now we guide the model toward specific tasks (like customer support, coding, etc.) with labeled data. |
| Advanced Fine-tuning | RLHF (Reinforcement Learning from Human Feedback) | This is where humans give feedback to make sure the model’s responses are safe, helpful, and aligned with what we want. |
Side note: You don’t have to go through all three phases; it really depends on what you’re building.
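To make the fine-tuning row a little more concrete, here's a rough sketch using the Hugging Face Transformers library. The model name, example texts, and labels are placeholders, and a real run would need far more data and compute than this.

```python
# A rough sketch of supervised fine-tuning with Hugging Face Transformers.
# Model name, texts, and labels are placeholders, not a real dataset.
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

texts = ["How do I reset my password?", "I want a refund for my last order"]
labels = [0, 1]  # 0 = account question, 1 = billing question (hypothetical task)

encodings = tokenizer(texts, truncation=True, padding=True)

class TinyDataset(torch.utils.data.Dataset):
    """Wraps tokenized texts and labels so the Trainer can read them."""
    def __init__(self, encodings, labels):
        self.encodings, self.labels = encodings, labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, idx):
        item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetune-out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=TinyDataset(encodings, labels),
)
trainer.train()  # nudges the pretrained model toward our labeled task
```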
Some Examples
Supervised only: Spam filters, product recommendation engines.
Unsupervised only: Grouping users based on browsing patterns (there's a tiny sketch of this right after the list).
Reinforcement Learning: Training AI to play chess or drive a car in a simulation.
Semi-supervised: Diagnosing diseases with a few labeled X-rays + a bunch of unlabeled ones.
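Here's what that unsupervised "grouping users" example can look like in code. The behaviour numbers are invented (say, pages viewed and minutes on site); the point is that we never tell the model which group is which.

```python
# A tiny unsupervised example: no labels, the model just groups similar points.
import numpy as np
from sklearn.cluster import KMeans

user_behaviour = np.array([
    [2, 5], [3, 6], [2, 4],       # light browsers
    [20, 60], [22, 55], [19, 58]  # heavy browsers
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=42)
groups = kmeans.fit_predict(user_behaviour)
print(groups)  # e.g. [0 0 0 1 1 1]: the model found two groups on its own
```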
Steps to Train a Model
Here’s the basic 6-step workflow most people follow:
1. Prepare Your Data
You start by collecting and cleaning your data. For us, it was a mix of community conversations, docs, and posts from Coloniz.
Sources could be:
Web scraping
In-house data
Public datasets
Sensor data
Even synthetic data (made up but useful)
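Whatever the source, the data usually needs cleaning before it's useful. Here's a small, hypothetical pandas pass; the file name and columns are made up for illustration.

```python
# A hypothetical data-cleaning step with pandas.
# "community_posts.csv" and its "text" column are made-up names.
import pandas as pd

df = pd.read_csv("community_posts.csv")

# Typical cleanup: drop empty rows, remove duplicates, normalize whitespace and case.
df = df.dropna(subset=["text"])
df = df.drop_duplicates(subset=["text"])
df["text"] = df["text"].str.strip().str.lower()

# Keep only posts long enough to be useful as training examples.
df = df[df["text"].str.len() > 20]

df.to_csv("clean_posts.csv", index=False)
```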
2. Choose Your Model
This depends on what you want the AI to do. Answer questions? Predict values? Categorize things?
You also have to factor in:
Size of the data
How much compute power you have
How transparent you want the results to be
3. Train It
This is the main show. The model tries to predict something → we check how off it is → we adjust its internal weights → repeat. It’s like giving the model feedback after every guess.
⚠️ One trap to watch for: overfitting. That’s when the model gets too good at the training data but sucks at new stuff.
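Here's that "guess, check, adjust, repeat" loop in the simplest case I can think of: fitting a straight line with plain NumPy. The data is synthetic, and real models have millions of weights instead of two, but the feedback loop is the same.

```python
# A bare-bones training loop: predict, measure the error, adjust the weights, repeat.
import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.0, 5.0, 7.0, 9.0])   # the underlying pattern is y = 2x + 1

w, b = 0.0, 0.0        # the model's internal weights, starting from nothing
learning_rate = 0.05

for step in range(2000):
    predictions = w * X + b                    # 1. the model makes a guess
    error = predictions - y                    # 2. we check how far off it is
    w -= learning_rate * (error * X).mean()    # 3. we nudge the weights to shrink the error
    b -= learning_rate * error.mean()          # 4. ...and repeat

print(round(w, 2), round(b, 2))  # ends up close to 2.0 and 1.0
```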
4. Validate It
We test it on data it hasn’t seen before, kind of like a mock exam. If performance tanks, we may need to go back, clean up the data, or simplify the model.
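In code, that "mock exam" is usually just a held-out split. Here's a minimal scikit-learn version, using a generated toy dataset so it runs on its own:

```python
# Holding out data the model never trains on, then scoring it there.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# make_classification generates a toy dataset so the example is self-contained.
X, y = make_classification(n_samples=500, n_features=10, random_state=42)

# 80% of the data is used for training, 20% is kept aside for validation.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

print(model.score(X_val, y_val))  # accuracy on data the model has never seen
```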
5. Test It
This is the real-world test. We check how it performs using metrics like:
Accuracy
Precision & recall
F1 score
AUC
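Continuing from the validation sketch above (so `model`, `X_val`, and `y_val` are the same hypothetical pieces), scikit-learn can compute all of these in a few lines:

```python
# Computing the metrics listed above from true labels and predictions.
# Assumes model, X_val, and y_val from the earlier validation sketch.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

y_pred = model.predict(X_val)
y_prob = model.predict_proba(X_val)[:, 1]  # probabilities are needed for AUC

print("accuracy :", accuracy_score(y_val, y_pred))
print("precision:", precision_score(y_val, y_pred))
print("recall   :", recall_score(y_val, y_pred))
print("f1       :", f1_score(y_val, y_pred))
print("auc      :", roc_auc_score(y_val, y_prob))
```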
6. Deal with the Headaches
Training can get messy. Here are some things we ran into:
Not enough quality data
Long training times
GPU limitations
Biased results
Getting hyperparameters right
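For that last one, a common way to take the guesswork out of hyperparameters is a small grid search. Here's a sketch with scikit-learn, using a toy dataset and an example parameter grid; the right values depend entirely on your data.

```python
# Trying a few hyperparameter combinations and keeping the best one.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, n_features=10, random_state=42)

param_grid = {
    "n_estimators": [50, 100, 200],  # how many trees in the forest
    "max_depth": [3, 5, None],       # how deep each tree may grow
}

search = GridSearchCV(RandomForestClassifier(random_state=42), param_grid, cv=5)
search.fit(X, y)

print(search.best_params_)  # the combination that scored best in cross-validation
```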
How We Did It at Horus Labs (Without Training a Model from Scratch)
We didn’t build a giant model like GPT from the ground up (that would cost millions of dollars). Instead, we built on GPT-4 and added something called a RAG system (Retrieval-Augmented Generation).
What it means: Instead of making GPT guess everything from memory, we give it fresh context, like documents from our community, so it gives smarter, more accurate answers.
Think of it like this:
“Hey GPT, before you answer, here’s what our users have been talking about.”
This method is:
Way more cost-effective
Easier to maintain
Super relevant to our product (Coloniz)
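To show the idea (not our actual production code), here's a heavily simplified sketch. Real RAG setups usually use embeddings and a vector database for retrieval; plain word overlap stands in for that here, the documents are placeholder text, and the OpenAI call assumes an API key is configured.

```python
# A heavily simplified RAG sketch: find the most relevant document, hand it to the model.
# Not production code: real retrieval uses embeddings / a vector DB, not word overlap.
from openai import OpenAI  # assumes OPENAI_API_KEY is set in the environment

documents = [
    "Doc 1: how new members join a community (placeholder text).",
    "Doc 2: how contributor rewards are calculated (placeholder text).",
    "Doc 3: how to create and vote on a proposal (placeholder text).",
]

def retrieve(question: str) -> str:
    """Pick the document sharing the most words with the question (toy retrieval)."""
    q_words = set(question.lower().split())
    return max(documents, key=lambda doc: len(q_words & set(doc.lower().split())))

def answer(question: str) -> str:
    context = retrieve(question)  # "here's what our users have been talking about"
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": f"Answer using this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(answer("How do I create a proposal?"))
```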
Final Thoughts
So yeah, training AI isn’t just about dumping data into a model. It’s a layered process with phases, techniques, and lots of feedback loops. I’ve come to appreciate how much thought goes into just one intelligent response.
If you’re curious or trying to do something similar for your product, I hope this helped demystify things a bit!
I'll also be writing follow-up posts on related topics like the RAG system, prompt engineering, and fine-tuning, so if you're into that kind of thing, keep an eye out.
Let me know your thoughts or questions.