📘 Day 2 – Understanding Prediction & Simple Classification


(from the book Make Your Own Neural Network)
Today I continued reading Make Your Own Neural Network by Tariq Rashid — and things are slowly starting to make sense. This part focused more on how machines “predict” and how they can be trained to classify things, using very simple math.
Here’s what I understood from today’s read:
⚙️ Computers Are Just Big Calculators
The book first said that computers are basically calculators: they perform simple arithmetic operations and give output, and they're super fast at it.
Even when we’re watching TV or streaming music, it’s not complicated logic — it’s just a ton of basic arithmetic. That’s not AI, that’s just a fast calculator.
🧍♂️ But They Can’t See Like Us
Where computers fall short is in things like recognizing images.
We, as humans, are way better at looking at pictures and telling the difference between a cat, a human, or a tree.
Computers can do millions of calculations instantly — but when it comes to seeing and understanding visuals, they often fail.
Humans can’t do a million calculations per second, but we’re insanely good at spotting details in images.
🔁 Prediction Example — Converting KM to Miles
Then the book introduced a simple predicting machine using the kilometers-to-miles example.
Just like us, neural networks take input, analyze it, and give output.
We already know km and miles are both distance units and have a linear relationship. That means if you double km, miles also doubles. So the formula is:
miles = kilometers × c
Let’s say kilometers = 100. If we assume c = 0.5, we get 50 miles — which is way off (the actual value should be 62.137), so we get an error of 12.137.
Then we try again — c = 0.6 gives 60 miles — much closer.
c = 0.7 gives 70 — which overshoots the target.
So what we’re doing here is adjusting the parameter c to get a better output. Here, c is the parameter we can tune: the machine is “learning” through trial and error.
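To make that trial-and-error loop concrete, here's a tiny Python sketch of my own (not from the book). It nudges c up or down depending on whether the prediction is too low or too high; the fixed step of 0.1 is just an assumption to mirror the 0.5 → 0.6 → 0.7 guesses above.

```python
# Minimal sketch (my own, not the book's code) of the trial-and-error idea:
# nudge c up or down depending on the sign of the error.

KM = 100
TRUE_MILES = 62.137   # the answer we are aiming for

c = 0.5               # initial guess for the parameter
step = 0.1            # how much we nudge c on each attempt (assumed)

for attempt in range(4):
    predicted = KM * c
    error = TRUE_MILES - predicted
    print(f"c = {c:.1f} -> {predicted:.1f} miles (error {error:+.3f})")
    if error > 0:
        c += step     # prediction too low, push c up
    else:
        c -= step     # prediction too high (overshoot), pull c back down
```

Running it prints the same progression as the book's example: 50 miles, then 60, then 70, overshooting and coming back.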
🐛 Bug Classifier — Intro to Classification
Now moving on to classification.
The book used an example of bugs — ladybugs and caterpillars — plotted on a graph.
We can put a line in the middle of the graph to help us separate the two types visually.
This is our classifier.
And just like in the km-to-miles example, we can train this classifier too.
We use the equation:
y = ax
Here, a is the parameter. By changing a, we can rotate the line until it separates the two types of bugs properly.
If the line ends up like the second picture in the book, with the two bug types on opposite sides, the classifier is doing its job.
Even if there's noise or weird cases, a good classifier should still work — and that’s exactly what happens in real-world prediction models.
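Here's a rough Python sketch of that dividing line in action. The slope a, the (width, length) measurements, and the rule "above the line means caterpillar" are all toy assumptions of mine, just to show how y = ax acts as a classifier.

```python
# Toy classifier sketch (my own numbers, not the book's): the line y = a*x
# splits the width/length plane; points above it count as caterpillars,
# points below it as ladybugs.

a = 0.5  # slope of the dividing line, the parameter we can tune

# (name, width, length) -- hypothetical measurements
bugs = [
    ("ladybug",     3.0, 1.0),   # wide and short
    ("caterpillar", 1.0, 3.0),   # thin and long
]

for name, width, length in bugs:
    line_y = a * width           # height of the line at this bug's width
    guess = "caterpillar" if length > line_y else "ladybug"
    print(f"{name}: line is at y = {line_y:.2f}, classified as {guess}")
```

With a = 0.5 both toy bugs land on the correct side; a different slope would start misclassifying them, which is exactly why a is the thing we train.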
⚖️ Balancing Learning
The key idea from this section is:
We can use simple math — the difference between the classifier’s output and the actual value — to adjust the parameters and improve.
One issue though: if we let each new training example fully correct the line so it fits that example perfectly, the latest example overwrites what was learned from the earlier ones, and that's bad.
We don’t want one piece of learning to dominate and erase everything else, so we update carefully, moving only part of the way each time.
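A small sketch of that "update carefully" idea, again with my own toy numbers: instead of jumping straight to the slope that fits the newest example exactly, we apply only a fraction of the correction (I call that fraction L here; it's an assumption, not something defined in the part I read today).

```python
# Sketch of moderating an update (toy numbers): only move part of the way
# toward the slope that would fit the newest example, so earlier learning
# is not wiped out.

L = 0.5          # moderation factor: fraction of the correction we apply (assumed)
a = 0.5          # current slope of the classifier line y = a*x

# one new training example (hypothetical): a caterpillar at width 1.0, length 3.0
x, target_y = 1.0, 3.0

y = a * x                      # where the line currently sits at this width
error = target_y - y           # gap between the training point and the line
a += L * (error / x)           # apply only a fraction of the full correction

print(f"error = {error:.2f}, new slope a = {a:.3f}")
```

Without L, the slope would jump all the way to fit this one caterpillar; with it, the line shifts partway and keeps some memory of the earlier bugs.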
That’s everything I read today. I'm stopping right before the section called "Sometimes one classifier is not enough."
That’s it for Day 2. I’m hyped to keep going.