📘 Day 4 - Neurons, Nature’s Computing Machines (from Make Your Own Neural Network)


🧠 Why Animal Brains Still Beat Computers (and What AI Is Learning From It)

We already know scientists were mesmerized by animal brains. Take a pigeon: its brain is tiny, yet it handles complex, sophisticated tasks like finding food, evading predators, and changing direction with the wind. On the other hand, even with all the computing resources we can throw at the problem, computers still can't match what those biological neurons pull off.


🧬 Neurons vs Computers: A Numbers Game?

If we talk about neurons: a neuron receives signals through its dendrites, processes them, and passes the output along the axon to its terminals. A human brain has about 100 billion neurons. A fruit fly has around 100,000 (1 lakh). That 100,000 is well within the reach of modern computers, yet computers still can't match what the fly does with them.

Scientists were always puzzled about how biological neurons manage their tasks with far less computation than modern computer programs need. But with the knowledge they had, they were able to draw some insights. Lots of neurons wired together form a neural system, which behaves much like the classifiers and predictors we looked at before: take an input, do some computation, give an output.


❌ Are Neurons Just Linear Functions? Nope.

So can we model neurons as linear functions? NOPE.
Observations suggest that neurons are not linear, and they don't react to every little input either. A neuron waits until the incoming signal crosses a threshold, and only then does it produce an output. The function that models this behaviour is called an activation function.

There are a lot of activation functions, but we'll look at the sigmoid, an S-shaped curve. Nature rarely produces sharp, step-like responses, so the smooth sigmoid is a natural fit. The expression looks like:
y = 1 / (1 + e^(-x))
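
Here's a quick sketch of that formula in Python. This is just my own illustration (the function name sigmoid and the sample inputs are mine, not from the book):

import math

def sigmoid(x):
    # Squashes any real number into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

# Large negative inputs land near 0, large positive inputs near 1,
# and x = 0 sits right in the middle at 0.5.
for x in [-6, -2, 0, 2, 6]:
    print(x, round(sigmoid(x), 4))

Notice how nothing jumps suddenly in the outputs; the curve just eases smoothly from 0 towards 1, which is exactly the S shape we wanted.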


🧱 Neural Networks: Layers That Learn

We combine neurons into layers to make a neural network: layers of neurons that pass signals along and give us outputs.
Where does the learning happen, though?
We don't change the neurons themselves. We change the weights on the connections between them, the criss-crossing links, and that's what lets the network perform, understand, and learn. (There's a tiny code sketch of this idea right after the list below.)

  • Low weight = weak signal → no output, since the threshold is never reached.

  • High weight = strong signal.
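
A minimal sketch of that idea in Python. The weights, inputs, and the two calls below are all made up by me for illustration; this just shows weighted connections feeding a sigmoid threshold, not anything specific from the book:

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights):
    # Each incoming signal is scaled by its connection weight,
    # the results are summed, and the sigmoid decides how strongly to fire.
    total = sum(i * w for i, w in zip(inputs, weights))
    return sigmoid(total)

inputs = [1.0, 0.5]

# Weak connections: the combined signal barely moves the output.
print(neuron(inputs, [0.1, 0.1]))   # about 0.54, the neuron stays quiet
# Strong connections: the same inputs now produce a confident output.
print(neuron(inputs, [3.0, 2.0]))   # about 0.98, the neuron "fires"

Learning, then, is just nudging those weights until the outputs match what we want.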


💬 WhatsApp Group = Neural Network?

Imagine you're in a WhatsApp group (neural network 😜):

Each person (neuron) only replies if enough people mention something.

The more strongly someone’s message connects with you (weight), the more likely you are to reply.

And the “sigmoid” is like your mood filter — you reply softly or loudly based on how many messages hit you.


🧠 Why Copy the Brain?

Biological brains tell us that it's not always about speed or raw quantity; it's about clever connections and smart processing.
That's why AI is copying the brain: its logic.

That’s it for Day 4. I’m hyped to keep going.
