Day 2: Weights and Bias in Neural Networks - The Real Learners

In the previous post, we explored how neural networks are made of neurons and layers. But what makes a neural network actually learn from data? The answer lies in two key components: weights and bias.
What Are Weights?
Weights represent the importance of an input to a particular neuron.
When data flows from one neuron to another, it gets multiplied by a weight.
Think of it like adjusting the volume of a signal: a high weight amplifies it; a low (or negative) weight reduces or inverts it.
Example:
If input = 2 and weight = 0.5 → output = 1
If input = 2 and weight = -1 → output = -2
Every connection between neurons has its own weight.
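To see this in code, here is a minimal Python sketch of the examples above (the function name weighted_signal is just illustrative):

```python
# A weight scales the signal flowing through a connection.
def weighted_signal(x, w):
    return x * w

print(weighted_signal(2, 0.5))  # 1.0 -> a small weight dampens the signal
print(weighted_signal(2, -1))   # -2  -> a negative weight inverts it
```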
What Is a Bias?
Bias is a constant value added to the weighted sum before the activation function is applied.
Think of it as adjusting the threshold at which a neuron activates.
Without a bias, every neuron's output is forced through the origin, making the model too rigid to fit real-world data properly.
Formula Recap: output = activation((w1 × x1) + (w2 × x2) + (w3 × x3) + bias)
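As a concrete illustration, here is that formula as a small Python sketch. It uses a sigmoid activation and made-up weights and inputs; any activation function and values would work the same way:

```python
import math

def sigmoid(z):
    # Squashes the weighted sum into the range (0, 1).
    return 1 / (1 + math.exp(-z))

def neuron_output(inputs, weights, bias):
    # Weighted sum of inputs, shifted by the bias, then activated.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(z)

# Three inputs, three weights, one bias -> one output.
print(neuron_output([1.0, 2.0, 3.0], [0.5, -0.2, 0.1], bias=0.4))  # ~0.69
```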
How Do Weights and Bias Learn?
For now, just know this:
Neural networks learn by adjusting weights and biases during training to reduce errors.
The exact process involving loss functions, gradients, and backpropagation will be explained clearly in the next articles.
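If you want a tiny preview anyway, here is a toy Python sketch of the idea: repeatedly nudge one weight and one bias in the direction that shrinks the error on a single example. The numbers are arbitrary, and the update rules are gradient descent on a squared error, which the upcoming articles will unpack properly:

```python
x, target = 2.0, 1.0   # one training example: input and desired output
w, b = 0.0, 0.0        # weight and bias start at arbitrary values
lr = 0.1               # learning rate: how big each adjustment is

for step in range(20):
    prediction = w * x + b       # a bare linear neuron, no activation
    error = prediction - target
    # Gradient of the squared error with respect to w and b:
    w -= lr * error * x
    b -= lr * error

print(round(w, 3), round(b, 3))  # w*x + b is now very close to 1.0
```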
Thanks for reading!