How to Implement a Basic Neural Network from Scratch Using Python
Neural networks are the backbone of deep learning and have revolutionized fields such as computer vision and natural language processing. While powerful libraries like TensorFlow and PyTorch make it easy to build complex models, implementing a basic neural network from scratch provides valuable insight into how these models work under the hood.
In this tutorial, we will walk through the steps to create a simple feedforward neural network using Python, without relying on any deep learning libraries. We'll implement the forward pass, backpropagation, and training loop manually.
Overview of the Neural Network
A basic neural network consists of layers of neurons connected by weights. The network we build here has three layers:
Input Layer: Takes the input features.
Hidden Layer: A single hidden layer with a configurable number of neurons.
Output Layer: Provides the final output predictions.
We will use the sigmoid activation function for the hidden layer and a binary cross-entropy loss function for training on a binary classification problem.
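For reference, the two formulas we will implement in code are:

sigmoid(x) = 1 / (1 + exp(-x))
BCE(y, y_hat) = -mean(y * log(y_hat) + (1 - y) * log(1 - y_hat))

The sigmoid squashes any real number into the range (0, 1), which lets us read the network's output as a probability for the positive class.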
Step 1: Import Libraries
We'll start by importing the necessary libraries. For this implementation, we only need NumPy.
import numpy as np
Step 2: Initialize the Neural Network
Let's define a simple neural network class with an initializer to set up weights and biases.
class SimpleNeuralNetwork:
    def __init__(self, input_size, hidden_size, output_size):
        # Initialize weights with random values
        self.weights_input_hidden = np.random.randn(input_size, hidden_size)
        self.weights_hidden_output = np.random.randn(hidden_size, output_size)
        # Initialize biases with zeros
        self.bias_hidden = np.zeros((1, hidden_size))
        self.bias_output = np.zeros((1, output_size))
Here, input_size is the number of input features, hidden_size is the number of neurons in the hidden layer, and output_size is the number of output neurons (1 for binary classification).
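As a quick sanity check (a minimal sketch; the layer sizes here are just placeholders), we can instantiate the class and confirm the parameter shapes:

# Instantiate the network and inspect parameter shapes
nn = SimpleNeuralNetwork(input_size=2, hidden_size=2, output_size=1)
print(nn.weights_input_hidden.shape)   # (2, 2)
print(nn.weights_hidden_output.shape)  # (2, 1)
print(nn.bias_hidden.shape)            # (1, 2)
print(nn.bias_output.shape)            # (1, 1)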
Step 3: Define Activation and Loss Functions
We'll use the sigmoid function as our activation function and binary cross-entropy for the loss.
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(x):
    # Note: x is expected to already be a sigmoid output, i.e. x = sigmoid(z)
    return x * (1 - x)

def binary_cross_entropy(y_true, y_pred):
    # Clip predictions away from 0 and 1 to avoid log(0)
    y_pred = np.clip(y_pred, 1e-7, 1 - 1e-7)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
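A quick way to convince yourself these behave as expected (the printed values below are illustrative):

# sigmoid(0) is exactly 0.5, and large inputs saturate toward 0 or 1
print(sigmoid(np.array([-10.0, 0.0, 10.0])))  # ~[0.0000454, 0.5, 0.9999546]

# The loss is small when predictions match the labels and large when they don't
y_true = np.array([[1.0], [0.0]])
print(binary_cross_entropy(y_true, np.array([[0.9], [0.1]])))  # ~0.105
print(binary_cross_entropy(y_true, np.array([[0.1], [0.9]])))  # ~2.303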
Step 4: Implement the Forward Pass
The forward pass computes the output of the neural network for a given input.
class SimpleNeuralNetwork:
    # ... (same __init__ method)

    def forward(self, X):
        # Compute hidden layer activation
        self.hidden_input = np.dot(X, self.weights_input_hidden) + self.bias_hidden
        self.hidden_output = sigmoid(self.hidden_input)
        # Compute output layer activation
        self.output_input = np.dot(self.hidden_output, self.weights_hidden_output) + self.bias_output
        self.output = sigmoid(self.output_input)
        return self.output
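With forward in place, we can already push a batch through the (still untrained) network. The exact numbers will vary with the random initialization:

X_demo = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
nn = SimpleNeuralNetwork(input_size=2, hidden_size=2, output_size=1)
print(nn.forward(X_demo).shape)  # (4, 1) - one prediction per input row
print(nn.forward(X_demo))        # untrained outputs, often near 0.5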
Step 5: Implement Backpropagation
Backpropagation calculates the gradients of the loss with respect to each weight, which will be used to update the weights.
class SimpleNeuralNetwork:
    # ... (same __init__ and forward methods)

    def backward(self, X, y, learning_rate):
        # For a sigmoid output trained with binary cross-entropy, the gradient
        # with respect to the output pre-activation simplifies to (prediction - target)
        output_delta = self.output - y
        # Propagate the error back to the hidden layer
        hidden_error = output_delta.dot(self.weights_hidden_output.T)
        hidden_delta = hidden_error * sigmoid_derivative(self.hidden_output)
        # Update the weights and biases by gradient descent
        self.weights_hidden_output -= self.hidden_output.T.dot(output_delta) * learning_rate
        self.bias_output -= np.sum(output_delta, axis=0, keepdims=True) * learning_rate
        self.weights_input_hidden -= X.T.dot(hidden_delta) * learning_rate
        self.bias_hidden -= np.sum(hidden_delta, axis=0, keepdims=True) * learning_rate
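A good way to verify a hand-written backward pass is a finite-difference gradient check. The sketch below is my own addition (numerical_grad is a hypothetical helper, not part of the class): it perturbs a single entry of weights_hidden_output, measures how the loss changes, and compares that to the analytic gradient implied by the update rule.

def numerical_grad(nn, X, y, i, j, eps=1e-5):
    # Central finite difference on one entry of weights_hidden_output
    nn.weights_hidden_output[i, j] += eps
    loss_plus = binary_cross_entropy(y, nn.forward(X))
    nn.weights_hidden_output[i, j] -= 2 * eps
    loss_minus = binary_cross_entropy(y, nn.forward(X))
    nn.weights_hidden_output[i, j] += eps  # restore the original weight
    return (loss_plus - loss_minus) / (2 * eps)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])
nn = SimpleNeuralNetwork(input_size=2, hidden_size=2, output_size=1)
nn.forward(X)
# backward() applies the gradient of the *summed* loss, while
# binary_cross_entropy averages over samples, hence the division by X.shape[0]
analytic = nn.hidden_output.T.dot(nn.output - y) / X.shape[0]
print(analytic[0, 0], numerical_grad(nn, X, y, 0, 0))  # should closely agree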
Step 6: Train the Neural Network
Now we need a method to train the network using forward and backward passes.
class SimpleNeuralNetwork:
    # ... (same __init__, forward, and backward methods)

    def train(self, X, y, epochs, learning_rate):
        for epoch in range(epochs):
            # Forward pass
            self.forward(X)
            # Backward pass
            self.backward(X, y, learning_rate)
            # Compute the loss and report it periodically
            loss = binary_cross_entropy(y, self.output)
            if epoch % 100 == 0:
                print(f"Epoch {epoch}, Loss: {loss:.4f}")
Step 7: Test the Neural Network
To test the neural network, we'll use the classic XOR problem: a tiny binary classification dataset that a single-layer network cannot solve, because the classes are not linearly separable.
# Dummy data for XOR problem
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])
# Initialize and train the neural network
nn = SimpleNeuralNetwork(input_size=2, hidden_size=2, output_size=1)
nn.train(X, y, epochs=10000, learning_rate=0.1)
# Test the neural network
output = nn.forward(X)
print("Predictions:")
print(output)
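The raw outputs are probabilities between 0 and 1; to get hard class labels, threshold them at 0.5:

# Convert probabilities to 0/1 class labels
predictions = (output > 0.5).astype(int)
print(predictions)  # ideally [[0], [1], [1], [0]] for XOR

Because the weights are initialized randomly and the hidden layer is small, training can occasionally get stuck; if the loss plateaus well above zero, re-running the script (for a fresh random initialization) or increasing hidden_size usually fixes it.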
Conclusion
In this tutorial, we implemented a basic feedforward neural network from scratch using only NumPy: we initialized the weights and biases, coded the forward pass, derived the backpropagation updates, and trained the network on the XOR problem. Real projects will almost always reach for libraries like TensorFlow or PyTorch, but building each piece by hand is one of the best ways to understand what those libraries are doing under the hood.