Road to ChatGPT - Part 1: Understanding the Basics of Linear Regression

Jessen

Yo, AI’s a mind that don’t ever sleep, stackin’ patterns and data, runnin’ deep.
It don’t hustle like us, but it’s sharp on the grind, tech so smooth, it’ll blow your mind.

Have you ever wondered how ChatGPT can talk about AI in a Snoop Dogg-like style, as seen in the example above? In this series, we'll start with basic machine learning to build our intuition, before implementing the models used to power ChatGPT. Along the way, we'll learn the basics of PyTorch, a popular machine learning library, to simplify our code and enable easier implementations of more complex models.

Why Use Machine Learning?

When a computer is required to perform a task, we might write a function mapping an input to an output to enable the task. For instance, to predict house value based on the number of bedrooms, we might write a function such as

def predict_house_value(bedrooms):
    return bedrooms*100000

This function takes in the number of bedrooms, multiplies it by a constant, and outputs a house value. As a predictor of house value it's pretty limited; we could take in more input parameters to calculate a more accurate house value.

def predict_house_value_v2(bedrooms, city, garden):
    # Adjust house value based on whether there is a garden
    garden_increase = 2000 if garden else 0

    # Adjust house value based on city
    city_multipliers = {
        "London": 1.2,
        "New York": 1.5,
        "Tokyo": 1.8
    }
    city_multiple = city_multipliers.get(city, 1.0)

    return garden_increase + city_multiple * 100000 * bedrooms

The function is gradually refining its output, returning house values that better reflect the diverse range of values we encounter. However, it remains highly inaccurate: it would need additional parameters to work effectively, or its use would have to be restricted to specific regions, and it would require extensive domain knowledge to understand how each input influences the output. The algorithm also potentially misses the interplay between the three input parameters. For example, gardens could be worth more in some cities than in others. You could capture these relationships with if statements that combine input parameters, but try to imagine doing this with 50 cities and/or more input parameters.

Machine learning allows us to take pre-existing data and create the mappings represented by our functions for us. In the house value example, we can use data on various houses, each with features such as the number of bedrooms, proximity to schools, and location, along with an associated house value.

We then select an appropriate machine learning model to train. During the training phase, we take each house one at a time and load the features along with the dependent variable, which in this case is the house value. While training, the model learns to predict house values based on the features it has seen by adjusting its internal parameters. After training, we can infer the values of houses not seen in the training data by supplying different combinations of features, e.g. 5 bedrooms, not near a school, New York, etc., and the model will predict the house value, as it has learned to map input features to predicted house values.

To help us understand how models learn and why their internal adjustments create a desired output we'll start with linear regression. Later we will explore how we can extend concepts in linear regression to more complex architectures such as neural networks, which can approximate a variety of computational tasks and finally transformers which will allow you to create Snoop-Dogg style text like at the top of the article.

Linear Regression

Linear Regression is a method used to model the relationship between a dependent variable and one or more features. The dependent variable should be a continuous value when using linear regression. For instance, if we want to predict the value of a house based on its size, we can create a linear model where house value is the dependent variable and house size is the feature. This model would help us to understand and predict how changes in the size of a house might influence its value, allowing us to make predictions about future house values based on size.

We can use a 2D plot with house size on the x-axis and house value on the y-axis, to help us visualize the model.

[Figure: Scatter plot showing a positive correlation between house size and house value. The data points slope upward, indicating larger houses tend to have higher values.]

Plotting the data shows a clear correlation between house value and house size. Drawing a line through the data will help us to approximate other house values based on their size. With this line, we can pick a house size on the x-axis, and see what y value intersects with x on the line. This line would be our model.

[Figure: Scatter plot of house size vs. house value with a red trend line indicating positive correlation.]

The line drawn above is one we could draw by hand, intuitively knowing it is a near-optimal line for predicting house values, since it passes through the centre of the plotted data.

Training a linear regression model allows us to find that line programmatically. However, if we can easily draw the line, why bother? Well, we can have many input features, and each feature is another axis on our plot. That becomes more difficult to visualize in 3D and impossible above three dimensions. In real projects we will most likely be dealing with multiple features and therefore need a way to determine what this line is. For the time being, though, we'll stick with the two dimensions, house value and house size, while learning.

Model line

The line drawn through our data represents our model. It allows us to infer values for inputs not yet seen in a dataset. Each line can be represented with the following equation:

$$y=w_1 x_1+b$$

  • y is our dependent variable, house value, the value we are trying to predict

  • x₁ is an input feature, house size. We use subscript 1, because we can have many input features which would be labeled x₂, x₃… etc. These other features could represent other house properties such as number of floors, or proximity to schools. However for our simple case we have one input feature.

  • w₁ is a weight. This is a constant value that is determined during training. Again this has a subscript 1 because there will be as many weights as there are features i.e. there is a weight for each feature category.

  • b represents a bias value. This is another constant value determined through training. It is not multiplied by any feature but simply added onto the final result.

w₁ and b are constants that are learnt through training; these are the values that determine the slope and position of our line. x₁ is a variable supplied for each calculation, in this case the house size.
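
To make this concrete, here's a minimal sketch of the model as a Python function. The weight and bias values below are made up purely for illustration and are not learned from any data.

# hypothetical learned parameters for a single-feature model: y = w1 * x1 + b
w1 = 150    # illustrative weight, e.g. value added per square foot
b = 50000   # illustrative bias, the base value when size is zero

def predict_house_value_from_size(x1):
    # x1 is the input feature, e.g. house size in square feet
    return w1 * x1 + b

print(predict_house_value_from_size(2000))  # 150 * 2000 + 50000 = 350000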

[Figure: Three scatter plots showing data points with different linear regression lines: the first with slope 1 and intercept 0, the second with slope 1 and intercept 1, and the third with slope 2 and intercept 0.]

From the above plots we can see how different values for w₁ and b produce different models, some clearly better than others.

We can see w₁ affects the gradient; this is because it is a weight applied to an input value such as house size. The greater the weight, the more influence the feature has on the dependent variable y, such as house value, and the steeper the slope becomes. When the weight is negative, there is a downward slope, as the feature has a negative effect on the dependent variable.

When we change b, we see the line intersects the y-axis at a different point. For example, when b is 0 it intersects where y=0, and when b is 1 it intersects where y=1. Without this bias value, the model would have less flexibility, as y would always have to be zero when x is zero.

What makes a good model

Now that we know what parts make up a linear regression model, how do we know we have a good model? Visually, we can see that some models are better than others. The best model is the one that minimizes the difference between predicted values and actual values.

From the above image, we can see there is a difference between what a model predicts for a house size and what the actual house value is for a given house size. If our model was the perfect predictor and predictions were equal to the actual value, the sum of all the differences would be zero. As predictions start to deviate from actual values the sum of all absolute differences increases and our model becomes potentially less accurate.

The loss function

We can use the MSE (Mean Square Error) formula to represent those differences. This is called a loss function, which aggregates the differences between all actual and predicted values. Each individual difference is an error value.

$$\text{MSE} = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2$$

The MSE will give us an overall loss value, where a lower loss value represents a better model. i.e. a loss of zero is a perfect predictor, meaning all predictions perfectly match the actual values.

  • n is the number of instances we have, e.g. if we have data for 100 houses, n is 100

  • i is a specific instance e.g. in our house data, if we took the 2nd example i would be 2

  • ŷ (pronounced y hat) is our predicted value e.g. the first house in our dataset might have a predicted value of $100k in our model

  • y is our actual value e.g. the first house in our dataset might have an actual value of $120k

The MSE calculates the difference between all predicted and actual values for all n instances and squares each one. Squaring the result adds a greater penalty to predictions further away from actual values, whilst also making each value positive.

Making each value positive ensures minus values can’t cancel out other values when summing the values.
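
As a quick sketch of the formula in code, using made-up actual and predicted values:

import numpy as np

# made-up actual and predicted house values, in units of $100k
y_actual = np.array([1.2, 2.5, 3.1, 0.9])
y_predicted = np.array([1.0, 2.7, 3.0, 1.1])

# mean squared error: the average of the squared differences
mse = np.mean((y_actual - y_predicted) ** 2)
print(mse)  # ~0.0325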

How do we train a good model

To select good values for our weights and bias, values which minimize the MSE loss, we must train our model using a dataset. From now on we'll refer to weights and biases collectively as parameters.

We train with the following steps

  • initialize our parameters (weights and bias) to random values

  • calculate predictions for every sample based on the parameter values we have chosen, then calculate the MSE across all samples

  • based on the MSE value, adjust our parameters in a way that favors a lower loss value (we'll cover how in more depth later)

  • with our new parameter values, calculate predictions again, calculate the MSE, and adjust the parameters

  • repeat the above steps of calculating predictions, calculating the MSE, and adjusting the parameters until the loss is acceptably small, until the change in MSE between iterations starts to slow down, or for a set number of iterations

We'll go through these steps in code below with real data using the California Housing dataset. In this dataset, each row represents a district in California, and the target variable (the output) is the median house value of owner-occupied homes, expressed in hundreds of thousands of dollars. As there is more than one feature, we'll have more than 2 dimensions, and therefore won't be able to easily visualize with graphs as before.

Putting it all together

Data importing and exploration

Below we'll use linear regression on a real dataset. The California Housing dataset collects data on housing per district in California. Each district contains features such as population, average number of bedrooms, etc. Using linear regression, we'll predict the Median House Value per district based on the other fields in the dataset.

# Import libraries that allow us to download the California housing dataset
# and view the data
from sklearn.datasets import fetch_california_housing
import pandas as pd

# Load the dataset
california_housing = fetch_california_housing()

# Convert to a DataFrame, this will allow us to view and access data easily
data = pd.DataFrame(california_housing.data, columns=california_housing.feature_names)

# Create MedHouseVal column which represents the Median House Value per district
# This is the y value we will try to predict
data['MedHouseVal'] = california_housing.target

# Display the first few rows of the dataset
print(data.head())

The output from data.head() allows us to quickly inspect the dataset so we can get a feel of what the data looks like.

MedInc  HouseAge  AveRooms  AveBedrms  Population  AveOccup  Latitude  \
0  8.3252      41.0  6.984127   1.023810       322.0  2.555556     37.88   
1  8.3014      21.0  6.238137   0.971880      2401.0  2.109842     37.86   
2  7.2574      52.0  8.288136   1.073446       496.0  2.802260     37.85   
3  5.6431      52.0  5.817352   1.073059       558.0  2.547945     37.85   
4  3.8462      52.0  6.281853   1.081081       565.0  2.181467     37.85   

   Longitude  MedHouseVal  
0    -122.23        4.526  
1    -122.22        3.585  
2    -122.24        3.521  
3    -122.25        3.413  
4    -122.25        3.422

Scaling features

Next, we need to preprocess our data by scaling the features. Scaling adjusts the range of each feature to be similar, which prevents any single feature from dominating the training process due to its large scale. This also aids in faster convergence of the model during training.

import numpy as np

# We'll create our input feature dataset X and a corresponding Y dataset 
# for the data we want to predict
# We'll also convert the datasets to numpy which is better for processing, 
# where Pandas dataframes are good for data exploration
x_unprocessed = data.drop("MedHouseVal", axis=1).to_numpy()
y = data["MedHouseVal"].to_numpy()

# Scale features in X using Z-score normalization
# This preserves the distribution of the data and ensures all
# features have a mean of 0 and a standard deviation of 1
x_mean = np.mean(x_unprocessed, axis=0)
x_std = np.std(x_unprocessed, axis=0)
x = (x_unprocessed - x_mean)/x_std

The above scaling, known as standardization or Z-score normalization, ensures that features with large ranges do not overshadow others in influencing the model's predictions. Without scaling, features that have large ranges might result in smaller weights, as their input values are already greatly influencing the dependent value due to their size relative to other features. Over time, training will reach the correct weights, however, unscaled features can cause inefficient optimization paths, leading to slower convergence.

If all of our features are on similar scales (e.g., 1-100), then we can leave scaling out, as no feature will dominate due to scale.

Note we did not scale y, as y is not included in the learning process and a weight is not assigned to it, as it is the value we are trying to predict.

Test and validation datasets

After exploring the data and pre-processing it, we need to split the dataset into a train and validation set. A typical split would include 80% of the data in the training set and the remaining 20% in the validation set. This split is commonly used because it provides enough data to train the model effectively whilst also keeping a sufficient portion to evaluate its performance. However, this ratio might change depending on the size of the dataset or the requirements of the specific task. Larger datasets might allow for smaller validation sets, such as 90/10 splits.

The training set is responsible for training the model. When we calculate the MSE at the end of a training iteration, ideally, we want as low a number as possible. However, if we achieve zero MSE and therefore perfect accuracy, there is a high chance our model has learned to predict the data in the training set but has not generalized to unseen data. This is also known as the model overfitting the data.

To test that our model generalizes to data outside of the training set, we hold back some data from the initial dataset to create a validation set. With the validation set, after we have trained the model, we calculate the MSE on this set. If the loss is low, indicating predictions are close to the observed data in the validation set, it is likely we have a generalized model, meaning it performs well on unseen data and not just the validation set. However, a low validation error alone does not guarantee generalization and might require additional testing on unseen data to confirm the model's robustness.

# optional seed, setting this makes our random calculations reproducible
np.random.seed(42)
# get the number of samples; shape returns the dimensions of the matrix
# and specifying 0 fetches only the number of rows
n_samples = x.shape[0]

# this will generate an array of numbers from 0 to the number of rows minus 1, in a random order
# we'll use this to select rows from our datasets in a random order that is fixed once,
# in case the original order of the data skews the distribution in some way
indices = np.random.permutation(n_samples)

# define the percentage of samples that should be used in a training set
train_ratio = 0.8
n_train = int(train_ratio * n_samples)

# create the training and validation datasets
train_indices = indices[:n_train]
val_indices = indices[n_train:]
x_train, y_train = x[train_indices], y[train_indices]
x_val, y_val = x[val_indices], y[val_indices]

print(f"Training set size: {x_train.shape}")
print(f"Validation set size: {x_val.shape}")
Training set size: (16512, 8)
Validation set size: (4128, 8)

Weight initialisation

Next we create a weight vector to represent the weights applied to each feature. We need as many weights as there are features and an additional bias value, which allows the model to adjust the output independently of the feature values. Random small values are used for weight initialization.

We also choose small initialization as large numbers can cause a feature to contribute too much to predictions, causing large changes to the weights across iterations. The large changes could become unstable, oscillating around the optimum value. Instead of gently moving towards an optimum point, the weight values might swing wildly around it, slowing convergence.

# initialise an array of random weights for each feature
# x_train.shape[1] gives us the number of columns, which represent the features
# we add 1 to the number of features to account for the bias term that will be trained
n_weight = x_train.shape[1]+1
weights = np.random.randn(n_weight) * 0.01
print("Initial weights:", weights)

# we will also add a feature column of ones, which will be used for the bias value later
# np.c_ will horizontally concatenate 2 matrices e.g. x_train and a generated 1 column matrix of 1s
x_train = np.c_[x_train, np.ones(x_train.shape[0])]
x_val = np.c_[x_val, np.ones(x_val.shape[0])]

Example output, yours will be different due to random initialization.

Initial weights: [-0.00454508  0.00186784 -0.01296609  0.00670622 -0.00369122  0.00874647
 -0.00619931 -0.00674114 -0.00945463]

The training loop

Now we begin the training loop, an iterative process aimed at finding the optimal weights. In the training loop we will calculate predictions for each sample using our random weights, then calculate the loss for all predictions.

To visualize the loss, we can plot a graph where y is the loss value, and each other axis corresponds to a weight and its current value. This graph provides insight into how different weight values influence the loss, helping to identify trends such as whether certain weights significantly reduce or increase the loss, and guiding adjustments toward optimal values. Across all weight values, there is a point where y reaches its minimum. The minimum is where the weight values produce predictions for all samples that are, collectively, as close as possible to the actual values. These are the values we aim to find.

The above shows what that graph might look like with one weight, as two dimensions are easier to reason with. Visualizing beyond three dimensions is not possible because our spatial intuition is limited to three dimensions, making it challenging to interpret higher dimensional spaces. As shown in the graph, there's an optimal value for w₁ where y is at its lowest. The lower y is, the smaller the difference between the predicted values and actual values for each sample. Here, y refers to the loss value, which quantifies these differences and serves as the key metric in the optimization process. By minimizing the loss value, we iteratively adjust the weights to improve the model's accuracy in predicting outcomes.
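
If you want to see this shape for yourself, here's a minimal sketch that plots the loss against a range of values for a single weight on a small synthetic dataset. The data and the "true" weight of 2 are made up for illustration; the resulting curve is bowl-shaped with a single minimum, which is the point gradient descent will aim for.

import numpy as np
import matplotlib.pyplot as plt

# a tiny made-up one-feature dataset, generated around a "true" weight of 2
rng = np.random.default_rng(0)
x_demo = rng.uniform(0, 10, size=50)
y_demo = 2.0 * x_demo + rng.normal(0, 1, size=50)

# compute the MSE for a range of candidate weight values (bias ignored for simplicity)
w_values = np.linspace(-1, 5, 200)
losses = [np.mean((w * x_demo - y_demo) ** 2) for w in w_values]

# the curve is bowl-shaped, with its minimum near w = 2
plt.plot(w_values, losses)
plt.xlabel("w1")
plt.ylabel("MSE loss")
plt.show()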

Gradient Descent


In the previous section, we initialised our parameters with random values to provide a starting point.

To move these random initializations towards the optimal values, we need to move the weights towards the point where the loss is at its minimum. To compute this, we can calculate the partial derivative of the MSE with respect to w₁. This partial derivative tells us the gradient of the loss at the current value of the weight, i.e. it tells us how much the loss is affected as we adjust that weight.

What’s a partial derivative
If you're unfamiliar with derivatives, you can explore them at Math is Fun. However, it's perfectly fine if you don't know them. Essentially, derivatives help us determine the steepness and direction of a graph at a specific point. A larger derivative value indicates a steeper gradient, with negative values indicating a downward slope and positive values an upward slope. In the context of machine learning, we use derivatives to calculate how much the Mean Square Error (MSE) changes with respect to one of its parameters, such as a weight. This is done by deriving a formula for the partial derivative, which helps us adjust weights to minimize the MSE. In this article, we'll provide the formulas for calculating partial derivatives for each weight, but in future articles, machine learning frameworks will handle these calculations automatically. So, it's okay to skip over the details of derivatives for now.
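
For our linear model, working through the chain rule on the MSE gives the following partial derivative for a weight, where x_{i1} is the value of feature 1 for sample i:

$$\frac{\partial \text{MSE}}{\partial w_1} = \frac{2}{n} \sum_{i=1}^{n} (\hat{y}_i - y_i)\, x_{i1}$$

The bias has the same form without the feature term, which is why the training code later treats the bias as just another weight multiplied by a column of ones.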

In the code below we update the weight by subtracting the gradient at the current value of w₁; the same update is applied to all other parameters.

w1 -= w1_gradient

This will move the weight towards the minimum regardless of whether the gradient is downward or upward. This process of iteratively updating the weights by calculating gradients is called Gradient Descent. Below shows our initial point and several iterative updates. You'll see the jumps are quite large and can overshoot the optimum value, causing the weight to oscillate around the optimum and slowing convergence, or it might not converge at all.

To smooth out these updates, we use a hyperparameter called the learning rate represented by α (alpha).

What’s a hyperparameter
A hyperparameter is a meta-parameter that controls how the learning algorithm learns. For example, setting the learning rate (α) to a higher value might result in faster initial convergence to weight values, but as the algorithm approaches the optimum, it can become unstable. On the other hand, setting it to a lower value can make the model learn too slowly, potentially wasting computational resources.

Let’s replace our previous code line with the one below, taking into account the learning rate.

learning_rate = 0.01
w1 -= w1_gradient * learning_rate

Above we simply multiply the gradient by a learning rate to decrease the step size and smooth out the update, as seen on the graph below.

However, setting α too low will require more updates to reach the optimal weights, resulting in slower convergence and extra compute. Setting it too large brings back the problems we saw without a learning rate scaling the gradient update.

A few sections back we scaled our features ensuring they were within similar ranges with a mean of 0 and a standard deviation of 1. Without scaling, weight updates for features with large ranges such as house values would have far greater jumps compared to a feature with a smaller range such as number of rooms. As the learning rate is the same for all features, scaling our features to be within similar ranges allows them to take similarly scaled weight updates resulting in faster convergence.

Features with a large scale will also dominate the learning process, as the feature would contribute significantly more than other smaller scale features, due to the larger numbers. You might think it's fine if larger features contribute more to the weight updates early on, since they’ll eventually take smaller steps as the gradient shrinks. However, without feature scaling, the optimization process becomes uneven. Large features dominate the updates initially, but smaller features may take much longer to make meaningful progress. This imbalance can lead to slower convergence overall, as the model takes a less efficient path towards the optimal solution. By scaling all features to similar ranges, we ensure that all features contribute to the weight updates more evenly, leading to a faster and smoother convergence.

Once we've updated our weights, we repeat the process of calculating predictions, calculating the loss, then updating the weights using partial derivatives. We can carry on for a set number of iterations, known as epochs, or until the reduction in loss starts to slow down and stabilize across epochs, signaling that our model has stopped learning.

This process of iteratively updating the parameters proportional to the gradient of a loss function is called Gradient Descent. It is a foundational optimization method in machine learning, enabling efficient minimization of loss functions, and it will play a crucial role in more advanced algorithms we explore later.

# set the number of epochs to train for; 100 is good enough for this simple
# example, however a better approach would be to stop training once we see
# little change in the error metrics between epochs
n_epochs = 100
# set the learning rate; 0.01 is a reasonable starting point here, and
# later examples will adjust this during the learning phase to achieve
# faster convergence
lr = 0.01

# the main training loop
for epoch in range(n_epochs):
    # calculate predictions using the dot product of the feature matrix
    # and the weight vector
    predictions_train = np.dot(x_train, weights)
    # error for each sample: predicted value minus actual value
    error = predictions_train - y_train
    # gradient of the MSE with respect to each weight: 2/n * sum(error * feature)
    gradients = 2 * (x_train * error[:, np.newaxis]).mean(axis=0)
    # move each weight a small step against its gradient
    weights -= lr * gradients

    # calculate the mean squared error on the training set
    mse_train = np.mean(error ** 2)

    # calculate predictions on validation set
    predictions_val = np.dot(x_val, weights)
    mse_val = np.mean((predictions_val - y_val) ** 2)

    print(f"Epoch {epoch}: Training Error = {mse_train:.6f}, Validation Error = {mse_val:.6f} ")
Epoch 0: Training Error = 5.637618, Validation Error = 5.468886 
Epoch 1: Training Error = 5.438583, Validation Error = 5.277734 
Epoch 2: Training Error = 5.247518, Validation Error = 5.094238 
Epoch 3: Training Error = 5.064100, Validation Error = 4.918087 
Epoch 4: Training Error = 4.888018, Validation Error = 4.748984 
Epoch 5: Training Error = 4.718977, Validation Error = 4.586642 
........................
........................
Epoch 95: Training Error = 0.726476, Validation Error = 0.753565 
Epoch 96: Training Error = 0.721906, Validation Error = 0.749195 
Epoch 97: Training Error = 0.717503, Validation Error = 0.744986 
Epoch 98: Training Error = 0.713263, Validation Error = 0.740932 
Epoch 99: Training Error = 0.709177, Validation Error = 0.737027

The above output is an example of what we might see from the training loop. We can see that at the start we have a high error value, but training over 100 epochs brings the error rate down considerably. While the improvement does start to slow down towards the end of the training phase, we are still seeing gains from epoch to epoch, and we could eke out better accuracy by continuing to train. However, there is a trade-off to consider: additional training may lead to marginal accuracy improvements, but it also increases the risk of overfitting, where the model performs well on training data but poorly on unseen data.

Vectorization

The training loop also makes use of NumPy's vectorized methods, leveraging broadcasting when arrays have compatible but differing shapes and element-wise operations when arrays have the same shape, to create short lines of code which are fast to run. Vectorized methods allow for SIMD (single instruction, multiple data), which lets entire arrays be processed in parallel by a single CPU instruction, significantly reducing control overhead compared to traditional loops. Instead of iterating through each element individually, SIMD enables a single instruction to operate on multiple data points simultaneously, maximizing parallelism and improving performance.

Normally we iterate through an array which requires control instructions to loop through each element, then an instruction that must be applied to each element individually. Vectorized methods remove the need for many control instructions. Memory fetches are also minimized as instructions are performed on chunks of data in registers rather than single elements of an array.

Vectorized functions, especially in NumPy, are written in optimized compiled C code rather than Python, further enhancing performance. In our case, rather than iterating through our weight vector and multiplying by each feature for each sample, we can calculate all predictions at once. For this we use the dot product.

What is the dot product
The dot product is an operation which takes two vectors, and multiplies them element wise, then sums them to create a single scalar. In linear regression this is used to quickly take input features, multiply by the trained weights then sum them to create the output value.
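
As a tiny worked example with made-up numbers:

import numpy as np

sample_features = np.array([3.0, 2.0, 1.0])   # hypothetical input features for one sample
sample_weights = np.array([0.5, 1.0, 2.0])    # hypothetical trained weights

# 3.0*0.5 + 2.0*1.0 + 1.0*2.0 = 5.5
print(np.dot(sample_features, sample_weights))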

Below compares the use of iterative methods vs vectorized methods.

import time

def iterative_predictions(x, weights):
    predictions = np.zeros(x.shape[0])
    for i in range(x.shape[0]):
        prediction = 0.0
        for j in range(x.shape[1]):
            prediction += x[i][j] * weights[j]
        predictions[i] = prediction
    return predictions

def vectorized_predictions(x, weights):
    return np.dot(x, weights)

# Time the iterative method
start_time = time.time()
predictions_iterative = iterative_predictions(x_train, weights)
iterative_time = time.time() - start_time
print(f"Iterative method took {iterative_time:.6f} seconds")

# Time the vectorized method
start_time = time.time()
predictions_vectorized = vectorized_predictions(x_train, weights)
vectorized_time = time.time() - start_time
print(f"Vectorized method took {vectorized_time:.6f} seconds")
Iterative method took 0.031959 seconds
Vectorized method took 0.000339 seconds

As we can see above, the vectorized dot product method is orders of magnitude faster than the iterative approach.

As well as providing highly optimized methods such as the dot product, NumPy applies element-wise operations on arrays using SIMD.

error = predictions_train - y_train

When performing a calculation such as the one above between two arrays of the same size, NumPy automatically performs the operation element-wise between the two arrays. This means each element of one array is matched with its counterpart in the other array, and the calculation is performed as if they were single scalar values. For example, with predictions_train and y_train, which are both the same size, NumPy will subtract the corresponding elements (first element of predictions_train minus the first element of y_train, and so on), ultimately returning a new array of results.

NumPy can also handle arrays or matrices of differing sizes using a method called Broadcasting if they meet certain rules to be "broadcast-compatible."

These rules are as follows:

  • Rank Compatibility: If the arrays have a different number of dimensions, the shape of the lower-rank array is treated as if it were padded with 1s on the left until both shapes have the same length.

  • Dimension Compatibility: Dimensions are compared from right to left. If the sizes in each dimension are the same, or if one of them is 1, the dimension is considered compatible and the operation can proceed. Where one dimension is 1, that value is broadcast across the larger dimension.

For example, if we have an array with shape (3, 1) and another array with shape (3, 4), NumPy will broadcast the single value across the dimension where 1 occurs to perform the element-wise calculation, effectively treating the first array as if it had the shape (3, 4).
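
Here's a short sketch of that behaviour with made-up arrays:

import numpy as np

a = np.array([[1.0], [2.0], [3.0]])              # shape (3, 1)
b = np.array([[10.0, 20.0, 30.0, 40.0],
              [10.0, 20.0, 30.0, 40.0],
              [10.0, 20.0, 30.0, 40.0]])         # shape (3, 4)

# the (3, 1) array is broadcast across the second dimension, so each value
# in `a` is added to every element of the matching row in `b`
print(a + b)
# [[11. 21. 31. 41.]
#  [12. 22. 32. 42.]
#  [13. 23. 33. 43.]]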

Later in this series, we'll see how we can push these calculations to a GPU, which is designed specifically for massive parallel computations, allowing us to achieve even faster processing.

Limitations of Linear Regression

As the name suggests, linear regression is linear, which means it struggles when we need to model non-linear relationships. We only scale each feature by a constant weight, so its ability to model complex relationships is limited; if we could apply exponents or polynomials to an input feature, for example, we could start to capture non-linear relationships.

[Figure: A scatter plot with points distributed in an overall U shape, with a dense cluster in the top left and another in the lower right, forming a curve that dips in the middle.]

For example, in the dataset shown above, a straight line from our linear model wouldn't be able to capture the complex shape we see. While terms like exponents would allow for more complex curves, the methods for learning them and combining several into a model would be limited. To model non-linear relationships there are more appropriate models which do not involve manually adding these extra components.

Linear models are also limited in their ability to model relationships between input features. For instance, in our California Housing dataset, if districts with few bedrooms but very old houses tended to have a significantly higher median house value, a linear model would struggle to capture this interaction, as it treats each feature independently and multiplies it by a weight.

Summary

Linear regression might be less popular than Transformers, Neural Networks, and other advanced models, but if a simple linear model is sufficient, it’s quick and easy to train.

You might then be wondering why we covered linear regression in the first place. The reason is that linear regression provides a solid foundation for understanding key concepts such as training with gradient descent, loss functions, and how weights influence a model's output. In linear regression, we use a mathematical function that takes some input features and has adjustable parameters known as weights. We apply a loss function to the model's predictions and then use gradient descent to adjust the weights until the loss function is appropriately minimized.

This approach to training can be extended to more complex models; however, the core idea remains the same: we adjust parameters to minimize a loss function. The difference lies in the mathematical function we use. Linear regression is simple, but we can replace it with more flexible models that can handle complex relationships and help predict different types of data.

The next model we'll introduce is the Neural Network, which will serve as our new mathematical function. Neural Networks allow us to model non-linear relationships and dependencies between features, enabling us to tackle much more complex datasets.

Complete Code
