Understanding the Basics of Linear Regression in Machine Learning


So far, I have come a long way through Andrew Ng's Machine Learning Specialization course. Since I'm just starting Obsidian, I'm going to make a quick note of what I have learnt so far and continue from there onwards.
Here are the things I've learnt so far:

- What machine learning is
- The broad types of machine learning: supervised, unsupervised, and reinforcement learning

Currently, I'm on supervised learning, which covers the linear regression model and the logistic regression model.
The linear regression model:
$$f_{w,b}(x^{(i)})=wx^{(i)}+b$$
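As a quick sketch (my own NumPy illustration with made-up numbers, not code from the course), the model is just a straight line over the input:

```python
import numpy as np

def predict(x, w, b):
    """Linear model f_{w,b}(x) = w*x + b for a 1-D feature array x."""
    return w * x + b

# Made-up toy data: house sizes (1000s of sqft) -> prices (1000s of dollars)
x_train = np.array([1.0, 2.0])
print(predict(x_train, w=200.0, b=100.0))  # [300. 500.]
```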
The cost function measures how close the model's predictions are to the actual labels; the smaller it is, the better the fit. It is given by:
$$J(w,b)=\frac{1}{2m}\sum_{i=0}^{m-1}(f_{w,b}(x^{(i)})-y^{(i)})^2$$
where $J(w,b)$ is the cost function, $m$ is the number of training examples, $y^{(i)}$ is the actual label for example $i$, and $f_{w,b}(x^{(i)})$ is the model's prediction for it.
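Here's a minimal sketch of that cost in NumPy (compute_cost is just my name for it, and the toy data is made up):

```python
import numpy as np

def compute_cost(x, y, w, b):
    """Squared-error cost J(w,b) = (1/2m) * sum((f_{w,b}(x^(i)) - y^(i))^2)."""
    m = x.shape[0]
    predictions = w * x + b
    return np.sum((predictions - y) ** 2) / (2 * m)

x_train = np.array([1.0, 2.0])
y_train = np.array([300.0, 500.0])
print(compute_cost(x_train, y_train, w=200.0, b=100.0))  # 0.0 for a perfect fit
```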
To select the model's parameters $w$ and $b$, gradient descent is used:

Repeat until convergence:
$$\begin{aligned} & \{ \\ & w=w-\alpha\frac{\partial J(w,b)}{\partial w} \\ & b=b-\alpha\frac{\partial J(w,b)}{\partial b} \\ & \} \end{aligned}$$
where $\alpha$ is the learning rate, $w$ and $b$ are updated simultaneously, and the partial derivatives are:
$$\begin{aligned} & \frac{\partial J(w,b)}{\partial w}=\frac{1}{m}\sum_{i=0}^{m-1}(f_{w,b}(x^{(i)})-y^{(i)})x^{(i)} \\ & \frac{\partial J(w,b)}{\partial b}=\frac{1}{m}\sum_{i=0}^{m-1}(f_{w,b}(x^{(i)})-y^{(i)}) \end{aligned}$$
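Putting the update rule and the gradients together, a bare-bones gradient descent loop might look like this sketch (a fixed iteration count stands in for a real convergence check, and the data and learning rate are made up):

```python
import numpy as np

def compute_gradient(x, y, w, b):
    """Partial derivatives of J(w,b) with respect to w and b."""
    m = x.shape[0]
    error = (w * x + b) - y              # f_{w,b}(x^(i)) - y^(i) for all i
    dj_dw = np.sum(error * x) / m
    dj_db = np.sum(error) / m
    return dj_dw, dj_db

def gradient_descent(x, y, w, b, alpha, num_iters):
    """Take num_iters gradient steps, updating w and b simultaneously."""
    for _ in range(num_iters):
        dj_dw, dj_db = compute_gradient(x, y, w, b)
        w = w - alpha * dj_dw            # both updates use gradients computed
        b = b - alpha * dj_db            # from the same (w, b) pair
    return w, b

x_train = np.array([1.0, 2.0])
y_train = np.array([300.0, 500.0])
w, b = gradient_descent(x_train, y_train, w=0.0, b=0.0, alpha=0.01, num_iters=10_000)
print(w, b)  # approaches w ≈ 200, b ≈ 100, the line through the two points
```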
So yes, this is a conceptual recap of what I've learnt so far.