Underfitting, Overfitting—And Finally Recovering


So I have not been feeling well, but thankfully I just recovered. I'm just going to give a quick summary of underfitting, overfitting, and what sits in between.
Underfitting: This occurs when the model doesn't fit the training set well. We can also say the model has high bias.
"Normal": note that "normal" is in quotation marks, since I'm not certain there's a standard term for it (it's often just called a good fit). It's simply when the model fits the training set just fine. The beautiful thing about this is that the model will also work well on test sets that are novel to it, which makes the model more generalizable.
Overfitting: This is when the model fits the training data extremely well, noise included; too much of everything is bad, isn't it? High variance also occurs here: the model changes a lot with small changes in the training data, so it performs poorly on novel test data.
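The three cases above can be sketched with a small experiment. This is a minimal illustration, assuming NumPy; the cubic target function, the noise level, and the polynomial degrees (1, 3, and 15) are my own illustrative choices, not from the post. A degree-1 line underfits (high error everywhere), degree 3 matches the true curve, and degree 15 chases the noise, driving training error down while test error climbs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples from a cubic function, split into train and test halves
x = np.sort(rng.uniform(-3, 3, 60))
y = x**3 - 2 * x + rng.normal(0, 3, x.size)
x_train, y_train = x[::2], y[::2]
x_test, y_test = x[1::2], y[1::2]

def mse(degree):
    """Fit a polynomial of the given degree on the training set and
    return (train_error, test_error) as mean squared errors."""
    coeffs = np.polyfit(x_train, y_train, degree)
    pred_train = np.polyval(coeffs, x_train)
    pred_test = np.polyval(coeffs, x_test)
    return (np.mean((pred_train - y_train) ** 2),
            np.mean((pred_test - y_test) ** 2))

for d in (1, 3, 15):
    train_err, test_err = mse(d)
    print(f"degree {d:2d}: train MSE = {train_err:8.2f}, test MSE = {test_err:8.2f}")
```

Running this, you should see training error shrink as the degree grows (a more flexible model can always fit the training set at least as well), while the test error is lowest around the "normal" degree-3 fit.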
Written by Paul Omagbemi