Cross-validation in Machine Learning


Enhancing model quality with cross-validation in Machine Learning
Cross-validation is a modelling technique in which the data is divided into multiple folds (pieces). We then run one experiment per fold, treating that fold as the validation set and all the remaining folds combined as the training set.
The data can be divided into any number of folds. Say we divide the data into three folds; then:
Experiment 1: Fold 1 → validation set, Folds 2, 3 → training set
Experiment 2: Fold 2 → validation set, Folds 1, 3 → training set
Experiment 3: Fold 3 → validation set, Folds 1, 2 → training set
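The three experiments above can be sketched in plain Python. This is a minimal illustration, using a hypothetical dataset of nine samples so each fold holds exactly three:

```python
# Hypothetical dataset of 9 samples, represented by their indices
data = list(range(9))

# Divide the data into 3 equal folds
k = 3
fold_size = len(data) // k
folds = [data[i * fold_size:(i + 1) * fold_size] for i in range(k)]

# Each experiment holds out one fold as the validation set
# and combines the remaining folds into the training set
for i, validation_set in enumerate(folds, start=1):
    training_set = [x for j, fold in enumerate(folds)
                    for x in fold if j != i - 1]
    print(f"Experiment {i}: validation={validation_set}, training={training_set}")
```

Each sample ends up in the validation set exactly once, which is what makes the averaged score representative of the whole dataset.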
After running all the experiments on the ML models, we get a score (a measure of model quality, such as accuracy) for each experiment. Averaging these scores gives an overall estimate of the model's quality.
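In practice this whole run-and-average loop is usually handled by a library. Here is a sketch assuming scikit-learn is available, using its built-in Iris dataset and a logistic regression model purely as placeholders:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Placeholder dataset and model; any estimator with fit/predict works
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# cv=3 runs the three experiments described above and
# returns one validation score per fold
scores = cross_val_score(model, X, y, cv=3)

# The average of the per-fold scores is the overall score
print(f"Per-fold scores: {scores}")
print(f"Overall score: {scores.mean():.3f}")
```

`cross_val_score` trains a fresh copy of the model for each fold, which is exactly why the process costs roughly k times a single training run.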
This process is resource-heavy, since the model is trained once per fold, so results take noticeably longer on larger datasets than on smaller ones. In return, it gives a more reliable picture of model quality than a single train/validation split.