In this approach, the data is split into k partitions of approximately equal size, and k models are trained and evaluated, one per partition. In each iteration, one partition is held out for testing, and the remaining k - 1 partitions are collectively used for training. The reported classification accuracy is the average of the k per-fold accuracies. This also helps reveal whether the model is overfitting:
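The procedure above can be sketched with scikit-learn's `KFold` and `cross_val_score`; the Iris dataset and the decision tree classifier are illustrative choices, not prescribed by the text.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import KFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

k = 5
kfold = KFold(n_splits=k, shuffle=True, random_state=42)
model = DecisionTreeClassifier(random_state=42)

# cross_val_score trains k models: each iteration holds out one fold
# for testing and trains on the remaining k - 1 folds.
scores = cross_val_score(model, X, y, cv=kfold, scoring="accuracy")
print("Per-fold accuracies:", scores)
print("Mean accuracy:", scores.mean())
```

The final score is the mean of the k per-fold accuracies, which is less sensitive to a lucky or unlucky single train/test split.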
In stratified cross-validation, the data is divided into k partitions that each have approximately the same class distribution as the full dataset; that is, the percentage of each class is preserved in every partition.
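This behavior can be seen with scikit-learn's `StratifiedKFold`; the Iris dataset (150 samples, 50 per class) is an illustrative choice, so each test fold below receives exactly 10 samples of each class.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import StratifiedKFold

X, y = load_iris(return_X_y=True)

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)

fold_counts = []
for fold, (train_idx, test_idx) in enumerate(skf.split(X, y)):
    # Each test fold keeps roughly the same class proportions as y.
    _, counts = np.unique(y[test_idx], return_counts=True)
    fold_counts.append(list(counts))
    print(f"Fold {fold}: test class counts = {counts}")
```

With a plain `KFold` on unshuffled data, by contrast, a fold could contain only one class, which would distort the accuracy estimate for imbalanced datasets.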