Cross-validation is a method of evaluating machine learning models that involves training several models on subsets of the available input data and evaluating each on the complementary, held-out subset. It is used to detect overfitting, that is, a failure to generalise a pattern beyond the training data.
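As a minimal sketch of this idea, the snippet below runs 5-fold cross-validation: the data is split into five folds, and each model is trained on four folds and scored on the held-out fifth. The synthetic dataset and choice of classifier are illustrative assumptions, not part of the original text.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Illustrative synthetic classification data (assumed for the example).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000)

# Each of the 5 folds trains on 4/5 of the data and evaluates
# on the held-out 1/5 that the model never saw during fitting.
scores = cross_val_score(model, X, y, cv=5)
print(scores)         # one accuracy score per fold
print(scores.mean())  # averaged estimate of generalisation accuracy
```

A large gap between training accuracy and the mean cross-validated score is the overfitting signal the paragraph describes.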
Cross-validation is a powerful tool for evaluating the performance of your model, especially when overfitting is a concern. It can also be used to select your model's hyperparameters, i.e. the hyperparameter values that yield the lowest estimated test error.
Cross-validation is an extremely effective tool. It allows us to make better use of our data and provides much more information about the performance of our algorithms. In sophisticated machine learning pipelines it is easy to lose track of which data was used where and accidentally reuse the same data in multiple stages of the pipeline.
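One common way to avoid reusing data across pipeline stages is to put preprocessing and the model into a single pipeline and cross-validate the whole thing, so that each fold's preprocessing is fitted only on that fold's training data. This is a hedged sketch using scikit-learn; the scaler, classifier, and synthetic data are assumptions for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Illustrative synthetic data (assumed for the example).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# The scaler is re-fitted on the training folds only within each split,
# so the held-out fold never leaks into the preprocessing step.
pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(pipe, X, y, cv=5)
print(scores.mean())
```

Scaling the full dataset before splitting would, by contrast, let information from the test folds influence training, which is exactly the kind of data reuse the paragraph warns about.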
Cross-validation is used to assess a model's ability to predict new data that was not used in fitting it, in order to flag problems such as overfitting or selection bias, and to give insight into how the model will generalise to an independent data set.
In general, cross-validation is needed when tuning a model's hyperparameters, such as the regularisation strength C in logistic regression. The purpose of CV is to measure the generalisation performance and stability of the entire learning procedure, not to estimate the model's fitted parameters themselves.
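The tuning described above can be sketched as a cross-validated grid search over C for logistic regression. The candidate grid and synthetic dataset below are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Illustrative synthetic data (assumed for the example).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# 5-fold cross-validated search over the regularisation strength C:
# each candidate C is scored by its mean accuracy across the folds.
grid = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_["C"])  # C value with the best mean CV score
print(grid.best_score_)        # that best mean cross-validated accuracy
```

Note that the selected C reflects the performance of the whole learning procedure under resampling, which is the distinction the paragraph draws between tuning hyperparameters and estimating model parameters.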