By comparing the prediction error on the training and evaluation data, we can assess whether a predictive model is underfitting or overfitting the training data. A model that performs poorly even on the training data is underfitting.
Overfitting is a modeling error that arises when a function fits too closely to a limited set of data points. Underfitting describes a model that can neither model the training data nor generalize to new data.
A model becomes "overfitted" when it memorizes the noise in the training set and fits it too closely, leaving it unable to generalize adequately to new data. A model that cannot generalize cannot accomplish the classification or prediction tasks it was designed for.
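A minimal sketch of this diagnosis, using NumPy polynomial fits on synthetic data (the cubic target, noise level, and degree choices are illustrative assumptions, not from the text): a low-degree model underfits with high error everywhere, while a high-degree model drives training error near zero but validation error back up because it has memorized the noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples from a cubic trend; the noise is what an overfitted model memorizes.
x = np.sort(rng.uniform(-3, 3, 30))
y = x**3 - 2 * x + rng.normal(0, 2, size=x.size)

# Hold out every third point as validation data.
mask = np.arange(x.size) % 3 == 0
x_val, y_val = x[mask], y[mask]
x_tr, y_tr = x[~mask], y[~mask]

def train_val_mse(degree):
    """Fit a polynomial of the given degree; return (train MSE, validation MSE)."""
    coeffs = np.polyfit(x_tr, y_tr, degree)
    mse = lambda xs, ys: float(np.mean((np.polyval(coeffs, xs) - ys) ** 2))
    return mse(x_tr, y_tr), mse(x_val, y_val)

# Degree 1 underfits, degree 3 matches the data-generating process,
# degree 9 overfits: training error keeps falling, validation error does not.
results = {deg: train_val_mse(deg) for deg in (1, 3, 9)}
for deg, (tr, val) in results.items():
    print(f"degree {deg}: train MSE {tr:8.2f}   validation MSE {val:8.2f}")
```

The gap between the two errors is the signal: a large training error means underfitting, while a small training error paired with a much larger validation error means overfitting.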
Variance relates to how much the model changes when it is trained on different parts of the training data set. Simply put, variance is the variability in model prediction: how much the learned function can change based on the data set. High variance typically comes from highly complex models with a large number of features.