FAQs

In data science, underfitting occurs when a model is unable to capture the relationship between the input and output variables, resulting in a high error rate on both the training set and unseen data.

Overfitting is a statistical modelling error that arises when a function is fitted too closely to a limited set of data points. The resulting model is only useful for the data set it was trained on, and fails to generalise to any other data set.
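A minimal sketch of how both problems show up in train/test error, using scikit-learn on synthetic data (the noisy sine wave and tree depths below are illustrative assumptions, not part of the course): an underfit model has high error on both splits, while an overfit model has near-zero training error but clearly higher test error.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

# Noisy sine wave as stand-in data.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=300)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for depth, label in [(1, "underfit"), (None, "overfit"), (4, "balanced")]:
    model = DecisionTreeRegressor(max_depth=depth, random_state=0)
    model.fit(X_train, y_train)
    train_mse = mean_squared_error(y_train, model.predict(X_train))
    test_mse = mean_squared_error(y_test, model.predict(X_test))
    # Underfit: both errors high.  Overfit: near-zero training error but a
    # noticeably higher test error.  Balanced: both errors moderate and close.
    print(f"{label:9s} train MSE={train_mse:.3f}  test MSE={test_mse:.3f}")
```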

When a machine learning model is unable to capture the underlying trend of the data, we call this underfitting. It is often the flip side of a remedy for overfitting: training can be halted at an early stage to stop the model from overfitting, but if it is halted too early, the model does not learn enough from the training data and underfits instead.
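A minimal sketch of how early stopping is typically wired up in Keras (assuming TensorFlow is installed; the synthetic data, network shape, and patience value are illustrative assumptions). Training halts once the validation loss stops improving, before the model starts to memorise the training set.

```python
import numpy as np
import tensorflow as tf

# Synthetic regression data for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10)).astype("float32")
y = (X @ rng.normal(size=(10, 1)) + rng.normal(scale=0.1, size=(1000, 1))).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

stopper = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",          # watch held-out loss, not training loss
    patience=5,                  # tolerate 5 stagnant epochs before stopping
    restore_best_weights=True,   # roll back to the best validation epoch
)
model.fit(X, y, validation_split=0.2, epochs=200,
          callbacks=[stopper], verbose=0)
```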

Overfitting is usually more detrimental than underfitting. The reason is that there is no real upper limit on how far overfitting can degrade generalisation performance, whereas the damage from underfitting is bounded. Consider a non-linear regression model such as a neural network or a polynomial regression: the more complexity it is given, the more closely it can fit noise in the training data, and the worse its predictions on unseen data can become.
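A minimal sketch of that asymmetry with polynomial regression in scikit-learn (the degrees, sample size, and noise level are illustrative assumptions): training error falls as the degree grows, but the test error of the highest-degree fit can blow up by orders of magnitude, while the underfit straight line's error merely plateaus.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# A handful of noisy samples of cos(2*pi*x) on [0, 1].
rng = np.random.default_rng(1)
X = np.sort(rng.uniform(0, 1, size=(15, 1)), axis=0)
y = np.cos(2 * np.pi * X).ravel() + rng.normal(scale=0.2, size=15)

# Noise-free grid to measure generalisation against the true function.
X_test = np.linspace(0.05, 0.95, 100).reshape(-1, 1)
y_test = np.cos(2 * np.pi * X_test).ravel()

for degree in (1, 4, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X, y)
    train_mse = mean_squared_error(y, model.predict(X))
    test_mse = mean_squared_error(y_test, model.predict(X_test))
    # Degree 1 underfits: both errors plateau at a bounded level.  Degree 15
    # drives training error towards zero while test error can grow unboundedly.
    print(f"degree {degree:2d}: train MSE={train_mse:.4f}  test MSE={test_mse:.1f}")
```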

Dropout guards against overfitting caused by a layer's over-reliance on a few of its inputs. Because those inputs are not always present during training (they are dropped at random), the layer learns to make use of all of them, resulting in better generalisation.
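A minimal sketch of dropout in a Keras model (assuming TensorFlow is installed; the layer sizes and the 0.5 rate are illustrative assumptions). Because each unit must cope with a randomly thinned set of inputs at every training step, no unit can lean on one specific upstream feature.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),   # zero out 50% of activations each step
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Dropout is active only during training; at inference every unit is kept,
# and Keras rescales activations during training to compensate.
```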
