Hello, I am (name) from LearnVern.
In a previous topic, we saw how regression model selection is done.
Now we will see how we can evaluate a regression model.
We will evaluate it on the basis of error.
If I have a certain output,
Let's take an example: suppose by running I was able to cover 1 km in 10 minutes, but my model predicts that I covered 2 km in 10 minutes.
So, my actual output, the one recorded on paper, is 1 km, but my algorithm predicts 2 km; the difference between 1 and 2 is 1, and that difference is the error.
So, how can we minimise this error, so that the actual or observed data and the data that the algorithm predicts match perfectly, or at least come close in value? We try to achieve this by reducing the error.
So, the quality of a regression model depends upon this error.
So, let's see this in detail: "a good regression model is one where the difference between the actual and predicted values is small".
So, this is what I was just saying: when there is little or no difference between the actual data, which we also call the observed values, and the output that the algorithm predicts, we say that our regression model is performing at its best.
So, this is our objective, and there should be no bias.
Also, the model should perform like this on any new dataset, not only on the data it was trained on. If I ask for predictions on the very same data I trained it on and the algorithm gives perfect predictions, that is fine!
But that alone should not be the case: if you give it any new data, it should give good predictions on that too.
So, if it predicts the new data well, good; if not, there are issues that you will have to deal with by regularising and optimising, which is a different part that we will cover later on. For now we are talking only about performance, so let's see how these different measures work.
So, here we have the following measures.
The first measure that you can see is mean absolute error, which we call MAE.
Second is root mean square error, which we call RMSE.
Third is the coefficient of determination, which we call R squared.
And last we have adjusted R squared.
So, these are the measures that help us evaluate the model.
So, first we will begin by understanding mean absolute error, that is, MAE.
This is a very simple measure: we take the average of the absolute errors. Like in the running example we just saw, according to the data I ran 1 km, but the algorithm says 2 km, so the difference here is 1 km, which is a huge difference.
That was just one data point; in the same way we take out the difference again and again, say for 100 observations of how much I ran in 10 minutes, and then take the mean of those difference values.
So, we calculate the errors, take their average, and that gives us the mean absolute error, whose formula you can see here.
So, 1 divided by N (N meaning the total number of data points), multiplied by the sum over all points of y minus y-hat, ok! Here y is the actual, observed value and y-hat is the predicted value that the algorithm gives.
Of y minus y-hat we have to take the absolute value, because sometimes what happens is that we do 1 minus 2 and get minus 1, so we take only its absolute value.
What matters is the magnitude of the difference: whether it is minus 1 or plus 1, the size of the error is 1, and that is what we want to reduce.
So, this gives us the value of the mean absolute error.
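To make this concrete, here is a minimal sketch in Python (assuming Python as the course language; the actual and predicted values below are made up purely for illustration):

```python
# Minimal MAE sketch; the actual/predicted values are made-up examples.
actual = [1.0, 3.0, 2.5, 5.0]      # observed values (y)
predicted = [2.0, 2.5, 3.0, 4.5]   # model predictions (y-hat)

# MAE = (1/N) * sum of |y - y_hat|
n = len(actual)
mae = sum(abs(y - y_hat) for y, y_hat in zip(actual, predicted)) / n
print(mae)  # 0.625
```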
Now, the next measure is root mean square error; in this we take the square root of the "average of squared differences between the predicted and actual values".
So, I will explain this to you with the formula.
In this you will see that we have to take the summation of predicted minus actual, the same difference between the predicted and actual values that we were talking about.
And then it is squared.
Now, you might be thinking: earlier we took the absolute value, so why square here?
Because if you keep the negative and positive differences as they are and then sum them, many times your result will become zero.
And if your result just becomes zero, what kind of calculation can you perform on it?
So, that is the reason we remove the negative sign; how do we remove it?
In MAE earlier, we took the absolute value, which removed the negative sign and kept only the magnitude.
In the same way, here we square the difference, which is a good way to turn any negative value into a positive one.
So, here also you will see that the focus is primarily on the magnitude.
So, summation of i equal to 1 to N of (predicted minus actual) squared, divided by N: this is the mean squared error; and since this is root mean square error, a square root is added over it.
Right!
So, in this way we take out our RMSE, that is, Root Mean Squared Error. Here you will notice that it talks about the error that our algorithm's predictions make.
So, both the earlier formula and this one talk about the magnitude of that error.
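Continuing the same sketch (same made-up values as before), RMSE can be computed like this:

```python
import math

# Minimal RMSE sketch; same made-up actual/predicted values as in the MAE sketch.
actual = [1.0, 3.0, 2.5, 5.0]      # observed values (y)
predicted = [2.0, 2.5, 3.0, 4.5]   # model predictions (y-hat)

# MSE = (1/N) * sum of (predicted - actual)^2; RMSE = sqrt(MSE)
n = len(actual)
mse = sum((y_hat - y) ** 2 for y, y_hat in zip(actual, predicted)) / n
rmse = math.sqrt(mse)
print(rmse)  # ~0.661
```

Notice that squaring, like taking the absolute value, removes the sign, but it also weights large errors more heavily, which is why RMSE is more sensitive to outliers than MAE.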
Now, we will move ahead to the coefficient of determination, which we call R squared; it depicts how well your model fits the dataset.
So, how good is the algorithm model for the dataset?
It helps us in knowing this.
So, what happens in this is that, once we test the model, you can see its formula over here: one minus SSR divided by SST. SSR means the sum of squared residuals, and residual means the error that we already saw in the two formulas above.
So, the difference between the actual and the predicted value is our residual, or error.
So, one minus the sum of squared residuals divided by the total sum of squares; when we compute this, we understand how much of the variation in the data our model explains.
So, this gives us a single value covering all the parameters, on whose basis we decide.
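As a rough sketch (same made-up values again), R squared can be computed directly from its definition:

```python
# Minimal R-squared sketch; same made-up actual/predicted values as before.
actual = [1.0, 3.0, 2.5, 5.0]
predicted = [2.0, 2.5, 3.0, 4.5]

mean_y = sum(actual) / len(actual)
ss_res = sum((y - y_hat) ** 2 for y, y_hat in zip(actual, predicted))  # sum of squared residuals (SSR)
ss_tot = sum((y - mean_y) ** 2 for y in actual)                        # total sum of squares (SST)
r_squared = 1 - ss_res / ss_tot
print(r_squared)  # ~0.786
```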
But there is a problem with this: as we include more and more parameters, the value of R squared also keeps increasing, and when its value increases we think our model will perform better, but in reality that does not always happen.
So, that is the reason we move on to adjusted R squared.
So, what happens in adjusted R squared is that, if you look at its formula, you will see that it uses R squared itself: one minus the quantity (1 minus R squared) times (N minus 1), divided by (N minus 1 minus p).
So, here N is the sample size,
And p is the number of regressors (predictor variables).
So, the value we get from this is more reliable and much better as compared to plain R squared.
So, that is the reason we calculate adjusted R squared as well, and use it too when measuring the performance of the model.
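Here is the matching sketch for adjusted R squared (the values of n and p below are made up only to match the tiny example above):

```python
# Minimal adjusted R-squared sketch; builds on the R-squared sketch above.
r_squared = 0.786  # value from the R-squared sketch
n = 4              # sample size (number of data points)
p = 1              # number of regressors (predictor variables)

# Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - 1 - p)
adjusted_r_squared = 1 - (1 - r_squared) * (n - 1) / (n - 1 - p)
print(adjusted_r_squared)  # ~0.679
```

Note how the penalty factor (n - 1) / (n - 1 - p) grows as p grows, so adding regressors that do not genuinely help will pull adjusted R squared down.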
So, in this way, we saw, formula-wise, the different measures we have for evaluating the performance of regression models.
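In practice, if you are working in Python with scikit-learn (an assumption; the course code may differ), the first three measures are available as ready-made functions, and adjusted R squared can be derived from r2_score as shown above:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# Same made-up actual/predicted values used in the sketches above.
actual = np.array([1.0, 3.0, 2.5, 5.0])
predicted = np.array([2.0, 2.5, 3.0, 4.5])

print(mean_absolute_error(actual, predicted))          # MAE
print(np.sqrt(mean_squared_error(actual, predicted)))  # RMSE
print(r2_score(actual, predicted))                     # R squared
```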
So, we will stop today's session here, and we will cover the remaining parts in the next session.
So, keep learning and remain motivated.
Thank you.
If you have any queries or comments, click the discussion button below the video and post them there. This way, you will be able to connect with fellow learners and discuss the course. Also, our team will try to solve your query.