So, this session is the continuation of our previous session.
So, let's see.
In today's session we are going to learn about the Naive Bayes algorithm.
And today's data set will be a social ads data set.
So at first let me upload this data set, and here you can see this data set has been uploaded. Now I will copy this path and close this window.
So we're going to perform our first step, which is loading the data set.
And we are going to take the help of import pandas as pd. So, with the help of pandas, we will load the data set.
Now, we will create a data frame, so we will form a frame named dataset, which is equal to pd.read_csv.
So with this function we will load the data set, and you can see my data set has been loaded.
Now if we want to view our data set then we can do that, so this is our data set.
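The loading step could be sketched like this in Python. The column names and the few stand-in rows below are assumptions made to keep the example self-contained, since the actual uploaded CSV path isn't shown in the session; with the real file you would pass its path to pd.read_csv instead:

```python
import io
import pandas as pd

# A few stand-in rows shaped like the social ads data set
# (User ID, Gender, Age, EstimatedSalary, Purchased).
csv_text = """User ID,Gender,Age,EstimatedSalary,Purchased
15624510,Male,19,19000,0
15810944,Male,35,20000,0
15668575,Female,26,43000,0
15603246,Female,27,57000,0
15804002,Male,19,76000,0
"""

# pd.read_csv works the same whether given a file path or a file-like object;
# in the session, the copied path of the uploaded CSV is passed here.
dataset = pd.read_csv(io.StringIO(csv_text))

# Viewing the first few rows confirms the data loaded correctly.
print(dataset.head())
```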
So, in this data set we have 400 rows and 5 columns, and you can see that we have user ID, gender, age, estimated salary and purchased.
So, purchased is our output column; it tells us whether the customer has purchased the car or not.
Zero means the customer has not purchased the car, and one means the customer has purchased the car.
A particular company ran a social advertisement campaign and collected this data, wherein at the end the company was able to find out whether each customer had purchased the car or not.
So you can see a 46-year-old female with a salary of only 41,000 purchased the car; in the same way, this 51-year-old male with a salary of 23,000 also purchased the car.
On the other hand, here you can see a 36-year-old male with a salary of 33,000 who did not purchase the car.
So this is the basic meaning of the data.
Now we will work with this data.
So let us see what are the steps that we will follow in data preprocessing.
So firstly, we will have to split the data into input and output.
For that, X is equal to dataset.iloc; here we will take all the rows, but only the second and third columns.
The reason is that we consider only age and estimated salary as relevant data, because they can affect the output, whereas the first two, that is the user ID and gender columns, do not directly affect the output.
So age and estimated salary are the columns that will decide if the customer purchases the car or not.
So we took 2 and 3.
And here, by putting .values, we will get the data in NumPy array format.
Similarly for Y: Y is equal to dataset.iloc; we want all the rows but only the fourth column, because we only require the output in Y.
So we will execute this and then we can see X comma Y also.
So this is our X, a NumPy array with two columns, and this is our Y.
So, in this way our X and Y are split.
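The iloc split described above could look like this. The small DataFrame is a stand-in with the same column order as the session's data set, so the example is self-contained:

```python
import pandas as pd

# Stand-in rows in the same column order as the session's data set.
dataset = pd.DataFrame({
    "User ID": [1, 2, 3, 4],
    "Gender": ["Male", "Female", "Male", "Female"],
    "Age": [19, 46, 36, 51],
    "EstimatedSalary": [19000, 41000, 33000, 23000],
    "Purchased": [0, 1, 0, 1],
})

# iloc[:, [2, 3]] keeps all rows and only columns 2 and 3
# (Age, EstimatedSalary); .values converts the result to a NumPy array.
X = dataset.iloc[:, [2, 3]].values

# Column 4 (Purchased) is the output.
y = dataset.iloc[:, 4].values

print(X.shape, y.shape)
```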
Now after dividing the data between input and output we will have to divide the data for training and testing as I have told you earlier.
So we will have to set aside some data from X for training and testing, and thereafter the corresponding outputs will be segregated in the same way.
For that we will take the train test split from the package of…model selection.
So we will use this model.
From SK learn dot model selection… import train underscore test split.
So, with the help of this, we are going to achieve this entirely.
So first we will have X_train, then X_test, along with Y_train and Y_test, equal to train_test_split; we pass X and Y, and here we will mention the test size as 0.25, as this is generally kept between 20 and 30 percent. Along with that, we will also keep random_state as zero, so this is also initialised.
So in this way, our data has been split.
So now we can see X_train and Y_train; as the training set has 75% of the data, we will get 300 records.
And as the testing set has 25% of the records, we will get 100 records,
let us see,
X underscore test & Y underscore test.
So, you can see that this has got 100 records, which is 25%, and the other 300 records have gone to the training set.
So, in this way we can see our data has been separated in X and Y format.
And then we have split the data between training and testing.
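The train/test split could be sketched as follows. The arrays here are stand-ins of the same size (400 rows) as the session's data, so the 300/100 split can be checked:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Stand-in arrays with 400 rows, like the social ads data set.
X = np.arange(800).reshape(400, 2)
y = np.arange(400) % 2

# test_size=0.25 keeps 25% (100 rows) for testing and 75% (300 rows)
# for training; random_state=0 makes the split reproducible.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

print(len(X_train), len(X_test))
```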
Now, we will move ahead and scale this data with the help of a standard scaler, because the values are too high in magnitude.
So, to do that we will use: from sklearn.preprocessing import StandardScaler.
Here, sc is equal to StandardScaler().
Ok… let's proceed now..
So, we will put X_train is equal to sc.fit_transform, and here we will pass X_train again.
So, with this, our X train is transformed; you can see it has been scaled down.
Similarly, we will do the same for X_test: X_test is equal to sc dot transform, and here we will pass X_test. We fit only on the training data, so that the test data is scaled using the training set's statistics.
Now, you can see how we are getting the output.
So, both the X train and X test have been scaled down over here.
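A minimal sketch of the scaling step, with stand-in feature values. Note that the conventional practice, shown here, is fit_transform on the training set but only transform on the test set, so the test data does not leak into the fitted statistics:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Stand-in training/test features; in the session these are age and salary.
X_train = np.array([[19, 19000], [46, 41000],
                    [36, 33000], [51, 23000]], dtype=float)
X_test = np.array([[27, 57000], [35, 20000]], dtype=float)

sc = StandardScaler()

# fit_transform learns the mean and standard deviation from the
# training data and scales it to mean 0 and standard deviation 1.
X_train = sc.fit_transform(X_train)

# transform reuses the training statistics on the test data.
X_test = sc.transform(X_test)

print(X_train.mean(axis=0))  # close to zero after standardisation
```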
Now, as we are using the Naive Bayes algorithm, we will import that.
So, now we will do model creation.
For model creation, from sklearn.naive_bayes we will import GaussianNB.
So, through GaussianNB, we will implement our classification model.
So gnb is equal to GaussianNB(); we will create an object for this, and through this we will train our model.
Our model is created; now we have to train it.
So, to train, we know that the fit function is used for training, and in this fit function we have to pass the data, that is X_train and Y_train.
So, in this way we have trained the model.
After this we will use the Y_pred variable for prediction.
So, Y_pred is equal to gnb, which is the trained model, dot predict, and for prediction we will pass the data of X_test.
So, in this way we now have Y_pred.
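The create/fit/predict sequence could look like this. The tiny, well-separated training set is an assumption so the example runs on its own:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Small stand-in training set: two well-separated clusters of points.
X_train = np.array([[0.0, 0.0], [0.1, 0.2], [3.0, 3.0], [3.1, 2.9]])
y_train = np.array([0, 0, 1, 1])
X_test = np.array([[0.05, 0.1], [3.05, 3.0]])

# Create the model object, train it with fit, then predict on test data.
gnb = GaussianNB()
gnb.fit(X_train, y_train)
y_pred = gnb.predict(X_test)

print(y_pred)  # one point near each cluster → [0 1]
```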
So, we already have the observed data, which is Y_test, so we will display both to compare their similarities and differences.
So, here you can see the Y test.
So, we can see some errors; for example, 1 is predicted here but actually it is 0. It's better that we compute its accuracy.
So, let's compute the accuracy.
So, for performance we will take the accuracy score, and for classification we will also take a confusion matrix.
So, we will compute these two details.
So, from sklearn.metrics we will import accuracy_score and confusion_matrix.
So, we will use this.
Now, we will check what the accuracy score is.
accuracy_score, and here we will have to pass Y_test and Y_pred.
So, comparing these two gives us the accuracy.
So, 91 percent is the accuracy.
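The accuracy computation could be sketched like this. The label lists are stand-ins (the real ones come from the model above), chosen so the score is easy to check by hand:

```python
from sklearn.metrics import accuracy_score

# Stand-in observed and predicted labels; in the session these are
# y_test and the model's y_pred.
y_test = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0]
y_pred = [0, 1, 1, 0, 1, 0, 1, 1, 1, 0]

# accuracy_score is the fraction of predictions matching the observations.
acc = accuracy_score(y_test, y_pred)
print(acc)  # 9 of 10 correct → 0.9
```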
If you remember, the SVM accuracy was 93, and before that it was 90; so this is better than logistic regression and KNN, and its accuracy is less than SVM's, if you compare.
Now, here we will move ahead.
And use cf is equal to confusion_matrix; in this we will pass Y_test and Y_pred.
And let’s view our confusion matrix.
So, this time we have 5 and 4 wrong predictions, while last time we had just 6 or 7.
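The confusion matrix could be read like this, again with stand-in labels so the counts can be verified:

```python
from sklearn.metrics import confusion_matrix

# Stand-in observed and predicted labels.
y_test = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0]
y_pred = [0, 1, 1, 0, 1, 0, 1, 1, 1, 0]

# Rows are the true classes, columns the predicted classes:
# cf[0, 1] counts true 0s predicted as 1; cf[1, 0] true 1s predicted as 0.
# The off-diagonal entries are the wrong predictions.
cf = confusion_matrix(y_test, y_pred)
print(cf)
```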
Now, let's visualise and see it, once, graphically as to which data points have been incorrectly classified.
For that, we will do it for the test dataset.
In that we have the observed values, and the other is the predicted values.
So, first we need the X that we will use for plotting: X is equal to X_test, all the rows, column zero.
And for Y: Y is equal to X_test, all the rows, column one.
And for colour we will take Y_test.
Now, we will use exactly the same commands for the predictions,
and here for the colour we will just change it to Y_pred.
Now, our X and Y are fixed.
So, before plotting we will import the library.
From matplotlib we will import pyplot as plt,
so: from matplotlib import pyplot as plt.
After this, here we can use plt.scatter; in it, we will pass X and Y and give c equal to the colour, and enter.
Now, you can see this is our original (observed) data.
And the same plotting we will do for the predictions:
plt.scatter(X, Y, c=Y_pred).
So, here at some places the prediction is yellow where the actual data points were blue.
So, here also if you see there are 2 wrong predictions: one yellow is depicted as blue, and one blue is depicted as yellow.
So, we can see some mistakes, especially in the overlapping portions.
So, there are mistakes because the model generalises and does not try to overfit.
So, in this way through visualisation we can understand it in a much better way.
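The two scatter plots described above could be sketched as follows. The features and labels are randomly generated stand-ins, and the deliberately imperfect predictions are an assumption made so the two plots visibly differ; points whose colour changes between the panels are the misclassified ones:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen so no display is needed
from matplotlib import pyplot as plt
import numpy as np

# Stand-in scaled test features and labels.
rng = np.random.default_rng(0)
X_test = rng.normal(size=(100, 2))
y_test = (X_test[:, 0] + X_test[:, 1] > 0).astype(int)
y_pred = (X_test[:, 0] > 0).astype(int)  # imperfect predictions, on purpose

x = X_test[:, 0]  # first column on the horizontal axis
y = X_test[:, 1]  # second column on the vertical axis

# One panel coloured by the observed labels, one by the predicted labels.
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.scatter(x, y, c=y_test)
ax1.set_title("Observed")
ax2.scatter(x, y, c=y_pred)
ax2.set_title("Predicted")
fig.savefig("naive_bayes_scatter.png")
```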
If you have any queries or comments, click the discussion button below the video and post there. This way, you will be able to connect with fellow learners and discuss the course. Also, our team will try to solve your query.