So, in our machine learning course, today we will continue from our previous tutorial.
So, let us see.
So, in today's practical we are learning about support vector machines.
So, we have already discussed and learned about the concept of support vectors, along with the kernel function, with whose help the learning function tries to map inputs to outputs.
Now, why is this algorithm special and different from the others? Because it transforms the input features into a higher-dimensional space and then separates them there.
So, in this way it is different from the rest.
So, let's see a demonstration with a basic example, and then we will also work on an actual dataset.
So, firstly let's begin with the imports: import numpy as np, and then from sklearn.preprocessing I will import StandardScaler.
After this, we will do from sklearn.pipeline, because we have multiple steps to execute, so I will use make_pipeline.
Ok, then lastly we will also do from sklearn.svm import SVC.
So, this way we have imported all the libraries that are going to be used ahead.
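The imports mentioned above can be collected together as follows; this is a minimal sketch using the standard scikit-learn package names:

```python
# Libraries used throughout this practical
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
```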
So, firstly we will just look at the demo, so I will create sample data.
So, for that we will have both X and Y.
So, we will write X is equal to the np.array function.
Then we will pass a list to np.array.
Each element of this list will itself be a list, so we have one more list inside, and in this we will pass minus 1 comma minus 1; this is our first element.
So, to make it easier to follow, I will press enter after each element, like this, so that you can view it easily.
So, our first element is minus 1 comma minus 1; after a comma we put the second element, minus 2 comma minus 1; the third element is 1 comma 1; and one more element, 2 comma 1, which is our last element.
So, similarly for Y we will create an array: Y is equal to np.array. Since X has 4 elements, I will simply pass one list here with 1 comma 1 comma 2 comma 2.
So, here you can see that for minus 1 comma minus 1 the output is 1, for minus 2 comma minus 1 the output is 1, for 1 comma 1 the output is 2, and for 2 comma 1 the output is 2.
From this we can form a notion: points with positive values get output 2, and points with negative values get output 1.
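The sample data described above would look like this in code:

```python
import numpy as np

# Four 2-D points: negative points are labelled 1, positive points 2
X = np.array([[-1, -1],
              [-2, -1],
              [ 1,  1],
              [ 2,  1]])
y = np.array([1, 1, 2, 2])
```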
Now, moving ahead: from the sklearn.svm package I have imported SVC.
So, we will create an object for this.
So, for classifiers we generally write CLF.
So, CLF is equal to make_pipeline, and in that I will create two objects: one is StandardScaler, so that the data is scaled, and the second is SVC. In SVC, gamma is 'scale' by default, but here we will set it to 'auto', so gamma is equal to auto.
So, this is the way I am making an object.
With the help of make_pipeline, my object is formed such that whatever data comes in will first undergo scaling, and then SVC will be applied.
Otherwise we would have to do it separately, first scaling and then applying SVC. We can do it that way too, but the pipeline forms one object and executes the steps one after the other.
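The pipeline object described above can be sketched like this:

```python
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# make_pipeline chains the steps: incoming data is standardised
# first, then passed to the SVC classifier
clf = make_pipeline(StandardScaler(), SVC(gamma='auto'))
```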
Now, we will move ahead and train this.
CLF dot fit: with the fit method we can train it. I have both X and Y, so after passing them I am training it.
So, this is trained, and after that we can get a prediction.
So, with the CLF dot predict method we can make a prediction, and here you pass values in a list. What kind of input do we have? Our input is a list containing one more list, and in that inner list there is the data. So, suppose I put minus 0.7 comma minus 1.
So, the data is negative, and for negative values the output was 1, so I am expecting 1 to come here as well.
So, you can see we are getting 1. After this I am giving a confusing value, one coordinate positive and the other negative, and it has given us 2 as output.
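Putting the training and prediction steps together gives a minimal runnable sketch. The exact mixed point used in the session was not stated, so the value 0.7 comma minus 1 below is only an assumed illustration:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]])
y = np.array([1, 1, 2, 2])

clf = make_pipeline(StandardScaler(), SVC(gamma='auto'))
clf.fit(X, y)

# A point with negative coordinates should fall in class 1
print(clf.predict([[-0.7, -1]]))  # → [1]
# A mixed point (one positive, one negative coordinate) is assigned
# to whichever side of the learned boundary it falls on
print(clf.predict([[0.7, -1]]))
```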
So, this is how we implemented SVM.
Now in this SVM, we wish to put some data.
From sklearn.datasets we will import load_iris. We are very familiar with this dataset; if you are practising, you can use other datasets besides this one.
So, here I have loaded the iris data.
And here I have written X comma Y is equal to load_iris, and in load_iris I passed return_X_y is equal to True.
So, this way our data is split into X and Y.
We have already seen in earlier programmes that we can take X and Y directly like this.
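Loading the iris data with return_X_y, as described, looks like this:

```python
from sklearn.datasets import load_iris

# return_X_y=True returns the features and labels directly
# as an (X, y) pair instead of a Bunch object
X, y = load_iris(return_X_y=True)
print(X.shape, y.shape)  # → (150, 4) (150,)
```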
Now, I am creating CLF1. Again I am using make_pipeline, and again I will create a StandardScaler object as before; the output of the StandardScaler will go into SVC, and in SVC I will set gamma's value to auto.
So, this is our CLF1 created.
So, we will do CLF1 dot fit now. At the time of fitting we will pass our entire data in this, so X comma Y. We passed X and Y.
Now, this data is trained.
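The CLF1 pipeline and the fit call described above can be sketched as:

```python
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Same two-step pipeline as before, trained on the full iris dataset
clf1 = make_pipeline(StandardScaler(), SVC(gamma='auto'))
clf1.fit(X, y)
```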
Now, we will get the prediction through CLF1 dot predict. Here we have to pass values of X, so let's first see how many rows we can take.
So, we will use the X dot iloc function; here we want the first two rows, that is 0 and 1.
Ok! So this is a NumPy array, but I was treating it as pandas, and that is why we got this error.
So, here I can put colon 2 instead, and it gives me the output; you can see it has returned the first two records like this.
So, here I will pass this. This is a NumPy array; I was thinking it was a DataFrame, which it is not.
Now that I have treated it as an array, it is giving 0 and 0 as output for both.
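Since X is a NumPy array, the first two rows are taken with plain slicing rather than .iloc; a sketch of the prediction:

```python
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
clf1 = make_pipeline(StandardScaler(), SVC(gamma='auto')).fit(X, y)

# X[:2] slices the first two rows; .iloc only works on pandas objects
print(clf1.predict(X[:2]))  # → [0 0]
```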
Now, I know that if we pass rows from the range 50 to 100, for instance 50 and 53, their category should come out as 1.
So, you can see it is giving us 1.
And for rows above 100, like 130 and 133, their values should come as 2.
So, you can see their value has come as 2.
So, in this way it is providing us the iris classes as output.
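The later rows can be checked the same way; in the iris dataset, rows 50 to 99 belong to class 1 and rows 100 to 149 to class 2, which matches the outputs seen in the session:

```python
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
clf1 = make_pipeline(StandardScaler(), SVC(gamma='auto')).fit(X, y)

# Rows 50-99 are class 1 (versicolor), rows 100-149 are class 2 (virginica)
print(clf1.predict(X[[50, 53]]))    # → [1 1]
print(clf1.predict(X[[130, 133]]))  # the session reported [2 2] here
```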
Ok! So execute this with a new dataset as well and practise.
So friends, let's conclude here for today; we will stop today's session here.
We will cover the further parts in the next session.
Till then, keep learning and remain motivated.
If you have any queries or comments, click the discussion button below the video and post them there. This way, you will be able to connect with fellow learners and discuss the course. Also, our team will try to solve your queries.