Welcome to the Machine Learning course.
This tutorial is a continuation of our previous session.
In today's tutorial we are looking at the Support Vector Regressor (SVR).
We previously saw SVM when we were studying the classification problem; there we studied the Support Vector Machine used as a classifier.
A classifier is used for discrete categories, for example problems with two or three classes.
A regressor, on the other hand, works with continuous values.
So, let's implement it and see how it works.
So, we will import the libraries. From sklearn.svm we will import SVR, the Support Vector Regressor; remember that sklearn.svm provides both SVR and SVC.
Along with that, from sklearn.pipeline we will import make_pipeline, because we are going to chain it with a standard scaler.
Next, from sklearn.preprocessing we will import StandardScaler.
Lastly, for a small example, we will import numpy.
So: import numpy as np.
Alright, so we have imported all the libraries we need.
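The imports described above can be written as follows (a minimal sketch; `np` is the conventional alias for NumPy):

```python
from sklearn.svm import SVR                       # Support Vector Regressor
from sklearn.pipeline import make_pipeline        # chains preprocessing with the model
from sklearn.preprocessing import StandardScaler  # scales features to zero mean, unit variance
import numpy as np
```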
Now, here we will randomly generate some numbers: we will generate n_samples samples, each with n_features features.
So, suppose we have 10 samples and 5 features.
Now, with the help of numpy we will generate random numbers.
So here, first we will set a random state: r = np.random.RandomState(0).
Now, using this r, we will generate the data.
So, how will we generate the data?
We will generate it in this way: y = r.randn(n_samples), because n_samples is 10, so y is generated.
Now, how will we generate X?
For X we will pass both n_samples and n_features.
So: X = r.randn(n_samples, n_features). See this!
So, we will cross-check once what is in our X.
See this, we have this data in our X.
Counting the rows, 1, 2, 3, 4, and so on, there are 10, and there are 5 columns, because we had given 5 features.
Similarly, let's see what is in y (note that y is lowercase).
Here we have 10 values; this is the output.
So, in this way our sample data is created.
So, whenever you want to practice and you don't have a dataset, you can make one in this way.
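The data-generation steps above can be sketched like this (the variable names `n_samples` and `n_features` follow the narration):

```python
import numpy as np

n_samples, n_features = 10, 5

# Fixed seed so the random numbers are reproducible across runs
r = np.random.RandomState(0)

y = r.randn(n_samples)              # 10 target values
X = r.randn(n_samples, n_features)  # 10 rows, 5 feature columns

print(X.shape)  # (10, 5)
print(y.shape)  # (10,)
```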
Now, let's move ahead,
Now, we want to create an SVR.
So, here I will create the model; this model is for regression. model = make_pipeline(...), because inside make_pipeline the data is first sent to StandardScaler, and after getting scaled there it goes to SVR.
SVR has some parameters that you can see, like kernel, degree, gamma, and coef0; you can tweak them if you want, or leave them at their defaults. For now I will keep the defaults.
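The pipeline described above can be built like this, with SVR left at its defaults (in current scikit-learn versions that means kernel='rbf', degree=3, gamma='scale', coef0=0.0):

```python
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Data is first standardized by StandardScaler, then passed to the SVR
model = make_pipeline(StandardScaler(), SVR())
print(model)
```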
So, in this way I have created the model.
Now, after creating the model, we train it with model.fit, passing X and y into it.
So, in this way our model will be trained.
After training, the next step is model.predict.
So, our next step is to make predictions; this time I will pass the entire X.
And here I will display y alongside the predictions and show you.
So, this is your y and these are your predictions. Ok!
So, can there be a difference between them?
Yes, absolutely! You can see it here. The accuracy will be very low on this random data, but this is how SVR works.
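Putting the training and prediction steps together, here is a self-contained sketch that repeats the earlier setup:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Recreate the random sample data from earlier
r = np.random.RandomState(0)
y = r.randn(10)
X = r.randn(10, 5)

# Scale, then regress
model = make_pipeline(StandardScaler(), SVR())
model.fit(X, y)            # train on X and y

pred = model.predict(X)    # predict on the same X
print(y)
print(pred)                # predictions will not match y closely; the data is pure noise
```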
So, if you want to implement SVR, this is the way.
You can also organise any other dataset into this X and y format and apply the same steps.
So, friends, let's conclude here for today. We will end this session here, and we will cover the upcoming parts in the next session.
So keep learning and stay motivated.
If you have any queries or comments, click the discussion button below the video and post them there. This way you will be able to connect with fellow learners and discuss the course, and our team will try to resolve your query.