Hello, I am (name) from learnvern. (6 seconds pause, music) Continuing from the last tutorial on machine learning, we will proceed ahead in this tutorial, so let's watch it. Now we are advancing to a new topic, a new module, and its name is dimensionality reduction. While implementing machine learning algorithms you must have experienced that if I have two, three or four features, and for those features I have given a target variable, then it becomes very easy to apply machine learning on those two or three features. But if these features increase from 2 or 3 to 30, then what will you do? Does the machine learning algorithm face some issues, or does it perform the same way it was working for two features? So here we have to understand that the more features there are, the more the complexity, the more the storage space required, and the more the challenges for machine learning algorithms as well. And here we try to use dimensionality reduction techniques with which we can reduce 30 parameters down to 3, 4 or 5. So here we will learn how so many features, meaning so many dimensions, can be reduced and brought down to fewer dimensions. So let's get started. So I was telling you that in machine learning algorithms dimensionality reduction is a very important aspect, because whatever input variables there are, which we call X1, X2, X3, we reduce them; by reducing them it becomes easier for machine learning algorithms and the storage required also reduces. So let's look at two concepts ahead: Feature Selection and Feature Extraction.
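To make the idea of "reducing 30 features to 3" concrete, here is a minimal sketch using synthetic data and a basic PCA-style projection written with plain numpy (PCA itself is covered in detail in the next tutorial; the data and sizes here are made up purely for illustration):

```python
import numpy as np

# A minimal sketch of dimensionality reduction: 200 samples with 30
# input features are projected down to just 3 components. The dataset
# here is synthetic random data, used only to show the shapes involved.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))          # 200 rows, 30 input features (X1..X30)

# Centre the data, then take the top eigenvectors of the covariance matrix,
# i.e. the directions along which the data varies the most.
X_centered = X - X.mean(axis=0)
cov = np.cov(X_centered, rowvar=False)  # 30 x 30 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)  # eigh returns eigenvalues in ascending order
top3 = eigvecs[:, ::-1][:, :3]          # keep the 3 highest-variance directions

X_reduced = X_centered @ top3           # project 30 features down to 3
print(X_reduced.shape)                  # (200, 3)
```

The machine learning algorithm can now be trained on the 3-column `X_reduced` instead of the 30-column `X`, which is exactly the saving in complexity and storage described above.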
Now in feature selection, selecting means that if I have 10 features then I will select 2 or 3, and these 2 or 3 will be the ones that affect the output the most, meaning they will be the most important features. So I will have to find out which 2 or 3 features affect the output the most. The second is feature extraction. Feature extraction means that if the data has two features then I can represent it in a 2D graph, and if it has three then I can make a 3D graph, but if it has 4 then what will I do? So here, if I talk about dimensions, multidimensional data means there are many input features, and converting or transforming them into a smaller number of dimensions is called feature extraction. Now, proceeding ahead, we will look at two techniques: one is PCA, Principal Component Analysis, and the second one is LDA, Linear Discriminant Analysis. So we will understand them in detail. What is PCA? It is a dimensionality reduction technique; it reduces the dimensions, and we will understand it in detail in the next presentation. And the next topic that we have in this module is LDA, which is also called Normal Discriminant Analysis or Discriminant Function Analysis. What it does is find a linear combination of features; we will also understand what a linear combination of features is. Using that combination, it separates two or more classes, or identifies them distinctly, OK. So these are the two techniques that we are going to study. So let's go for the next tutorial, that is PCA. Thank you very much, keep watching.
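The feature selection idea above can be sketched with a simple correlation-based ranking: out of 10 candidate features, keep the 3 whose absolute correlation with the target is highest. The coefficients and data below are invented for illustration; a real pipeline would typically use library tools such as scikit-learn's selectors instead of this hand-rolled ranking:

```python
import numpy as np

# A hedged sketch of feature selection: the target y is built (by
# construction) from only 3 of the 10 features, so ranking features by
# |correlation with y| should surface those "most important" columns.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))                     # 10 candidate input features
y = 3.0 * X[:, 0] + 2.0 * X[:, 4] + 1.0 * X[:, 7]  # only features 0, 4, 7 matter

# Absolute correlation of each feature column with the target.
corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(10)])
selected = np.argsort(corr)[::-1][:3]              # indices of the top 3 features
print(sorted(selected))
```

Training on just these selected columns keeps the features that affect the output the most, which is exactly the goal described above.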
If you have any queries or comments, click the discussion button below the video and post them there. This way, you will be able to connect with fellow learners and discuss the course. Our team will also try to solve your query.
Share a personalized message with your friends.