LDA makes predictions by estimating the probability that a new set of inputs belongs to each class. The class with the highest probability becomes the output, and that is the prediction.
The goal of LDA is to find a feature subspace that maximises class separability. Principal component analysis (PCA), by contrast, is an unsupervised dimensionality reduction technique: it ignores the class labels and focuses on capturing the directions of highest variance in the data set.
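As a minimal sketch of this difference (assuming scikit-learn and its bundled iris dataset; the estimator names below come from that library, not from the lesson itself), the code projects the same data with LDA and PCA and uses LDA's posterior probabilities for prediction:

```python
# Sketch: supervised LDA projection vs. unsupervised PCA projection.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)

# LDA uses the class labels y to find directions that separate the classes.
lda = LinearDiscriminantAnalysis(n_components=2)
X_lda = lda.fit_transform(X, y)

# PCA ignores y and keeps only the directions of highest variance.
pca = PCA(n_components=2)
X_pca = pca.fit_transform(X)

# LDA is also a classifier: it predicts the class with the highest
# estimated posterior probability for each sample.
preds = lda.predict(X[:3])
probs = lda.predict_proba(X[:3])
print(preds, probs.round(3))
```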
Linear discriminant analysis estimates the likelihood that a new set of inputs belongs to each class and chooses the class with the highest probability as the output. If the output class is (k) and the input is (x), Bayes' theorem estimates the probability that the data belongs to each class as follows.
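In the usual LDA notation (an assumption here: \( \pi_k \) denotes the prior probability of class \( k \) and \( f_k(x) \) the class-conditional density of \( x \), typically modelled as a Gaussian with a shared covariance matrix), this posterior can be written as:

\[
P(Y = k \mid X = x) = \frac{\pi_k \, f_k(x)}{\sum_{l=1}^{K} \pi_l \, f_l(x)}
\]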