Welcome to the Machine Learning course. This tutorial continues from our previous one, so let's move ahead.
In today's tutorial we will look at hierarchical clustering, that is, how we can perform clustering by building a hierarchy.
As we saw earlier, there are two techniques for this.
The first is Agglomerative. Suppose we have data points A, B, and C; at the start, each point is its own separate cluster.
Then we notice that A and B look like one cluster while C seems to belong to some other cluster, so we build clusters step by step by merging two or three points into groups.
This bottom-up merging is the direction in which Agglomerative works.
Now, the second is Divisive.
Here, at the start, we consider the whole dataset as one cluster, so A, B, and C all begin together.
Our approach is then to find differences: A and B look similar while C looks a little different, so we split the cluster apart. The two approaches work in opposite directions.
So, let's try Agglomerative.
Last time we imported KMeans from sklearn.cluster; this time we will import the agglomerative estimator from the same module.
If you try to import it as just "Agglomerative", you get: "cannot import name Agglomerative".
Yes! Because the class is actually named AgglomerativeClustering.
So, we took this, and along with it we will import
numpy as np.
Now let's create sample data, as we have done before.
Here, x = np.array(...) and we pass a list of records: 1,2; then 1,4; then 1,0; next 4,2; next 4,4; and finally 4,0.
So, my data is ready.
Now we will apply Agglomerative Clustering on this data.
So, here clustering = AgglomerativeClustering(), which creates the model.
Next we train this clustering model with the fit method, passing x to it.
Now, what can we do in the next step?
If you type clustering-dot and look for predict, you will find that this estimator has no separate predict method: fit has already divided the data into clusters, so I don't need that step.
Instead, I can simply inspect the labels on whose basis it has divided the data.
It has given the labels as 1, 1, 1, 0, 0, 0.
So the first three records got label 1, and the last three got label 0.
The first group is 1,2; 1,4; 1,0.
Here the first coordinate is small (1) in the first group and large (4) in the records below, so the clustering it has shown us here looks logical.
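Putting the whole walkthrough together, here is a runnable sketch (assuming scikit-learn and NumPy are installed) of the same example:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Sample data: six points in two visible groups (first coordinate 1 vs 4)
x = np.array([[1, 2], [1, 4], [1, 0],
              [4, 2], [4, 4], [4, 0]])

# Default n_clusters=2; fit() assigns the cluster labels directly --
# this estimator has no separate predict() step
clustering = AgglomerativeClustering()
clustering.fit(x)

# The first three points land in one cluster, the last three in the other
print(clustering.labels_)
```

Which cluster gets the number 0 and which gets 1 is arbitrary, so check that the first three labels match each other and differ from the last three, rather than relying on the exact digits.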
So, this is Agglomerative Clustering. Practice it by implementing it on real-world datasets, and then tell us on the forum which dataset you used.
So, friends, let's conclude here for today. We will stop this session here and cover the remaining parts in the next session.
So, keep learning and stay motivated.
If you have any queries or comments, click the discussion button below the video and post there. This way you will be able to connect with fellow learners and discuss the course, and our team will also try to resolve your query.