In unsupervised learning there are no labels against which to automatically calculate your model's accuracy or effectiveness, so splitting the dataset makes little sense. One way to get a feel for how well an unsupervised model is working is to manually inspect the samples it detects.
The training set holds the data on which the model is fitted.
The development (validation) set is used to check the trained model's accuracy and to tune its hyper-parameters.
The test set contains the data on which the trained and validated model is finally evaluated.
The most straightforward technique for dividing the modelling dataset into training and testing sets is to assign two-thirds of the data points to the training set and one-third to the test set. We then train the model on the training set and apply it to the test set, which lets us assess the model's performance on data it did not see during training.
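A minimal sketch of this two-thirds/one-third split, assuming scikit-learn's `train_test_split` is available (the toy feature and label arrays are invented for illustration):

```python
from sklearn.model_selection import train_test_split

# Toy dataset: 12 samples with a single feature and a binary label.
X = [[i] for i in range(12)]
y = [i % 2 for i in range(12)]

# Hold out one third of the data for testing; the remaining two thirds
# are used to train the model. random_state fixes the shuffle so the
# split is reproducible.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=1 / 3, random_state=42
)

print(len(X_train), len(X_test))  # 8 4
```

Because the split is randomised, fixing `random_state` is a common practice so that experiments can be reproduced exactly.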
In machine learning, data splitting is widely used to divide data into train, validation, and test sets. This lets us tune the model's hyper-parameters on the validation set and estimate its generalisation performance on the test set.
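A three-way split can be sketched by calling `train_test_split` twice, again assuming scikit-learn; the 60/20/20 proportions below are an illustrative choice, not a rule:

```python
from sklearn.model_selection import train_test_split

# Toy dataset: 20 samples with a single feature and a binary label.
X = [[i] for i in range(20)]
y = [i % 2 for i in range(20)]

# First carve off the test set (20% of all data), then split the
# remainder into train (60% overall) and validation (20% overall).
X_rest, X_test, y_rest, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)
X_train, X_val, y_train, y_val = train_test_split(
    X_rest, y_rest, test_size=0.25, random_state=0  # 0.25 * 0.8 = 0.2
)

print(len(X_train), len(X_val), len(X_test))  # 12 4 4
```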
The main goal of setting aside a validation set is to avoid overfitting, which occurs when a model becomes extremely good at classifying samples in the training set but is unable to generalise and make accurate classifications on data it has never seen before.
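One hedged sketch of how the validation set guards against overfitting: tune a hyper-parameter (here the number of neighbours in a k-NN classifier, chosen purely for illustration) on the validation set, and touch the test set only once with the final model. The synthetic data from `make_classification` is an assumption for the example, not part of the original text.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic binary classification data (illustrative only).
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# 60/20/20 train/validation/test split.
X_rest, X_test, y_rest, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)
X_train, X_val, y_train, y_val = train_test_split(
    X_rest, y_rest, test_size=0.25, random_state=0
)

# Tune the hyper-parameter k on the validation set, never on the test set.
best_k, best_acc = None, -1.0
for k in (1, 3, 5, 7):
    model = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    acc = model.score(X_val, y_val)
    if acc > best_acc:
        best_k, best_acc = k, acc

# Only the final, chosen model is evaluated on the test set.
test_acc = (
    KNeighborsClassifier(n_neighbors=best_k)
    .fit(X_train, y_train)
    .score(X_test, y_test)
)
print(best_k, test_acc)
```

Selecting `k` by test-set accuracy instead would leak information from the test set into the model choice, making the final performance estimate optimistic.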