Principal Component Analysis (PCA) is an unsupervised learning technique used in machine learning to reduce dimensionality. It is a statistical method that, by means of an orthogonal transformation, converts observations of possibly correlated features into a set of linearly uncorrelated variables called principal components.
PCA is one of the most widely used unsupervised machine learning techniques, with applications including exploratory data analysis, dimensionality reduction, data compression, de-noising, and more.
Because it is unsupervised, PCA requires no labels to reduce the size of a dataset's dimensions. Models with many input variables, that is, high dimensionality, are more likely to perform poorly when operating on larger input datasets. PCA helps by discovering correlations between variables and decoupling them, collapsing groups of correlated features into a smaller number of uncorrelated components.
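The decorrelation described above can be sketched with scikit-learn. This is a minimal illustration on synthetic data (the feature construction and component count are assumptions for the example, not part of the original text): two of the three features are strongly correlated, and PCA collapses them so that the resulting components are uncorrelated with each other.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Synthetic data: feature 2 is almost a copy of feature 1 (strong correlation),
# feature 3 is independent noise.
x = rng.normal(size=200)
X = np.column_stack([
    x,
    2 * x + rng.normal(scale=0.1, size=200),
    rng.normal(size=200),
])

# Reduce 3 correlated features to 2 uncorrelated components.
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)                    # (200, 2)
print(pca.explained_variance_ratio_)      # share of variance each component keeps

# The components are linearly uncorrelated: the off-diagonal of their
# sample covariance matrix is (numerically) zero.
cov = np.cov(X_reduced, rowvar=False)
print(abs(cov[0, 1]))
```

Note how the first component alone captures most of the variance here, because the two correlated features carry essentially the same information.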
PCA can also be used indirectly in supervised learning tasks such as classification and regression. When you have a large number of features, applying a feature-reduction method like PCA is one way to shrink the number of features and help avoid overfitting.
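One common way to use PCA indirectly for classification is as a preprocessing step in a pipeline. The sketch below (the dataset, component count, and classifier are choices made for this example, not prescribed by the text) standardises the features, reduces the 64 pixel features of the digits dataset to 20 components, then fits a logistic regression on the reduced representation.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# 8x8 digit images: 64 pixel features per sample.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scale, project onto 20 principal components, then classify.
clf = make_pipeline(
    StandardScaler(),
    PCA(n_components=20),
    LogisticRegression(max_iter=1000),
)
clf.fit(X_train, y_train)
acc = clf.score(X_test, y_test)
print(f"test accuracy with 20 of 64 features: {acc:.3f}")
```

Fitting PCA inside the pipeline matters: the components are learned from the training split only, so no information from the test set leaks into the transformation.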
While applying PCA to discrete data or one-hot encoded categorical variables is technically possible, it is not recommended. Simply put, do not apply PCA to variables that do not naturally belong on a continuous coordinate plane.