A decision tree uses a tree structure to build regression or classification models. It incrementally divides a dataset into smaller and smaller subsets while simultaneously growing the associated decision tree. The final result is a tree with decision nodes and leaf nodes.
A regression tree is constructed using binary recursive partitioning, an iterative procedure that splits the data into partitions or branches and then splits each partition into smaller groups as the method moves down each branch.
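To make the idea concrete, here is a minimal sketch of a single step of binary recursive partitioning for regression: try each candidate threshold on one feature and keep the split that minimizes the summed squared error of the two resulting partitions. The function names and toy data are illustrative assumptions; real implementations also handle multiple features, stopping criteria, and pruning.

```python
def sse(ys):
    """Summed squared error of a partition around its mean prediction."""
    if not ys:
        return 0.0
    mean = sum(ys) / len(ys)
    return sum((y - mean) ** 2 for y in ys)

def best_split(xs, ys):
    """Return (threshold, total_sse) for the best binary split on xs."""
    best = (None, float("inf"))
    for threshold in sorted(set(xs))[1:]:  # candidate cut points
        left = [y for x, y in zip(xs, ys) if x < threshold]
        right = [y for x, y in zip(xs, ys) if x >= threshold]
        total = sse(left) + sse(right)
        if total < best[1]:
            best = (threshold, total)
    return best

# Toy data: two clearly separated groups of target values.
xs = [1, 2, 3, 10, 11, 12]
ys = [1.0, 1.1, 0.9, 5.0, 5.1, 4.9]
threshold, error = best_split(xs, ys)
```

Applied recursively to each partition (until a stopping rule such as minimum node size is met), this split-finding step grows the full regression tree.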
The major distinction between classification and regression decision trees lies in the dependent variable: classification trees predict an unordered, categorical dependent variable, whereas regression trees predict an ordered, continuous one. Regression trees can therefore work with both ordered and continuous values.
Decision trees have several advantages:
Simple to understand and interpret.
Little data preparation is required.
The cost of using the tree (predicting data) is logarithmic in the number of data points used to train it.
Capable of handling both numerical and categorical data.
Capable of handling multi-output problems.
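The points above can be illustrated with a short sketch using scikit-learn (assumed to be installed); the feature values and labels here are made-up toy data. Numerical features are used directly, while a categorical feature is encoded as an integer column before training.

```python
from sklearn.tree import DecisionTreeClassifier

# Toy data: each row is [height_cm, encoded_category].
# The second column is a categorical feature encoded as 0/1.
X = [[130, 0], [150, 0], [170, 1], [180, 1]]
y = ["small", "small", "large", "large"]

# An unconstrained tree fits this consistent training data exactly.
clf = DecisionTreeClassifier(random_state=0)
clf.fit(X, y)

prediction = clf.predict([[150, 0]])[0]
```

Because the tree is left unconstrained, it perfectly separates the training examples, so predicting a training point returns its original label.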