Random forest

Random forests or random decision forests are an ensemble learning method for classification, regression and other tasks that operates by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes (classification) or the mean prediction (regression) of the individual trees. Random decision forests correct for decision trees' habit of overfitting to their training set.

The first algorithm for random decision forests was created by Tin Kam Ho using the random subspace method, which, in Ho's formulation, is a way to implement the 'stochastic discrimination' approach to classification proposed by Eugene Kleinberg. An extension of the algorithm was developed by Leo Breiman and Adele Cutler, who registered 'Random Forests' as a trademark (as of 2019, owned by Minitab, Inc.). The extension combines Breiman's 'bagging' idea and random selection of features, introduced first by Ho and later independently by Amit and Geman, in order to construct a collection of decision trees with controlled variance.

The general method of random decision forests was first proposed by Ho in 1995. Ho established that forests of trees splitting with oblique hyperplanes can gain accuracy as they grow without suffering from overtraining, as long as the forests are randomly restricted to be sensitive to only selected feature dimensions. A subsequent work along the same lines concluded that other splitting methods, as long as they are randomly forced to be insensitive to some feature dimensions, behave similarly. Note that this observation, that a more complex classifier (a larger forest) becomes more accurate nearly monotonically, is in sharp contrast to the common belief that the complexity of a classifier can only grow to a certain level of accuracy before being hurt by overfitting. The explanation of the forest method's resistance to overtraining can be found in Kleinberg's theory of stochastic discrimination.

The early development of Breiman's notion of random forests was influenced by the work of Amit and Geman, who introduced the idea of searching over a random subset of the available decisions when splitting a node, in the context of growing a single tree. The idea of random subspace selection from Ho was also influential in the design of random forests. In this method a forest of trees is grown, and variation among the trees is introduced by projecting the training data into a randomly chosen subspace before fitting each tree or each node. Finally, the idea of randomized node optimization, where the decision at each node is selected by a randomized procedure rather than a deterministic optimization, was first introduced by Dietterich. The introduction of random forests proper was first made in a paper by Leo Breiman.
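The core procedure described above, growing many trees on resampled data with random feature selection and aggregating their outputs by majority vote, can be sketched in a few lines of Python. The class below is a hypothetical, simplified illustration rather than Breiman's or any library's implementation; it assumes numpy arrays and non-negative integer class labels, and it borrows scikit-learn's DecisionTreeClassifier only as a convenient base learner.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

class SimpleRandomForest:
    """Illustrative sketch: bootstrap rows (bagging) plus a random feature
    subset per tree (random subspace), prediction by majority vote."""

    def __init__(self, n_trees=100, max_features=None, random_state=0):
        self.n_trees = n_trees
        self.max_features = max_features          # number of features shown to each tree
        self.rng = np.random.default_rng(random_state)
        self.trees = []                           # fitted DecisionTreeClassifier objects
        self.feature_subsets = []                 # column indices used by each tree

    def fit(self, X, y):
        n_samples, n_features = X.shape
        k = self.max_features or max(1, int(np.sqrt(n_features)))
        for _ in range(self.n_trees):
            rows = self.rng.integers(0, n_samples, n_samples)          # bootstrap sample
            cols = self.rng.choice(n_features, size=k, replace=False)  # random subspace
            tree = DecisionTreeClassifier(random_state=0)
            tree.fit(X[np.ix_(rows, cols)], y[rows])
            self.trees.append(tree)
            self.feature_subsets.append(cols)
        return self

    def predict(self, X):
        # Each tree votes using its own feature subset; return the modal class per row.
        votes = np.array([tree.predict(X[:, cols])
                          for tree, cols in zip(self.trees, self.feature_subsets)])
        return np.array([np.bincount(row.astype(int)).argmax() for row in votes.T])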
Breiman's paper describes a method of building a forest of uncorrelated trees using a CART-like procedure, combined with randomized node optimization and bagging. In addition, the paper combines several ingredients, some previously known and some novel, which form the basis of the modern practice of random forests, in particular the use of out-of-bag error as an estimate of the generalization error and the measurement of variable importance through permutation. The report also offers the first theoretical result for random forests in the form of a bound on the generalization error which depends on the strength of the trees in the forest and their correlation.

Decision trees are a popular method for various machine learning tasks. Tree learning 'comes closest to meeting the requirements for serving as an off-the-shelf procedure for data mining', say Hastie et al., 'because it is invariant under scaling and various other transformations of feature values, is robust to inclusion of irrelevant features, and produces inspectable models. However, they are seldom accurate'. In particular, trees that are grown very deep tend to learn highly irregular patterns: they overfit their training sets, i.e. they have low bias but very high variance. Random forests are a way of averaging multiple deep decision trees, trained on different parts of the same training set, with the goal of reducing the variance. This comes at the expense of a small increase in the bias and some loss of interpretability, but generally greatly boosts the performance of the final model.
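A rough illustration of this variance reduction, and of the out-of-bag error estimate mentioned above, can be obtained with scikit-learn, one widely used implementation (not the original code from Breiman's paper). The synthetic dataset and parameter values below are arbitrary choices for demonstration; the forest's held-out accuracy is typically higher than the single deep tree's, and the OOB score approximates it without a separate validation set.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# Synthetic classification problem, split into train and test sets.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One unpruned (deep) tree versus a forest of 300 such trees with OOB scoring.
single_tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
forest = RandomForestClassifier(n_estimators=300, oob_score=True,
                                random_state=0).fit(X_train, y_train)

print("single deep tree, test accuracy:", single_tree.score(X_test, y_test))
print("random forest,    test accuracy:", forest.score(X_test, y_test))
print("random forest,    OOB estimate :", forest.oob_score_)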

[ "Machine learning", "Data mining", "Artificial intelligence", "Pattern recognition", "regression forest", "extreme gradient boosting", "rotation forest", "Platt scaling", "gradient boosting decision tree" ]