9.3 Bagging

Bootstrap aggregation, or bagging, is a general-purpose procedure for reducing the variance of a statistical learning method. The algorithm constructs B regression trees on B bootstrapped training sets and averages the resulting predictions. The trees are grown deep and are not pruned, so each individual tree has high variance but low bias; averaging the B trees reduces the variance. For classification trees, bagging takes the “majority vote” of the B predictions. Choose B large enough that the error has settled down; increasing B further does not cause overfitting.
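
As a concrete illustration, here is a minimal sketch of the bagging procedure for regression in Python, assuming scikit-learn's DecisionTreeRegressor as the base learner; the names `fit_bagged_trees` and `predict_bagged` are illustrative, not from the text.

```python
# A minimal bagging sketch for regression (illustrative, not the text's code).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

def fit_bagged_trees(X, y, B=200):
    """Fit B deep (unpruned) regression trees, each on a bootstrap sample."""
    n = len(y)
    trees = []
    for _ in range(B):
        idx = rng.integers(0, n, size=n)   # sample n observations with replacement
        tree = DecisionTreeRegressor()     # grown deep: sklearn does no pruning by default
        tree.fit(X[idx], y[idx])
        trees.append(tree)
    return trees

def predict_bagged(trees, X):
    """Average the B trees' predictions to reduce variance."""
    return np.mean([t.predict(X) for t in trees], axis=0)

# Usage on simulated data:
X = rng.normal(size=(200, 5))
y = X[:, 0] ** 2 + rng.normal(scale=0.5, size=200)
trees = fit_bagged_trees(X, y, B=100)
preds = predict_bagged(trees, X)
```

For classification, the averaging step would instead take the majority vote across the B trees' predicted classes.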

To estimate the test error, each observation is predicted using only the trees for which it was out-of-bag. On average, each bootstrapped training set contains about two-thirds of the observations, so a given observation is out-of-bag for roughly B/3 of the trees, yielding around B/3 predictions per observation. These predictions are averaged to form the out-of-bag prediction for that observation. Again, for classification trees, a majority vote is taken.
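
A sketch of how the out-of-bag error might be computed, under the same assumed numpy/scikit-learn setup as above; `oob_mse` is an illustrative name.

```python
# Out-of-bag (OOB) error sketch: each tree is evaluated only on the
# observations it never saw during fitting.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

def oob_mse(X, y, B=200):
    n = len(y)
    oob_sum = np.zeros(n)    # running sum of OOB predictions per observation
    oob_count = np.zeros(n)  # number of trees that left each observation out
    for _ in range(B):
        idx = rng.integers(0, n, size=n)
        in_bag = np.zeros(n, dtype=bool)
        in_bag[idx] = True
        tree = DecisionTreeRegressor().fit(X[idx], y[idx])
        oob = ~in_bag                        # ~one-third of observations, on average
        if oob.any():
            oob_sum[oob] += tree.predict(X[oob])
            oob_count[oob] += 1
    seen = oob_count > 0                     # for large B, effectively all observations
    oob_pred = oob_sum[seen] / oob_count[seen]
    return np.mean((y[seen] - oob_pred) ** 2)
```

scikit-learn's BaggingRegressor exposes the same idea directly via its oob_score=True option.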

The downside to bagging is that it improves accuracy at the expense of interpretability: there is no longer a single tree to display. Variable importance can still be summarized, however, for example by the total decrease in RSS (or Gini index) attributable to splits over each predictor, averaged over the B trees.

Bagged trees are a special case of random forests (one in which all p predictors are considered at every split), so see the next section for an example.