Bagging
Bagging is an ensemble method.
Bagging Reduces Variance¶
Remember the Bias-Variance decomposition:
$$
\underbrace{\mathbb{E}_{\mathbf{x},y,D}\big[(h_D(\mathbf{x}) - y)^2\big]}_{\text{Expected test error}} \;=\; \underbrace{\mathbb{E}_{\mathbf{x},D}\big[(h_D(\mathbf{x})-\bar{h}(\mathbf{x}))^2\big]}_{\text{Variance}} \;+\; \underbrace{\mathbb{E}_{\mathbf{x}}\big[(\bar{h}(\mathbf{x})-\bar{y}(\mathbf{x}))^2\big]}_{\text{Bias}^2} \;+\; \underbrace{\mathbb{E}_{\mathbf{x},y}\big[(\bar{y}(\mathbf{x})-y)^2\big]}_{\text{Noise}}
$$
where $\bar{h}=\mathbb{E}_{D\sim P^n}[h_D]$ denotes the expected classifier, averaged over training sets $D$ of size $n$ drawn from the data distribution $P$.
Our goal is to reduce the variance term $\mathbb{E}_{\mathbf{x},D}\big[(h_D(\mathbf{x})-\bar{h}(\mathbf{x}))^2\big]$. For this, we want $h_D \to \bar{h}$.
Weak law of large numbers¶
The weak law of large numbers says (roughly) that for i.i.d. random variables $x_1, x_2, \dots$ with mean $\bar{x}$, we have
$$
\frac{1}{m}\sum_{i=1}^m x_i \;\to\; \bar{x} \quad \text{as } m\to\infty.
$$
Apply this to classifiers: assume we have $m$ training sets $D_1, D_2, \dots, D_m$ drawn from $P^n$. Train a classifier on each one and average the results:
$$
\hat{h} = \frac{1}{m}\sum_{i=1}^m h_{D_i} \;\to\; \bar{h} \quad \text{as } m\to\infty.
$$
We refer to this average of multiple classifiers as an ensemble of classifiers. Good news: if $\hat{h}\to\bar{h}$, the variance component of the error must also vanish, i.e. $\mathbb{E}\big[(\hat{h}(\mathbf{x})-\bar{h}(\mathbf{x}))^2\big] \to 0$. However, the problem is that we don't have $m$ data sets $D_1,\dots,D_m$. We only have a single $D$.
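As a quick sanity check, here is a toy simulation (a hypothetical setup, not from the notes): we pretend we can draw fresh training sets, treat each trained predictor as the expected predictor plus noise, and watch the deviation of the ensemble average from $\bar{h}$ shrink as $m$ grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in: the "expected" predictor h-bar is sin(x); each freshly
# trained h_{D_i} deviates from it by random noise (hypothetical setup).
x = np.linspace(0, 3, 50)
h_bar = np.sin(x)

def train_on_fresh_dataset():
    """Stand-in for training h_{D_i} on a fresh data set D_i ~ P^n."""
    return h_bar + rng.normal(scale=0.5, size=x.shape)

for m in [1, 10, 100, 1000]:
    h_hat = np.mean([train_on_fresh_dataset() for _ in range(m)], axis=0)
    print(f"m={m:4d}  mean squared deviation from h-bar: "
          f"{np.mean((h_hat - h_bar) ** 2):.4f}")
```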
Solution: Bagging (Bootstrap Aggregating)¶
We need to sample a total of $m$ data sets $D_1,\dots,D_m$, each of size $n$.
To do this, we simulate drawing from $P$ by drawing uniformly with replacement from the set $D$.
Mathematically, let $Q(X,Y\mid D)$ be a probability distribution that picks a training sample $(\mathbf{x}_i,y_i)$ from $D$ uniformly at random, i.e. $Q\big((\mathbf{x}_i,y_i)\mid D\big)=\frac{1}{n}$ for all $(\mathbf{x}_i,y_i)\in D$, with $n=|D|$. We sample the set $D_i\sim Q^n$, with $|D_i|=n$, so $D_i$ is picked with replacement from the distribution $Q(\cdot\mid D)$.
Notice that we cannot use the W.L.L.N. here, because the data sets $D_i$ are not drawn i.i.d. from the original distribution $P^n$ that $D$ comes from; they are only drawn i.i.d. from the distribution $Q^n$ (which is conditioned on $D$). However, in practice bagging still reduces variance very effectively.
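A minimal sketch of the sampling step (toy data and illustrative variable names, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data set D with n points (hypothetical features/labels).
n, d = 100, 5
X = rng.normal(size=(n, d))
y = rng.integers(0, 2, size=n)

# One bootstrap sample D_i ~ Q^n: draw n indices uniformly with replacement.
idx = rng.integers(0, n, size=n)
X_i, y_i = X[idx], y[idx]

# On average only about 63% of the points of D appear in a given D_i.
print(f"fraction of distinct points of D in D_i: {len(np.unique(idx)) / n:.2f}")
```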
Analysis¶
Although we cannot prove that the new samples are i.i.d. draws from $P$, we can show that each individual sample is still drawn from the original distribution, namely $Q(X=x, Y=y) = P(X=x, Y=y)$ once we marginalize over the random draw of $D$. A proof can be found in the original lecture notes.
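As a quick sketch of the argument (shown here only for the marginal over $X$ in the discrete case; the full version is in the original notes):
$$
Q(X=x) \;=\; \mathbb{E}_{D\sim P^n}\!\left[\frac{1}{n}\sum_{i=1}^n \mathbf{1}\big[\mathbf{x}_i = x\big]\right] \;=\; \frac{1}{n}\sum_{i=1}^n \Pr_{\mathbf{x}_i\sim P}\big[\mathbf{x}_i = x\big] \;=\; P(X=x).
$$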
Bagging summarized¶
- Sample $m$ data sets $D_1,\dots,D_m$ from $D$ with replacement.
- For each $D_j$ train a classifier $h_j(\cdot)$.
- The final classifier is $h(\mathbf{x})=\frac{1}{m}\sum_{j=1}^m h_j(\mathbf{x})$.
In practice a larger $m$ results in a better ensemble, but at some point you will obtain diminishing returns. Note that setting $m$ unnecessarily high will only slow down your classifier, but it will not increase its error.
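A minimal sketch of these three steps, assuming a regression setting and using scikit-learn decision trees as the base learner (the notes do not prescribe a specific base classifier; function and variable names are illustrative):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def bagged_predict(X_train, y_train, X_test, m=100, seed=0):
    """Steps 1-3 of bagging: train m base learners on bootstrap samples
    and average their predictions. Also returns the spread across the
    ensemble, which can serve as an uncertainty estimate."""
    rng = np.random.default_rng(seed)
    n = len(X_train)
    preds = []
    for _ in range(m):
        idx = rng.integers(0, n, size=n)                               # step 1: D_j ~ Q^n
        h_j = DecisionTreeRegressor().fit(X_train[idx], y_train[idx])  # step 2: train h_j
        preds.append(h_j.predict(X_test))
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.std(axis=0)                       # step 3 (+ spread)
```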
Advantages of Bagging¶
- Easy to implement.
- Reduces variance, so it has a strong beneficial effect on high-variance classifiers.
- Because the prediction is an average of many classifiers, we obtain a mean score and a variance. The variance can be interpreted as the uncertainty of the prediction. Especially in regression tasks, such uncertainties are otherwise hard to obtain. For example, imagine the prediction of a house price is $300,000. If a buyer wants to decide how much to offer, it would be very valuable to know whether this prediction has standard deviation +-$10,000 or +-$50,000.
- Bagging provides an unbiased estimate of the test error, the out-of-bag error. The idea is that for each training point there are very likely to be some data sets $D_k$ that did not pick this point. If we average the classifiers of all such data sets, we obtain a classifier that was never trained on $(\mathbf{x}_i,y_i)$. To these classifiers, this point is equivalent to a test sample. If we compute the error on all such points, we obtain an estimate of the true test error.
More formally, for each training point $(\mathbf{x}_i,y_i)\in D$ let $S_i=\{k \mid (\mathbf{x}_i,y_i)\notin D_k\}$ - in other words, $S_i$ is the set of all the training sets $D_k$ that do not contain $(\mathbf{x}_i,y_i)$. Let the averaged classifier over all these data sets be
$$
\tilde{h}_i(\mathbf{x})=\frac{1}{|S_i|}\sum_{k\in S_i} h_k(\mathbf{x}).
$$
The out-of-bag error is the average error/loss that all these classifiers yield,
$$
\epsilon_{\mathrm{OOB}}=\frac{1}{|D|}\sum_{(\mathbf{x}_i,y_i)\in D} \ell\big(\tilde{h}_i(\mathbf{x}_i),\, y_i\big),
$$
where $\ell(\cdot,\cdot)$ is the loss function (e.g. squared loss or zero-one loss).
This is an estimate of the test error, because for each training point we used only the subset of classifiers that never saw that training point during training. If $m$ is sufficiently large, the fact that we leave out some classifiers has no significant effect, and the estimate is therefore pretty reliable.
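A sketch of the out-of-bag computation for a classification setting with zero-one loss, using a majority vote as the "average" over $S_i$ (function and variable names are illustrative):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bagging_with_oob_error(X, y, m=100, seed=0):
    """Train a bagged ensemble and return it with its out-of-bag zero-one error."""
    rng = np.random.default_rng(seed)
    n = len(X)
    classifiers, in_bag = [], []
    for _ in range(m):
        idx = rng.integers(0, n, size=n)                  # bootstrap sample D_k
        classifiers.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
        picked = np.zeros(n, dtype=bool)
        picked[idx] = True
        in_bag.append(picked)

    errors = []
    for i in range(n):
        # S_i: all classifiers whose training set does not contain (x_i, y_i)
        S_i = [h for h, picked in zip(classifiers, in_bag) if not picked[i]]
        if not S_i:
            continue                                      # point appeared in every D_k
        votes = [h.predict(X[i:i + 1])[0] for h in S_i]
        y_hat = max(set(votes), key=votes.count)          # majority vote ~ h-tilde_i
        errors.append(y_hat != y[i])
    return classifiers, float(np.mean(errors))
```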
Random Forest¶
A Random Forest is essentially a set of bagged decision trees with a slightly modified splitting criterion: at each split, don't consider all features.
The algorithm works as follows:
- Sample $m$ data sets $D_1,\dots,D_m$ from $D$ with replacement.
- For each $D_j$ train a full decision tree $h_j(\cdot)$ (max-depth $=\infty$) with one small modification: before each split, randomly subsample $k\le d$ features (without replacement) and only consider these features for your split. This further increases the variance of the individual trees, but it also decorrelates them, which is what makes averaging them effective. A good choice for $k$ is $k=\sqrt{d}$.
- The final classifier is $h(\mathbf{x})=\frac{1}{m}\sum_{j=1}^m h_j(\mathbf{x})$.
The randomness in random forest comes from the random sampling in step 1 and the random feature selection in step 2.
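A compact sketch of these three steps, assuming integer class labels and leaning on scikit-learn's `DecisionTreeClassifier`, whose `max_features="sqrt"` option performs the per-split feature subsampling:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def random_forest_fit(X, y, m=100, seed=0):
    """Steps 1-2: bagged full-depth trees with k = sqrt(d) features per split."""
    rng = np.random.default_rng(seed)
    n = len(X)
    trees = []
    for _ in range(m):
        idx = rng.integers(0, n, size=n)                    # step 1: bootstrap sample
        tree = DecisionTreeClassifier(max_features="sqrt")  # step 2: k = sqrt(d) per split
        trees.append(tree.fit(X[idx], y[idx]))
    return trees

def random_forest_predict(trees, X_test):
    """Step 3: majority vote over the ensemble (assumes integer labels)."""
    votes = np.stack([t.predict(X_test) for t in trees]).astype(int)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```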
The Random Forest is one of the best and easiest-to-use models. There are two reasons for this:
- It only has two hyper-parameters, $m$ and $k$, and it is extremely insensitive to both of these. You can set $m$ as large as you can afford, and a popular choice for $k$ is $k=\sqrt{d}$ (where $d$ is the number of features).
- Decision trees do not require a lot of preprocessing, so the features can be of different scale, magnitude, or slope. This is highly advantageous with heterogeneous data, for example with features like blood pressure, age, gender, ... each of which is recorded in completely different units.
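For everyday use you rarely need to implement this yourself. Here is a sketch with scikit-learn's `RandomForestClassifier` on deliberately heterogeneous, unscaled toy features (the data and label rule below are made up for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Toy heterogeneous features on very different scales: blood pressure,
# age, gender - no rescaling or other preprocessing applied.
X = np.column_stack([rng.normal(120, 15, 500),
                     rng.normal(40, 12, 500),
                     rng.integers(0, 2, 500)])
y = (X[:, 0] + 2 * X[:, 1] > 200).astype(int)   # made-up label rule

# The two hyper-parameters: m -> n_estimators, k -> max_features ("sqrt" = sqrt(d)).
rf = RandomForestClassifier(n_estimators=500, max_features="sqrt", oob_score=True)
rf.fit(X, y)
print("out-of-bag accuracy:", rf.oob_score_)
```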
Useful variants of Random Forests:
- Split each training set $D_k$ into two disjoint partitions, $D_k = D_k^A \cup D_k^B$ with $D_k^A \cap D_k^B = \emptyset$. Build the tree on $D_k^A$ and estimate the leaf labels on $D_k^B$. You must stop splitting if a leaf has only a single point of $D_k^B$ in it. This has the advantage that each tree, and also the RF classifier, becomes consistent.
- Do not grow each tree to its full depth; instead, prune it based on the left-out samples. This can further improve your bias/variance trade-off. (An easy way to prune a decision tree is to start from the bottom, tentatively remove a split, and check whether this increases the error too much. If it doesn't, remove that split for good.)