An Efficient Method to Estimate Bagging's Generalization Error
In bagging [Bre94a] one uses bootstrap replicates of the training set [Efr79, ET93] to try to improve a learning algorithm's performance. The computational requirements for estimating the resultant generalization error on a test set by means of cross-validation are often prohibitive; for leave-one-out cross-validation one needs to train the underlying algorithm on the order of $m\nu$ times, where $m$ is the size of the training set and $\nu$ is the number of replicates. This paper presents several techniques for exploiting the bias-variance decomposition [GBD92, Wol96] to estimate the generalization error of a bagged learning algorithm without invoking yet more training of the underlying learning algorithm. The best of our estimators exploits stacking [Wol92]. In a set of experiments reported here, it was found to be more accurate than both the alternative cross-validation-based estimator of the bagged algorithm's error and the cross-validation-based estimator of the underlying algorithm's error. This improvement was particularly pronounced for small test sets. This suggests a novel justification for using bagging---improved estimation of generalization error.

Key words: machine learning, regression, bootstrap, bagging
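To make the computational point concrete, below is a minimal sketch (not from the paper) of bagging for regression together with a leave-one-out cross-validation estimate of its error. The base learner (a scikit-learn decision tree), the synthetic data, and the choice of $\nu = 25$ replicates are illustrative assumptions; the sketch only shows why the naive estimate costs on the order of $m\nu$ trainings of the underlying algorithm.

```python
# Minimal sketch of bagging (regression) and leave-one-out CV of the bagged predictor.
# Data, base learner, and nu are illustrative assumptions, not the paper's setup.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 1))                 # m = 50 training points
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=50)  # noisy regression target
nu = 25                                              # number of bootstrap replicates

def bagged_predict(X_train, y_train, X_test, nu, rng):
    """Train the base learner on nu bootstrap replicates and average their predictions."""
    m = len(X_train)
    preds = np.zeros((nu, len(X_test)))
    for b in range(nu):
        idx = rng.integers(0, m, size=m)  # bootstrap sample, drawn with replacement
        model = DecisionTreeRegressor(max_depth=3).fit(X_train[idx], y_train[idx])
        preds[b] = model.predict(X_test)
    return preds.mean(axis=0)

# Leave-one-out CV of the bagged predictor: each of the m held-out points
# requires retraining nu base learners, i.e. on the order of m * nu trainings.
loo_sq_errors = []
for i in range(len(X)):
    mask = np.ones(len(X), dtype=bool)
    mask[i] = False
    y_hat = bagged_predict(X[mask], y[mask], X[i:i + 1], nu, rng)
    loo_sq_errors.append((y_hat[0] - y[i]) ** 2)

print("LOO estimate of bagged MSE:", np.mean(loo_sq_errors))
print("base-learner trainings performed:", len(X) * nu)
```

The paper's estimators avoid this cost by reusing the quantities already computed during bagging (via the bias-variance decomposition and stacking) rather than retraining the base learner.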
Year of publication: | 1996-06 |
Authors: | Wolpert, David H.; Macready, William G. |
Institutions: | Santa Fe Institute |
Similar items by person
- Self-Dissimilarity: An Empirical Measure of Complexity. Wolpert, David H. (1997)
- On 2-Armed Gaussian Bandits and Optimization. Macready, William G. (1996)
- No Free Lunch Theorems for Search. Wolpert, David H. (1995)