An empirical bias--variance analysis of DECORATE ensemble method at different training sample sizes
DECORATE (Diverse Ensemble Creation by Oppositional Relabeling of Artificial Training Examples) is a classifier combination technique that constructs a set of diverse base classifiers with the help of additional, artificially generated training instances. The predictions of the base classifiers are then combined into a single prediction by the mean combination rule. To gain more insight into its effectiveness and advantages, this paper conducts a large-scale experiment that performs a bias--variance analysis of DECORATE, together with several other widely used ensemble methods (bagging, AdaBoost, and random forest), at different training sample sizes. The experimental results support the following conclusions. For small training sets, DECORATE has a clear advantage over its rivals, and its success is attributed to achieving a larger bias reduction than the other algorithms. As the amount of training data increases, AdaBoost benefits most: its bias reduction gradually becomes significant while its variance reduction remains moderate, so AdaBoost performs best with large training samples. Moreover, random forest is consistently second best regardless of training set size; it mainly decreases variance while maintaining low bias. Bagging occupies an intermediate position, since it primarily reduces variance.
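To illustrate the kind of bias--variance decomposition for 0-1 loss that such a study rests on, the sketch below estimates the two terms for a single classifier by retraining it on many resampled training sets. This is not the paper's code: the one-dimensional toy data, the nearest-mean base learner, and all sample-size choices are assumptions made purely for illustration. Bias at a test point is taken as 1 when the majority ("main") prediction over all retrained models misses the true label, and variance as the fraction of individual predictions that disagree with that main prediction, in the spirit of Domingos-style decompositions.

```python
import random
from collections import Counter

random.seed(0)

def sample_point(label):
    # Toy 1-D two-class data: class 0 centered at -1, class 1 at +1 (assumed setup)
    center = -1.0 if label == 0 else 1.0
    return random.gauss(center, 1.0)

def train_nearest_mean(train):
    # Simple base learner: store each class mean, predict the class
    # whose mean is nearer to the query point
    means = {}
    for c in (0, 1):
        xs = [x for x, y in train if y == c]
        means[c] = sum(xs) / len(xs)
    return lambda x: min(means, key=lambda c: abs(x - means[c]))

# A fixed, balanced test set
test = [(sample_point(y), y) for y in [0, 1] * 50]

# Draw many independent training sets and record every model's
# prediction at every test point
preds = [[] for _ in test]
for _ in range(200):
    train = [(sample_point(y), y) for y in [0, 1] * 10]
    clf = train_nearest_mean(train)
    for i, (x, _) in enumerate(test):
        preds[i].append(clf(x))

# 0-1-loss decomposition: bias counts test points whose majority
# prediction is wrong; variance averages disagreement with the majority
bias = variance = 0.0
for (x, y), p in zip(test, preds):
    main = Counter(p).most_common(1)[0][0]
    bias += (main != y)
    variance += sum(q != main for q in p) / len(p)
bias /= len(test)
variance /= len(test)
print(f"bias={bias:.3f}  variance={variance:.3f}")
```

Swapping the base learner for an ensemble (bagged, boosted, or DECORATE-style) and varying the training-set size in the inner loop is, in outline, how sample-size effects on the two error components can be compared.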
Year of publication: 2012
Authors: Zhang, Chun-Xia; Wang, Guan-Wei; Zhang, Jiang-She
Published in: Journal of Applied Statistics. - Taylor & Francis Journals, ISSN 0266-4763. - Vol. 39.2012, 4, p. 829-850
Publisher: Taylor & Francis Journals
Similar items by person
- RandGA: injecting randomness into parallel genetic algorithm for variable selection / Zhang, Chun-Xia (2015)
- A local boosting algorithm for solving classification problems / Zhang, Chun-Xia (2008)
- Using Boosting to prune Double-Bagging ensembles / Zhang, Chun-Xia (2009)
- More ...