Use of Bad Training Data for Better Predictions
We show how randomly scrambling the output classes of various fractions of the training data may be used to improve the predictive accuracy of a classification algorithm. We present a method for calculating the "noise sensitivity signature" of a learning algorithm, based on this scrambling of output classes. The signature can be used to indicate a good match between the complexity of the classifier and the complexity of the data. Using noise sensitivity signatures is distinctly different from other schemes for avoiding overtraining, such as cross-validation, which uses only part of the training data, or various penalty functions, which are not data-adaptive. Noise sensitivity signature methods use all of the training data and are manifestly data-adaptive and non-parametric. They are well suited to situations with limited training data.
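The scrambling procedure the abstract describes can be sketched in a few lines. The following is a minimal illustration, not the authors' exact algorithm: the choice of classifier, the grid of noise fractions, the number of repetitions, and the use of agreement with the clean labels as the sensitivity measure are all assumptions for the sketch, with NumPy and scikit-learn standing in for a generic learner.

```python
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeClassifier

def noise_sensitivity_signature(model, X, y, noise_fracs, n_repeats=10, seed=0):
    """Sketch: retrain `model` on copies of (X, y) whose output classes have
    been randomly scrambled for a given fraction of the examples, and record
    how well the retrained model still reproduces the clean labels."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    signature = []
    for frac in noise_fracs:
        scores = []
        for _ in range(n_repeats):
            y_noisy = y.copy()
            # pick a random fraction of the examples and scramble their classes
            idx = rng.choice(len(y), size=int(frac * len(y)), replace=False)
            y_noisy[idx] = rng.choice(classes, size=len(idx))
            fitted = clone(model).fit(X, y_noisy)
            scores.append(fitted.score(X, y))  # agreement with clean labels
        signature.append(float(np.mean(scores)))
    return signature

# Hypothetical usage: compare the signatures of two model complexities.
# X, y = ...  # training data as NumPy arrays
# fracs = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]
# shallow = noise_sensitivity_signature(DecisionTreeClassifier(max_depth=2), X, y, fracs)
# deep = noise_sensitivity_signature(DecisionTreeClassifier(max_depth=None), X, y, fracs)
```

The intuition, per the abstract, is that the shape of this curve across noise levels reflects the match between classifier complexity and data complexity; note that the sketch uses all of the training data at every noise level, in contrast to a cross-validation split.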
Year of publication: 1995-02
Authors: Grossman, Tal ; Lapedes, Alan
Institutions: Santa Fe Institute
Similar items by person
- Neural Net Representations of Empirical Protein Potentials. Grossman, Tal (1996)
- Noise Sensitivity Signatures for Model Selection. Grossman, Tal (1995)
- Off-Training-Set Error for the Gibbs and the Bayes Optimal Generalizers. Grossman, Tal (1995)