Identifying predictive hubs to condense the training set of <InlineEquation ID="IEq1"> <EquationSource Format="TEX">$$k$$</EquationSource> </InlineEquation>-nearest neighbour classifiers
The <InlineEquation ID="IEq3"> <EquationSource Format="TEX">$$k$$</EquationSource> </InlineEquation>-Nearest Neighbour classifier is widely used because of its simplicity and its freedom from model assumptions. Although the approach is known to achieve near-optimal classification performance in the limit of infinitely many samples, selecting the most decisive data points can considerably improve classification accuracy in real settings with a limited number of samples. At the same time, selecting a subset of representative training samples reduces the required storage and computational resources. We devised a new approach that selects a representative training subset by means of an evolutionary optimization procedure. The method chooses those training samples that strongly influence the correct prediction of other training samples, in particular samples with uncertain labels. The performance of the algorithm is evaluated on different data sets, and we additionally provide graphical examples of the selection procedure. Copyright Springer-Verlag Berlin Heidelberg 2014
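The general idea of condensing a k-NN training set by evolutionary search can be illustrated with a minimal sketch. This is not the authors' algorithm: the fitness function below (training accuracy of the candidate subset minus a small size penalty), the (1+1)-style bit-flip mutation, and all names (`knn_predict`, `fitness`, `evolve_subset`) are illustrative assumptions, not taken from the paper.

```python
import random

def knn_predict(train, labels, subset, x, k=1):
    """Predict the label of x by majority vote among the k nearest
    points of `subset` (squared Euclidean distance). Illustrative only."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(train[i], x)), labels[i])
        for i in subset
    )
    votes = [lab for _, lab in dists[:k]]
    return max(set(votes), key=votes.count)

def fitness(train, labels, subset, k=1):
    """Assumed fitness: accuracy of the condensed subset on the full
    training set, minus a small penalty for subset size."""
    if not subset:
        return 0.0
    correct = sum(
        knn_predict(train, labels, subset, train[i], k) == labels[i]
        for i in range(len(train))
    )
    return correct / len(train) - 0.01 * len(subset) / len(train)

def evolve_subset(train, labels, k=1, generations=50, seed=0):
    """Toy (1+1)-style evolutionary search over bit masks marking which
    training samples are kept as prototypes (a sketch, not the paper's
    optimization procedure)."""
    rng = random.Random(seed)
    n = len(train)
    # start from a random mask that keeps roughly half of the samples
    best = [i for i in range(n) if rng.random() < 0.5] or [0]
    best_fit = fitness(train, labels, best, k)
    for _ in range(generations):
        # mutation: flip each sample's membership with probability 1/n
        mask = set(best)
        for i in range(n):
            if rng.random() < 1.0 / n:
                mask ^= {i}
        cand = sorted(mask) or [rng.randrange(n)]
        cand_fit = fitness(train, labels, cand, k)
        if cand_fit >= best_fit:  # accept equal-or-better offspring
            best, best_fit = cand, cand_fit
    return best
```

On two well-separated clusters, the search typically keeps only a few prototypes per class while preserving the 1-NN decision on all training points; the size penalty is what drives the condensation.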
Year of publication: 2014
Authors: Lausser, Ludwig; Müssel, Christoph; Melkozerov, Alexander; Kestler, Hans
Published in: Computational Statistics. - Springer. - Vol. 29.2014, 1, p. 81-95
Publisher: Springer
Similar items by person
- Multi-Objective Parameter Selection for Classifiers / Müssel, Christoph (2012)
- On the fusion of threshold classifiers for categorization and dimensionality reduction / Kestler, Hans (2011)
- Multi-objective selection for collecting cluster alternatives / Kraus, Johann (2011)