Showing 1 - 6 of 6
We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function and the performance of the estimate...
Persistent link: https://www.econbiz.de/10014142855
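The record above concerns penalized empirical loss minimization; the following is a minimal sketch of the standard penalized criterion, with notation (loss $\ell$, models $\mathcal{F}_k$, penalty $\mathrm{pen}_n$) assumed rather than taken from the truncated abstract.
% \hat{f}_k is the empirical loss minimizer within model \mathcal{F}_k, and
% \mathrm{pen}_n(k) is a data-based complexity penalty, e.g. built from an error estimate for \mathcal{F}_k.
\[
\hat{L}_n(f) = \frac{1}{n}\sum_{i=1}^{n} \ell\bigl(f(X_i), Y_i\bigr),
\qquad
\hat{k} = \arg\min_{k \ge 1}\Bigl(\hat{L}_n(\hat{f}_k) + \mathrm{pen}_n(k)\Bigr).
\]
The selected model is then represented by $\hat{f}_{\hat{k}}$; converting an error estimate into $\mathrm{pen}_n(k)$ is the connection the abstract points to.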
Persistent link: https://www.econbiz.de/10012000860
We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function and the performance of the estimate...
Persistent link: https://www.econbiz.de/10009438376
The classical binary classification problem is investigated when it is known in advance that the posterior probability function (or regression function) belongs to some class of functions. We introduce and analyze a method which effectively exploits this knowledge. The method is based on...
Persistent link: https://www.econbiz.de/10005572603
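As context for the setup in the record above (standard notation assumed, not taken from the truncated abstract): with regression function $\eta(x) = \mathrm{P}(Y = 1 \mid X = x)$, the Bayes-optimal classifier thresholds $\eta$ at $1/2$, and the excess risk of any classifier $g$ satisfies
\[
g^{*}(x) = \mathbb{1}\{\eta(x) \ge 1/2\},
\qquad
L(g) - L(g^{*}) = \mathbb{E}\bigl[\,\lvert 2\eta(X) - 1\rvert\,\mathbb{1}\{g(X) \ne g^{*}(X)\}\,\bigr].
\]
Knowing the class to which $\eta$ belongs constrains $g^{*}$, which is the prior knowledge the abstract says the method exploits.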
We obtain minimax lower and upper bounds for the expected distortion redundancy of empirically designed vector quantizers. We show that the mean squared distortion of a vector quantizer designed from $n$ i.i.d. data points using any design algorithm is at least $\Omega(n^{-1/2})$ away from the...
Persistent link: https://www.econbiz.de/10005772321
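The quantity bounded in the record above is the distortion redundancy; a short statement with assumed notation (source distribution $\mu$, $k$-point quantizer $q_n$ designed from the sample, optimal distortion $D^{*}_{k}(\mu)$):
\[
D(\mu, q_n) = \mathbb{E}\,\lVert X - q_n(X)\rVert^{2},
\qquad
\sup_{\mu}\Bigl(\mathbb{E}\,D(\mu, q_n) - D^{*}_{k}(\mu)\Bigr) = \Omega\bigl(n^{-1/2}\bigr),
\]
where the expectation is over the training sample $X_1,\dots,X_n$ drawn i.i.d. from $\mu$ and the supremum runs over a suitable class of source distributions (details assumed, not stated in the truncated abstract).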
Minimax lower bounds for concept learning state, for example, that for each sample size $n$ and learning rule $g_n$, there exists a distribution of the observation $X$ and a concept $C$ to be learnt such that the expected error of $g_n$ is at least a constant times $V/n$, where $V$ is the VC...
Persistent link: https://www.econbiz.de/10005772365
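The lower bound described in the record above has the following standard form (constant $c$ and notation assumed rather than taken from the truncated abstract):
\[
\sup_{P_X,\; C \in \mathcal{C}} \mathbb{E}\bigl[L(g_n)\bigr] \;\ge\; c\,\frac{V}{n},
\qquad
L(g_n) = P\bigl\{g_n(X) \ne \mathbb{1}_{C}(X)\bigr\},
\]
where the supremum is over distributions of $X$ and concepts $C$ in a class $\mathcal{C}$ of VC dimension $V$, and the expectation is over a training sample labeled consistently with $C$.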