Tensor product space ANOVA models in multivariate function estimation
To deal with the curse of dimensionality in high-dimensional nonparametric problems, we consider tensor product space ANOVA (TPS-ANOVA) models, which extend the popular additive models and can capture interactions of any order. The multivariate function is given an ANOVA decomposition, i.e., it is expressed as a constant, plus a sum of functions of one variable (main effects), plus a sum of functions of two variables (two-factor interactions), and so on. The component functions are assumed to lie in a tensor product space. The main result is that, in a variety of general nonparametric problems (including regression, generalized regression, density estimation, hazard regression, and white noise), under general conditions, the rate of convergence of the penalized likelihood estimator in the TPS-ANOVA model is $O\left([n(\log n)^{1-r}]^{-2m/(2m+1)}\right)$ when the smoothing parameter is appropriately chosen. Here $r$ is the highest order of interaction considered in the model, $m$ is a measure of the smoothness of the unknown multivariate function, and $n$ is the sample size. Notice that this rate is very close to the optimal rate for one-dimensional nonparametric models; thus, in a sense, the curse of dimensionality is overcome by the TPS-ANOVA models. In the white noise context, the optimal rate for the TPS-ANOVA model is shown to be $[n(\log n)^{1-r}]^{-2m/(2m+1)}$.
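As a sketch, the ANOVA decomposition referred to above can be written explicitly for a function $f$ of $d$ variables, truncated at interaction order $r$ (the notation $c$, $f_j$, $f_{jk}$ is illustrative, not taken from the source):

```latex
f(x_1,\dots,x_d)
  = c
  + \sum_{j=1}^{d} f_j(x_j)
  + \sum_{j<k} f_{jk}(x_j,x_k)
  + \cdots
  + \sum_{j_1<\cdots<j_r} f_{j_1\cdots j_r}(x_{j_1},\dots,x_{j_r})
```

Here $c$ is the constant term, the $f_j$ are the main effects, and the higher-order terms are the interactions; side conditions (for example, each component integrating to zero in each of its arguments) are typically imposed so that the decomposition is unique.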
Year of publication: | 1998-01-01
---|---
Authors: | Lin, Yi
Publisher: | ScholarlyCommons
Availability: | freely available
Similar items by person
- Grey game theory and its applications in economic decision-making (Fang, Zhigeng, 2010)
- Random forests and adaptive nearest neighbors (Forrest, Jeffrey Yi-Lin, 2006)
- Hsiao, Yu-cheng (2010)
- More ...