Tensor product space ANOVA models in multivariate function estimation
To deal with the curse of dimensionality in high-dimensional nonparametric problems, we consider using tensor product space ANOVA models, which extend the popular additive models and are able to capture interactions of any order. The multivariate function is given an ANOVA decomposition; i.e., it is expressed as a constant plus the sum of functions of one variable (main effects), plus the sum of functions of two variables (two-factor interactions), and so on. We assume the component functions to be in a tensor product space. The main result is that, in a variety of general nonparametric problems (including regression, generalized regression, density estimation, hazard regression, and white noise), under general conditions, the rate of convergence for the penalized likelihood estimator in the TPSANOVA model is $O\big(\lbrack n(\log n)^{1-r}\rbrack^{-2m/(2m+1)}\big)$ when the smoothing parameter is appropriately chosen. Here $r$ is the highest order of interaction considered in the model, $m$ is a measure of the smoothness of the unknown multivariate function, and $n$ is the sample size. Notice that this rate is very close to the optimal rate for one-dimensional nonparametric models; hence the optimal rate for the tensor product space ANOVA models is very close to the optimal rate for one-dimensional models, and in a sense the curse of dimensionality is overcome by the tensor product space ANOVA models. In the white noise context, the optimal rate for the TPSANOVA model is shown to be $\lbrack n(\log n)^{1-r}\rbrack^{-2m/(2m+1)}$.
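The ANOVA decomposition described in the abstract can be sketched as follows; the component names $C$, $f_j$, $f_{jk}$ are illustrative notation, not taken from the record:

```latex
% ANOVA decomposition of a multivariate function f on a product
% domain, truncated at interaction order r (here r = 2 terms shown;
% symbols C, f_j, f_{jk} are illustrative, not from the record):
f(x_1,\dots,x_d) \;=\; C
  \;+\; \sum_{j=1}^{d} f_j(x_j)              % main effects
  \;+\; \sum_{1 \le j < k \le d} f_{jk}(x_j, x_k)  % two-factor interactions
  \;+\; \cdots \quad \text{(up to } r\text{-factor interactions)}
```

The decomposition is made unique by the usual side conditions (each component averages to zero in each of its arguments), and each component function is assumed to lie in a tensor product of one-dimensional smoothness spaces.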
Year of publication: 1998


Authors:  Lin, Yi 
Publisher: ScholarlyCommons
Similar items by person

Efficient Empirical Bayes Variable Selection and Estimation in Linear Models
Yuan, Ming, (2005)

Racetrack betting and consensus of subjective probabilities
Brown, Lawrence D., (2003)

On the nonnegative garrotte estimator
Yuan, Ming, (2007)