On the sample variance of linear statistics derived from mixing sequences
In a sample X_1, ..., X_N, independently and identically distributed with distribution F, a linear statistic T(X_1, ..., X_N) = (1/N) Σ_{i=1}^{N} T_i can be defined, where T_i = φ(X_i) and φ(·) is some function. For this statistic, a 'natural' nonparametric variance estimator is the sample variance (1/N) Σ_{i=1}^{N} (T_i − T̄)², the denominator N−1 often being used instead of N. However, if the sample is stationary but weakly dependent, this estimator no longer works, since it fails to take into account the covariances among the T_i's. Moreover, in many time series problems the objective is to estimate a parameter of the M-dimensional marginal, not just of the one-dimensional marginal distribution. Thus, the linear statistic in this case must be of the form T(X_1, ..., X_N) = (1/(N−M+1)) Σ_{i=1}^{N−M+1} T_i, where T_i = φ_M(X_i, ..., X_{i+M−1}) and φ_M(·) is now a function of a whole block of M observations. In the present report, we formulate the nonparametric variance estimator corresponding to a sample variance of the linear statistic T(X_1, ..., X_N). The proposed estimator depends on a design parameter b that tends to infinity as the sample size N increases. The optimal rate at which b should tend to infinity, i.e. the rate that minimizes the asymptotic order of the mean squared error of estimation, is derived. Special emphasis is given to the case where M tends to infinity together with N, in which case a general version of the linear statistic is introduced that estimates a parameter of the whole (infinite-dimensional) joint distribution of the sequence.
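The abstract does not reproduce the estimator's exact formula, so the following Python sketch is only an illustration of the general idea, under the assumption that the estimator is of the overlapping-block (batch-means) type with block length b. The function names, the AR(1) example data, and the particular scaling are illustrative choices, not the paper's definitions.

```python
import numpy as np

def linear_statistic(x, phi_M, M):
    """Return (T, T_i values) where T = (1/(N-M+1)) * sum_i phi_M(X_i, ..., X_{i+M-1})."""
    x = np.asarray(x)
    N = len(x)
    T_i = np.array([phi_M(x[i:i + M]) for i in range(N - M + 1)])
    return T_i.mean(), T_i

def block_variance_estimator(T_i, b):
    """
    Overlapping-block (batch-means style) estimate of Var(T), where T is the mean
    of the Q = N - M + 1 values T_i.  Block means over b consecutive T_i pick up
    the short-range covariances that the naive sample variance ignores.
    """
    T_i = np.asarray(T_i)
    Q = len(T_i)
    block_means = np.array([T_i[j:j + b].mean() for j in range(Q - b + 1)])
    # b * (block mean - overall mean)^2 estimates the long-run variance
    # sigma_inf^2 = lim Q * Var(T); dividing by Q converts it back to Var(T).
    sigma_inf_sq = b * np.mean((block_means - T_i.mean()) ** 2)
    return sigma_inf_sq / Q

# Illustration on a weakly dependent AR(1) series, with phi the identity (M = 1).
rng = np.random.default_rng(0)
N, rho = 2000, 0.6
e = rng.normal(size=N)
x = np.empty(N)
x[0] = e[0]
for t in range(1, N):
    x[t] = rho * x[t - 1] + e[t]

T, T_i = linear_statistic(x, lambda block: block[0], M=1)
naive = T_i.var(ddof=1) / N                    # ignores covariances among the T_i
blocked = block_variance_estimator(T_i, b=30)  # b must grow with N in the theory
print(T, naive, blocked)
```

On such a series the naive i.i.d.-style estimate understates the variance of T, while the blocked estimate also captures the covariance contribution; in the paper the design parameter b has to grow with N at a suitable rate, and the optimal rate is the one minimizing the asymptotic order of the mean squared error.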
Year of publication: 1993
Authors: Politis, Dimitris N.; Romano, Joseph P.
Published in: Stochastic Processes and their Applications. - Elsevier, ISSN 0304-4149. - Vol. 45.1993, 1, p. 155-167
Publisher: Elsevier
Similar items by person
- McMurry, Timothy L., (2012)
- Theory and Methods - On Subsampling Estimators With Unknown Rate of Convergence / Bertail, Patrice, (1999)
- Politis, Dimitris N., (1994)