Inference about predictive ability when there are many predictors
We enhance the theory of asymptotic inference about predictive ability by considering the case when the set of variables used to construct predictions is sizable. To this end, we consider an alternative asymptotic framework in which the number of predictors tends to infinity with the sample size, although more slowly. Depending on the situation, the asymptotic normal distribution of an average prediction criterion either gains additional variance beyond that in the few predictors case, or acquires a non-zero bias that has no analog in the few predictors case. By properly modifying conventional test statistics, it is possible to remove most size distortions when there are many predictors, and to improve test sizes even when there are few of them.
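As an illustrative sketch only (the notation is generic and not taken from the paper), the kind of limit result at issue can be written as
\[
\sqrt{n}\left(\bar f - \mathrm{E}\left[f_t\right]\right) \;\xrightarrow{d}\; \mathcal{N}\!\left(B,\; V_0 + V_1\right)
\qquad \text{as } n \to \infty,\ k \to \infty,\ k/n \to 0,
\]
where $\bar f$ denotes the average prediction criterion, $n$ the sample size, $k$ the number of predictors, $V_0$ the usual few-predictors asymptotic variance, $V_1 \ge 0$ an additional variance term, and $B$ a bias term that is absent when $k$ is fixed; depending on the situation, it is the extra variance $V_1$ or the bias $B$ that appears in the limit, and a modified test statistic recenters and rescales by estimates of these quantities.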