Can out-of-sample forecast comparisons help prevent overfitting?
This paper shows that out-of-sample forecast comparisons can help prevent data mining-induced overfitting. The basic results are drawn from simulations of a simple Monte Carlo design and a real data-based design similar to those used in some previous studies. In each simulation, a general-to-specific procedure is used to arrive at a model. If the selected specification includes any of the candidate explanatory variables, forecasts from the model are compared to forecasts from a benchmark model that is nested within the selected model. In particular, the competing forecasts are tested for equal MSE and encompassing. The simulations indicate that most of the post-sample tests are roughly correctly sized. Moreover, the tests have relatively good power, although some are consistently more powerful than others. The paper concludes with an application, modelling quarterly US inflation. Copyright © 2004 John Wiley & Sons, Ltd.
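The equal-MSE comparison the abstract refers to can be illustrated with a minimal Diebold-Mariano-style test on two series of forecast errors. This is a sketch, not the paper's exact procedure: the function name `equal_mse_test` and the simple t-based variance estimate are assumptions, valid for one-step forecasts with serially uncorrelated loss differentials (longer horizons would require a HAC variance estimator, and nested-model comparisons as in the paper call for adjusted critical values).

```python
import numpy as np
from scipy import stats

def equal_mse_test(e_benchmark, e_alternative):
    """Test the null of equal MSE between two forecasts.

    e_benchmark, e_alternative: arrays of forecast errors (actual - forecast).
    Returns the t-statistic and two-sided p-value; a positive statistic
    indicates the alternative forecast has the smaller squared error.
    """
    # Loss differential: squared benchmark error minus squared alternative error
    d = np.asarray(e_benchmark) ** 2 - np.asarray(e_alternative) ** 2
    n = d.size
    dbar = d.mean()
    # Simple variance estimate (assumes serially uncorrelated differentials,
    # as with 1-step-ahead forecasts; use a HAC estimator otherwise)
    se = d.std(ddof=1) / np.sqrt(n)
    t_stat = dbar / se
    p_value = 2 * (1 - stats.t.cdf(abs(t_stat), df=n - 1))
    return t_stat, p_value

# Illustrative use with deterministic errors: the benchmark errors are
# uniformly larger in magnitude, so the test should favor the alternative.
e1 = np.array([2.0, -2.0, 3.0, -3.0] * 10)  # benchmark forecast errors
e2 = np.array([1.0, -1.0, 1.0, -1.0] * 10)  # alternative forecast errors
t_stat, p = equal_mse_test(e1, e2)
```

In the paper's setting the benchmark is nested within the selected model, so under the null the two forecasts coincide in population; the simulations examine how such tests behave in finite samples after a data-driven specification search.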
Year of publication: 2004
Authors: Clark, Todd E.
Published in: Journal of Forecasting. - John Wiley & Sons, Ltd. - Vol. 23.2004, 2, p. 115-139
Publisher: John Wiley & Sons, Ltd.
Similar items by person
- Finite-sample properties of tests for equal forecast accuracy. Clark, Todd E., (1999)
- Forecasting an aggregate of cointegrated disaggregates. Clark, Todd E., (2000)
- The responses of prices at different stages of production to monetary policy shocks. Clark, Todd E., (1999)