Forecast evaluation of small nested model sets
We propose two new procedures for comparing the mean squared prediction error (MSPE) of a benchmark model to the MSPEs of a small set of alternative models that nest the benchmark. Our procedures compare the benchmark to all the alternative models simultaneously rather than sequentially, and do not require re-estimation of models as part of a bootstrap procedure. Both procedures adjust MSPE differences in accordance with Clark and West (2007); one procedure then examines the maximum t-statistic, while the other computes a chi-squared statistic. Our simulations examine the proposed procedures and two existing procedures that do not adjust the MSPE differences: a chi-squared statistic and White's (2000) reality check. In these simulations, the two statistics that adjust MSPE differences have the most accurate size, and the procedure that looks at the maximum t-statistic has the best power. We illustrate our procedures by comparing forecasts of different models for US inflation.
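The mechanics behind the two proposed statistics are compact enough to sketch in code. Below is a minimal Python illustration, assuming one-step-ahead forecasts, a small set of alternative models that each nest the benchmark, and a simple (non-HAC) variance estimate; the function names are ours, and critical values (which the paper obtains by simulation for the maximum t-statistic rather than from standard tables) are not computed here.

```python
import numpy as np

def clark_west_adjusted_diffs(y, yhat_bench, yhat_alts):
    """Clark-West (2007) adjusted MSPE differentials.

    y          : (P,) realized values
    yhat_bench : (P,) forecasts from the nested benchmark model
    yhat_alts  : (P, m) forecasts from m alternative models, each nesting the benchmark

    Returns a (P, m) array of adjusted loss differentials
    f_t = e0_t^2 - (e_{i,t}^2 - (yhat0_t - yhat_{i,t})^2).
    """
    e0 = y - yhat_bench                            # benchmark forecast errors
    e = y[:, None] - yhat_alts                     # alternative-model forecast errors
    adj = (yhat_bench[:, None] - yhat_alts) ** 2   # Clark-West noise adjustment
    return e0[:, None] ** 2 - (e ** 2 - adj)

def max_t_stat(f):
    """Maximum across models of the t-statistics on the adjusted differentials."""
    P = f.shape[0]
    fbar = f.mean(axis=0)
    se = f.std(axis=0, ddof=1) / np.sqrt(P)
    return np.max(fbar / se)

def chi2_stat(f):
    """Wald-type statistic P * fbar' V^{-1} fbar on the adjusted differentials."""
    P = f.shape[0]
    fbar = f.mean(axis=0)
    V = np.cov(f, rowvar=False)                    # (m, m) sample covariance
    return P * fbar @ np.linalg.solve(V, fbar)
```

Both statistics compare the benchmark to all m alternatives at once through the joint vector of adjusted differentials, so no sequential testing and no model re-estimation is involved; in this sketch, inference would use critical values for the maximum of m correlated, approximately standard normal t-statistics, or a chi-squared distribution with m degrees of freedom, consistent with the simultaneous comparison the abstract describes.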
Year of publication: 2010
Authors: Hubrich, Kirstin; West, Kenneth D.
Published in: Journal of Applied Econometrics. - John Wiley & Sons, Ltd. - Vol. 25.2010, 4, p. 574-594
Publisher: John Wiley & Sons, Ltd.
Similar items by person
- Forecast evaluation of small nested model sets. Hubrich, Kirstin (2009)
- Forecast Evaluation of Small Nested Model Sets. Hubrich, Kirstin (2008)
- Forecast evaluation of small nested model sets. Hubrich, Kirstin (2009)
- More ...