Calibration tests for count data
Calibration, the statistical consistency between forecast distributions and observations, is a central requirement for probabilistic predictions. Calibration of continuous forecasts has been widely discussed, and significance tests are commonly used to detect whether a prediction model is miscalibrated. However, calibration tests for discrete forecasts are rare, especially for distributions with unlimited support. In this paper, we propose two types of calibration tests for count data: tests based on conditional exceedance probabilities and tests based on proper scoring rules. For the latter, three scoring rules are considered: the ranked probability score, the logarithmic score and the Dawid-Sebastiani score. Simulation studies show that all tests have good control of the type I error rate and sufficient power under miscalibration. As an illustration, we apply the methodology to weekly data on meningococcal disease incidence in Germany, 2001–2006. The results show that the test approach is powerful in detecting miscalibrated forecasts. Copyright Sociedad de Estadística e Investigación Operativa 2014
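As a rough illustration of the score-based idea, the sketch below (not the authors' code) computes the logarithmic and Dawid-Sebastiani scores for Poisson predictive distributions and forms a standardized statistic by comparing observed scores with their Monte Carlo distribution under the forecasts. The Poisson forecast assumption, the Monte Carlo standardization, and the helper name `score_calibration_z` are illustrative assumptions and need not match the exact construction in the paper.

```python
# Minimal sketch of a score-based calibration check for count forecasts,
# assuming Poisson predictive distributions (illustrative only).
import numpy as np
from scipy import stats

def log_score(mu, y):
    """Logarithmic score -log P(Y = y) for a Poisson(mu) forecast."""
    return -stats.poisson.logpmf(y, mu)

def dss(mu, y):
    """Dawid-Sebastiani score ((y - mu)/sigma)^2 + 2*log(sigma);
    for Poisson forecasts the variance equals the mean."""
    var = mu
    return (y - mu) ** 2 / var + np.log(var)

def score_calibration_z(mu, y, score=log_score, n_mc=10_000, seed=1):
    """Hypothetical helper: standardize observed scores by their simulated
    mean and standard deviation under the forecasts and sum them up; the
    result is approximately N(0, 1) if the forecasts are calibrated."""
    rng = np.random.default_rng(seed)
    mu, y = np.asarray(mu, float), np.asarray(y)
    s_obs = score(mu, y)
    # Simulate counts from the predictive distributions to estimate the
    # null mean and variance of each score.
    sims = score(mu, rng.poisson(mu, size=(n_mc, len(mu))))
    z = (s_obs - sims.mean(axis=0)) / sims.std(axis=0, ddof=1)
    return z.sum() / np.sqrt(len(z))

# Example with synthetic weekly counts
rng = np.random.default_rng(0)
mu = rng.uniform(2, 10, size=300)
y = rng.poisson(mu)
print(score_calibration_z(mu, y))        # close to 0 for calibrated forecasts
print(score_calibration_z(2 * mu, y))    # departs from 0 under miscalibration
```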
Year of publication: 2014
Authors: Wei, Wei; Held, Leonhard
Published in: TEST: An Official Journal of the Spanish Society of Statistics and Operations Research. - Springer. - Vol. 23.2014, 4, p. 787-805
Publisher: Springer
Subject: Calibration test | Count data | Predictive distribution | Proper scoring rules | 62M20 Prediction
Similar items by subject
- Gil, Kyehwan, (2015)
- Permanent breaks and temporary shocks in a time series / Lee, Yoonsuk, (2017)
- Permanent shocks and forecasting with moving averages / Lee, Yoonsuk, (2017)