A Statistical Evaluation of Atmosphere-Ocean General Circulation Models: Complexity vs. Simplicity
The principal tools used to model future climate change are General Circulation Models (GCMs), which are deterministic, high-resolution, bottom-up models of the global atmosphere-ocean system that require large amounts of supercomputer time to generate results. But are these models a cost-effective way of predicting future climate change at the global level? In this paper we use modern econometric techniques to evaluate the statistical adequacy of three GCMs by testing three aspects of a GCM's ability to reconstruct the historical record for global surface temperature: (1) how well the GCMs track observed temperature; (2) whether the residuals from GCM simulations are random (white noise) or systematic (red noise or a stochastic trend); and (3) how much explanatory power the GCMs have relative to a simple alternative time series model in which temperature is a linear function of radiative forcing. The results indicate that three of the eight experiments considered fail to reconstruct temperature accurately: their errors are either red noise processes or contain a systematic component. Moreover, the radiative forcing variable used in the GCM simulations has considerable explanatory power relative to the GCM simulations of global temperature. The GFDL model is superior to the other models considered. Three of the four Hadley Centre experiments also pass all the tests but show poorer goodness of fit. The Max Planck model appears to perform poorly relative to the other two models. There does appear to be a trade-off between the greater spatial detail and larger number of variables provided by the GCMs and the more accurate global temperature predictions generated by simple time series models. This trade-off is similar to the debate in economics regarding the forecasting accuracy of large macro-economic models versus simple time series models.
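To make the three tests concrete, the sketch below shows one way they could be implemented; it is not the authors' code, and the function name, variable names, and the use of the statsmodels library are illustrative assumptions. It regresses observed temperature on a GCM simulation (tracking), applies a Ljung-Box test and an augmented Dickey-Fuller test to the simulation error to check for red noise or a stochastic trend, and fits the simple benchmark regression of temperature on radiative forcing.

```python
# Minimal sketch (not the paper's code) of the three diagnostics described in
# the abstract. Assumes annual pandas Series on a common index; names are
# hypothetical.
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_ljungbox
from statsmodels.tsa.stattools import adfuller


def evaluate_simulation(temp_obs, temp_sim, forcing, lags=10):
    """Compare a GCM temperature simulation with a simple forcing regression.

    temp_obs : observed global surface temperature anomaly
    temp_sim : GCM-simulated temperature for the same years
    forcing  : total radiative forcing used to drive the GCM experiment
    """
    # (1) Tracking: regress observed temperature on the simulated series.
    track = sm.OLS(temp_obs, sm.add_constant(temp_sim)).fit()

    # (2) Residual diagnostics on the simulation error (observed - simulated):
    #     Ljung-Box for serial correlation (red noise), ADF for a unit root
    #     (stochastic trend).
    err = temp_obs - temp_sim
    lb_p = acorr_ljungbox(err, lags=[lags], return_df=True)["lb_pvalue"].iloc[0]
    adf_p = adfuller(err)[1]

    # (3) Simple benchmark: temperature as a linear function of radiative forcing.
    bench = sm.OLS(temp_obs, sm.add_constant(forcing)).fit()

    return {
        "tracking_r2": track.rsquared,
        "ljung_box_pvalue": lb_p,   # small p-value: errors are not white noise
        "adf_pvalue": adf_p,        # large p-value: cannot reject a stochastic trend
        "benchmark_r2": bench.rsquared,
    }
```

Under these assumptions, comparing `tracking_r2` with `benchmark_r2` across experiments gives a rough sense of the complexity-versus-simplicity trade-off discussed above, while the two p-values classify each experiment's errors as white noise, red noise, or trending.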