A Framework for Model Validation
Computational models have the potential to be used to make credible predictions in place of physical testing in many contexts, but success and acceptance require a convincing model validation. In general, model validation is understood to be a comparison of model predictions to experimental results, but there appears to be no standard framework for conducting this comparison. This paper gives a statistical framework for the problem of model validation that is closely analogous to calibration, with the basic goal being to design and analyze a set of experiments to obtain information pertaining to the 'limits of error' that can be associated with model predictions. Implementation, though, in the context of complex, high-dimensional models, poses a considerable challenge for the development of appropriate statistical methods and for the interaction of statisticians with model developers and experimentalists. The proposed framework provides a vehicle for communication between modelers, experimentalists, and the analysts and decision-makers who must rely on model predictions.
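The abstract's central idea, summarizing validation experiments into 'limits of error' for model predictions, can be illustrated with a minimal sketch. The function below is a hypothetical illustration, not the paper's actual method: it takes paired model predictions and experimental observations, computes residuals, and returns a rough interval for the error of a future prediction based on the residuals' mean (bias) and scatter.

```python
import math
import statistics

def limits_of_error(predictions, observations, z=1.96):
    """Rough 'limits of error' for model predictions.

    Residuals (observation - prediction) from validation experiments
    are summarized by their mean (bias) and standard deviation; the
    returned interval is an approximate band for the error of a
    future prediction. This is an illustrative simplification, not
    the framework proposed in the paper.
    """
    residuals = [obs - pred for pred, obs in zip(predictions, observations)]
    bias = statistics.mean(residuals)
    spread = statistics.stdev(residuals)
    # Half-width inflates the scatter slightly to account for the
    # uncertainty in the estimated bias (crude normal approximation).
    half_width = z * spread * math.sqrt(1 + 1 / len(residuals))
    return bias - half_width, bias + half_width

# Hypothetical validation data: model predictions vs. measured values.
preds = [10.0, 12.5, 15.0, 17.5, 20.0]
obs = [10.3, 12.1, 15.4, 17.9, 19.8]
lo, hi = limits_of_error(preds, obs)
```

A decision-maker could then ask whether an error band like `(lo, hi)` is narrow enough for the intended use of the model, which is the kind of communication between modelers, experimentalists, and analysts that the framework is meant to support.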
Year of publication: 2009-11-04
Authors: Easterling, R.G.
Subjects: mathematics, computers, information science, management, law, miscellaneous; Mathematical Models; Validation; Statistical Models; Experiment Planning
freely available
Similar items by subject
- Statistical validation of physical system models, Paez, T.L. (2009)
- Statistical validation of system models, Barney, P. (2009)
- Accrediting models for the TMD COEA: A case study in face assessment, Bravy, S. (2009)
- More ...