Coherence and calibration in expert probability judgement
Many decision-aiding technologies require valid probability judgements to be elicited from domain experts. But how valid are experts' probability judgements? It is of considerable practical importance to identify the conditions that affect the quality of these judgements. Descriptive psychological models permit the identification of situations in which judgement is likely to be poor, and suggest methods by which judgement may be improved. We describe two approaches to assessing the quality of probability judgement, calibration and coherence, and review research findings following from these two approaches. This review is carried out within a framework of the psychological processes required to make a probability judgement. Three rival psychological models of probability judgement are located within this framework and evaluated in the light of the empirical findings. We conclude that none of these three models is unequivocally supported by the empirical data. We suggest that this may be because the models, experimental tasks and measurement techniques are not sufficiently sophisticated, and we make some specific proposals for resolving these problems.
Year of publication: 1993
Authors: Bolger, F.; Wright, G.
Published in: Omega. - Elsevier, ISSN 0305-0483. - Vol. 21.1993, 6, p. 629-644
Publisher: Elsevier
Keywords: expertise; subjective probability; judgement and decision making; training; decision support
Similar items by person
- Bolger, Fergus, (1999)
- Scenario planning interventions in organizations: an analysis of the causes of success and failure / Wright, G, (2008)
- Exploring e-government futures through the application of scenario planning / Cairns, G, (2004)