Comparing the Validity of Alternative Belief Languages: An Experimental Approach
The problem of modeling uncertainty and inexact reasoning in rule-based expert systems is challenging on normative as well as on cognitive grounds. First, the modular structure of the rule-based architecture does not lend itself to standard Bayesian inference techniques. Second, there is no consensus on how to model human (expert) judgement under uncertainty. These factors have led to a proliferation of quasi-probabilistic belief calculi which are widely used in practice. This paper investigates the descriptive and external validity of three well-known "belief languages": the Bayesian, ad-hoc Bayesian, and certainty factors languages. These models are implemented in many commercial expert system shells, and their validity is clearly an important issue for users and designers of expert systems. The methodology consists of a controlled, within-subject experiment designed to measure the relative performance of alternative belief languages. The experiment pits the judgement of human experts against the recommendations generated by their simulated expert systems, each using a different belief language. Special emphasis is given to the general issues of validating belief languages and expert systems at large.
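To make the contrast between belief languages concrete, the following is a minimal Python sketch of two standard combination rules of the kind the paper compares: the MYCIN-style certainty-factors parallel-combination rule and odds-form Bayesian updating. The function names and the example numbers are illustrative assumptions, not taken from the paper; only the formulas themselves are the standard ones.

```python
# Sketch contrasting two belief languages; the helper names and the
# example evidence strengths below are hypothetical, chosen for illustration.

def combine_cf(cf1: float, cf2: float) -> float:
    """MYCIN-style parallel combination of two certainty factors in [-1, 1]."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 <= 0 and cf2 <= 0:
        return cf1 + cf2 * (1 + cf1)
    # Mixed signs: normalize by the smaller absolute value.
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

def bayes_update(prior: float, likelihood_ratio: float) -> float:
    """Odds-form Bayesian update: posterior odds = prior odds * LR."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

if __name__ == "__main__":
    # Two rules each lend partial support to the same hypothesis.
    print(combine_cf(0.6, 0.4))    # -> 0.76
    # The Bayesian language instead chains likelihood ratios through the prior.
    p = bayes_update(0.5, 3.0)     # first item of evidence, LR = 3 -> 0.75
    print(bayes_update(p, 2.0))    # second item, LR = 2 -> ~0.857
```

The contrast illustrates the paper's premise: the CF rule combines evidence locally per rule, with no explicit prior, whereas Bayesian updating propagates a prior through successive likelihood ratios, which is what the modular rule-based architecture makes awkward.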