Model Selection Accuracy in Behavioral Game Theory: A Simulation
We simulate a horse race between several behavioral models of play in one-shot games. First, we find that many models can lead to identical predictions, making it impossible to select a unique winning model. This problem is largely avoided by comparing only two models at a time. But even then we find that cross-validation sometimes fails to select the true model, often because a model is estimated to be noiseless on the training data and then fails to predict out-of-sample data. The Bayesian Information Criterion avoids this problem, though the inflexibility of its parameter penalty appears to cause poor performance in certain settings.
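The following is a minimal sketch, not the paper's code, of the failure mode described above, under the assumption that a behavioral model predicts one action per game and its single free parameter is an error (tremble) rate. If that rate is estimated to be zero on the training fold, any out-of-sample deviation receives probability zero and the cross-validation score is minus infinity, whereas BIC, which penalizes in-sample fit, stays finite.

```python
import numpy as np

def log_likelihood(choices, predicted, error_rate, n_actions=3):
    """Log-likelihood of observed choices under: follow the model's prediction
    with probability 1 - error_rate, otherwise choose uniformly at random."""
    p_match = (1 - error_rate) + error_rate / n_actions
    p_miss = error_rate / n_actions
    probs = np.where(choices == predicted, p_match, p_miss)
    with np.errstate(divide="ignore"):      # log(0) -> -inf, by design here
        return np.sum(np.log(probs))

rng = np.random.default_rng(0)
predicted = rng.integers(0, 3, size=20)      # model's predicted action in 20 games
train = predicted.copy()                     # training fold happens to match the model exactly
test = predicted.copy()
test[0] = (test[0] + 1) % 3                  # a single out-of-sample deviation

# ML estimate of the error rate on the training fold is 0 (perfect fit),
# so the held-out deviation has probability 0 and the CV score is -inf.
print(log_likelihood(test, predicted, error_rate=0.0))    # -inf

# BIC penalizes in-sample fit instead: BIC = k * ln(n) - 2 * logL,
# so a perfect in-sample fit yields a finite score.
k, n = 1, len(train)
bic = k * np.log(n) - 2 * log_likelihood(train, predicted, error_rate=0.0)
print(bic)                                                 # finite (~3.0)
```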