How to Overcome the Jeffreys-Lindley Paradox for Invariant Bayesian Inference in Regression Models
We obtain invariant expressions for prior probabilities and priors on the parameters of nested regression models that are induced by a prior on the parameters of an encompassing linear regression model. The invariance is with respect to specifications that satisfy a necessary set of assumptions. Invariant expressions for posterior probabilities and posteriors are induced in an identical way by the respective posterior. These posterior probabilities imply a posterior odds ratio that is robust to the Jeffreys-Lindley paradox, because the prior odds ratio obtained from the induced prior probabilities corrects the Bayes factor for the plausibility of the competing models reflected in the prior. We illustrate the analysis, focusing on the construction of specifications that satisfy the set of assumptions, with examples of linear restrictions, i.e. a linear regression model, and non-linear restrictions, i.e. cointegration and ARMA(1,1) models, on the parameters of an encompassing linear regression model.
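To fix ideas, a standard textbook form of the paradox may help (the notation below is a generic sketch, not the paper's own derivation). The posterior odds ratio for two nested models \(M_0\) and \(M_1\) decomposes as
\[
\frac{\Pr(M_0 \mid y)}{\Pr(M_1 \mid y)} \;=\; \frac{\Pr(M_0)}{\Pr(M_1)} \times B_{01},
\qquad
B_{01} = \frac{p(y \mid M_0)}{p(y \mid M_1)},
\]
where \(B_{01}\) is the Bayes factor. In the classical example, testing \(\theta = 0\) against \(\theta \neq 0\) with \(\bar y \mid \theta \sim N(\theta, \sigma^2/n)\) and prior \(\theta \sim N(0, \tau^2)\) under the alternative,
\[
B_{01} \;=\; \sqrt{1 + \frac{n\tau^2}{\sigma^2}}\,
\exp\!\left(-\frac{z^2}{2}\,\frac{n\tau^2/\sigma^2}{1 + n\tau^2/\sigma^2}\right),
\qquad
z = \frac{\sqrt{n}\,\bar y}{\sigma},
\]
so \(B_{01} \to \infty\) as \(\tau^2 \to \infty\) for any fixed \(z\): an increasingly diffuse prior mechanically favors the restricted model. A prior odds ratio that shrinks at the matching rate, as the induced prior probabilities discussed above provide, offsets this divergence in the posterior odds.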