Bayesian Learning of Noisy Markov Decision Processes
This work addresses the problem of estimating the optimal value function in a Markov decision process from observed state-action pairs. We adopt a Bayesian approach to inference, which allows both the model to be estimated and predictions about actions to be made in a unified framework, providing a principled approach to mimicry of a controller on the basis of observed data. A new Markov chain Monte Carlo (MCMC) sampler is devised for simulation from the posterior distribution over the optimal value function. The sampler includes a parameter expansion step, which is shown to be essential for its good convergence properties. As an illustration, the method is applied to learning a human controller.
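To fix ideas, the following is a minimal sketch of posterior simulation over a table of action values, under assumptions that are purely illustrative and not taken from the paper: observed state-action pairs are assumed to come from a softmax (Boltzmann) policy in a value table Q with a standard Gaussian prior, and a random-walk Metropolis sampler is augmented with an occasional global rescaling move, a simple group move in the spirit of parameter expansion (the paper's actual model and expansion step are not reproduced here). All names (Q, states, actions, log_posterior) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative only): S states, A actions; the controller is
# assumed to pick action a in state s with probability proportional to
# exp(Q[s, a]).
S, A = 4, 3
Q_true = rng.normal(size=(S, A))

def softmax_rows(Q):
    e = np.exp(Q - Q.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Simulate observed state-action pairs from the "true" controller.
n_obs = 200
states = rng.integers(0, S, size=n_obs)
probs = softmax_rows(Q_true)
actions = np.array([rng.choice(A, p=probs[s]) for s in states])

def log_posterior(Q):
    # Standard Gaussian prior on each entry of Q plus softmax likelihood.
    lp = -0.5 * np.sum(Q**2)
    logp = np.log(softmax_rows(Q))
    return lp + logp[states, actions].sum()

# Random-walk Metropolis over Q, with an occasional global rescaling
# move Q -> c*Q.  With c drawn from a log-symmetric (lognormal) proposal,
# the log acceptance ratio picks up the Jacobian term (S*A)*log(c).
d = S * A
Q = np.zeros((S, A))
lp = log_posterior(Q)
samples = []
for it in range(5000):
    prop = Q + 0.1 * rng.normal(size=Q.shape)      # local move
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        Q, lp = prop, lp_prop
    if it % 10 == 0:                               # global scaling move
        c = np.exp(0.2 * rng.normal())             # lognormal proposal
        prop = c * Q
        lp_prop = log_posterior(prop) + d * np.log(c)  # Jacobian term
        if np.log(rng.uniform()) < lp_prop - lp:
            Q, lp = prop, lp_prop
    samples.append(Q.copy())

Q_hat = np.mean(samples[1000:], axis=0)            # posterior-mean estimate
```

The local random-walk updates alone can mix slowly when the posterior is strongly correlated across entries of Q; the joint rescaling move moves all components at once, which is the intuition behind why a parameter expansion step can be essential for good convergence.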