Solving Stochastic Dynamic Programming Problems Using Rules of Thumb
This paper develops a new method for constructing approximate solutions to discrete-time, infinite-horizon, discounted stochastic dynamic programming problems with convex choice sets. The key idea is to restrict the decision rule to belong to a parametric class of functions; the agent then chooses the best decision rule from within this class. Monte Carlo simulation is used to compute arbitrarily precise estimates of the optimal decision rule parameters. The solution method is applied to a version of the Brock-Mirman (1972) stochastic optimal growth model. For this model, relatively simple rules of thumb provide very good approximations to optimal behavior.
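To illustrate the approach, the following is a minimal sketch, not the paper's own implementation: a one-parameter rule-of-thumb family for the Brock-Mirman model in which the agent consumes a fixed fraction theta of output, with expected discounted utility estimated by Monte Carlo simulation and maximized over theta by grid search. All parameter values (alpha, beta, sigma, the horizon, and the number of simulated paths) are assumptions chosen for illustration. With log utility and full depreciation this family happens to nest the known closed-form optimum, theta* = 1 - alpha*beta, which provides a check on the estimate.

```python
import numpy as np

# Hypothetical parameter values, chosen for illustration only.
alpha, beta = 0.36, 0.95        # capital share and discount factor (assumed)
sigma = 0.1                     # std. dev. of log productivity shocks (assumed)
T, n_paths = 200, 2000          # truncated horizon and Monte Carlo replications

# Common random numbers: the same shock draws are reused for every candidate
# theta, which reduces the variance of comparisons across rule parameters.
rng = np.random.default_rng(0)
shocks = rng.lognormal(mean=0.0, sigma=sigma, size=(n_paths, T))

def expected_utility(theta: float) -> float:
    """Monte Carlo estimate of expected discounted log utility under the
    rule of thumb c_t = theta * z_t * k_t**alpha."""
    k = np.full(n_paths, 1.0)           # common initial capital stock
    total = np.zeros(n_paths)
    discount = 1.0
    for t in range(T):
        y = shocks[:, t] * k ** alpha   # stochastic output
        c = theta * y                   # consume a fixed share of output
        total += discount * np.log(c)
        k = y - c                       # full depreciation: savings become capital
        discount *= beta
    return total.mean()

# Choose the best rule from within the parametric class by grid search.
grid = np.linspace(0.05, 0.95, 91)
best = max(grid, key=expected_utility)
print(f"estimated theta* = {best:.3f}, closed form = {1 - alpha * beta:.3f}")
```

In this sketch the single-parameter family contains the exact optimal rule, so the Monte Carlo estimate should recover theta* up to simulation and grid error; in the general case described in the abstract, the parametric class only approximates the optimal decision rule, and the same simulate-and-optimize step selects the best member of the class.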