MITIGATION OF THE LUCAS CRITIQUE WITH STOCHASTIC CONTROL METHODS
Lucas (1976) pointed out that when optimization is performed on a deterministic macroeconomic model, the resulting policy may not reflect the true optimal solution: private agents react to announced policies, and the model's parameters consequently begin to drift. The aim of this paper is to develop a methodology for deriving an optimal policy in the presence of rational expectations and parameter drift. The drift is captured by a stochastic optimization framework with time-varying parameters, and the resulting optimal policy tracks the changes in the parameters induced by policy changes. A numerical example illustrates how the methodology mitigates the effects of the Lucas critique.
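As a rough illustration of the idea, and not the paper's own model, the following Python sketch treats a scalar economy whose coefficients drift as a random walk (standing in for agents adapting to announced policy). The controller tracks the drifting parameters with a Kalman filter and re-derives a certainty-equivalent quadratic policy each period; all numerical values and the specific model form are assumptions made for the example.

```python
import numpy as np

# Minimal sketch (assumed model, not the paper's):
#   y_t = a_t * y_{t-1} + b_t * u_{t-1} + e_t,
# where (a_t, b_t) follow a random walk, representing parameter drift
# caused by private agents reacting to policy.  The policymaker tracks
# the parameters with a Kalman filter and recomputes a
# certainty-equivalent quadratic policy every period.

rng = np.random.default_rng(0)

T = 200
true_theta = np.array([0.9, 0.5])   # initial (a, b) -- assumed values
Q = np.diag([1e-4, 1e-4])           # parameter-drift covariance (assumed)
sigma_e = 0.1                       # shock standard deviation (assumed)
r = 0.5                             # penalty on the policy instrument u

# Kalman-filter state: parameter estimate and its covariance
theta_hat = np.array([0.5, 0.5])
P = np.eye(2)

y, u = 1.0, 0.0
for t in range(T):
    # --- true economy: parameters drift, then output is realised ------
    true_theta = true_theta + rng.multivariate_normal(np.zeros(2), Q)
    h = np.array([y, u])            # regressors: lagged output, lagged policy
    y_new = h @ true_theta + sigma_e * rng.standard_normal()

    # --- Kalman-filter update of the drifting parameters ---------------
    P = P + Q                       # predict step for random-walk parameters
    S = h @ P @ h + sigma_e**2      # innovation variance
    K = P @ h / S                   # Kalman gain
    theta_hat = theta_hat + K * (y_new - h @ theta_hat)
    P = P - np.outer(K, h) @ P

    # --- certainty-equivalent policy for the next period ----------------
    # minimise E[(a*y + b*u)^2 + r*u^2]  =>  u = -a*b*y / (b^2 + r)
    a_hat, b_hat = theta_hat
    y = y_new
    u = -a_hat * b_hat * y / (b_hat**2 + r)

print("final parameter estimate:", theta_hat)
print("final true parameters:   ", true_theta)
```

Because the feedback rule is recomputed from the filtered parameter estimates at every date, the policy adapts as the coefficients drift, which is the sense in which a stochastic, time-varying-parameter formulation can soften the force of the Lucas critique.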