Speed of convergence of recursive least squares learning with ARMA perceptions
This paper fills a gap in the existing literature on least squares learning in linear rational expectations models by studying a setup in which agents learn by fitting ARMA models to a subset of the state variables. This is a natural specification in models with private information because, in the presence of hidden state variables, agents have an incentive to condition forecasts on the infinite past record of observables. We study a particular setting in which it suffices for agents to fit a first-order ARMA process, which preserves the tractability of a finite-dimensional parameterization while permitting conditioning on the infinite past record. We describe how previous results (Marcet and Sargent [1989a, 1989b]) can be adapted to handle the convergence of estimators of an ARMA process in our self-referential environment. We also study "rates" of convergence analytically and via computer simulation.
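As a rough illustration of the kind of recursion the abstract describes, the sketch below simulates an ARMA(1,1) series and estimates its parameters by recursive least squares, using a pseudo-linear (extended least squares) regression on the lagged observable and the lagged fitted residual. This is a minimal sketch, not the authors' algorithm: the data-generating parameters, initializations, and the regressor choice are illustrative assumptions, and the self-referential feedback of the actual model is omitted; only the estimator's recursion is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Simulate a stationary ARMA(1,1) process (illustrative parameters) ---
# y_t = a*y_{t-1} + e_t + b*e_{t-1}
a_true, b_true, T = 0.6, 0.3, 20000
e = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = a_true * y[t - 1] + e[t] + b_true * e[t - 1]

# --- Recursive least squares on the pseudo-linear regression ---
# Regressors: lagged observable and lagged fitted residual, so the MA
# term is handled by extended least squares.
phi = np.zeros(2)          # parameter estimates (a_hat, b_hat)
P = np.eye(2) * 1e3        # inverse moment matrix, diffuse initialization
resid_prev = 0.0
for t in range(1, T):
    x = np.array([y[t - 1], resid_prev])
    resid = y[t] - phi @ x                 # one-step-ahead forecast error
    K = P @ x / (1.0 + x @ P @ x)          # RLS gain vector
    phi = phi + K * resid                  # parameter update
    P = P - np.outer(K, x @ P)             # inverse moment matrix update
    resid_prev = resid

print("estimates:", phi.round(3), "  truth:", [a_true, b_true])
```

The same recursion can be written in the stochastic-approximation form with a decreasing 1/t gain, which is the representation under which rates of convergence are typically analyzed.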
Year of publication: 1992-05
Authors: Marcet, Albert; Sargent, Thomas J.
Institutions: Department of Economics and Business, Universitat Pompeu Fabra
Similar items by person
- Optimal taxation without state-contingent debt. Marcet, Albert (1996)
- Optimal Taxation without State-Contingent Debt. Aiyagari, S. Rao (2002)
- Convergence of least squares learning mechanisms in self-referential linear stochastic models. Marcet, Albert (1989)