Generalised Fading Memory Learning in a Cobweb Model: Some Evidence
We develop a learning rule that generalises well-known fading memory learning in the sense that the weights attached to the available time series data are not constant but are updated in light of the prediction errors. The underlying idea is that confidence in the available data will be low when large errors have been realised (e.g., in periods of high volatility) and vice versa. A class of functional forms compatible with this idea is analysed in the context of a standard Cobweb model with boundedly rational agents. We study the problem of convergence to the perfect foresight equilibrium (both local and global) and give conditions that ensure the coexistence of different attractors. We refer to both experimental and numerical evidence to establish the possible range of application of generalised fading memory learning.
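The mechanism described above can be sketched in a few lines. This is a minimal illustration only: the linear cobweb map p_t = a - b*p_e_t, the recursive fading-memory expectation, and the specific error-dependent memory update rho_t = rho0 / (1 + c*err^2) are assumptions chosen for concreteness, not the particular functional forms analysed in the paper. The update captures the stated idea that large prediction errors reduce confidence in (i.e., the weight on) past data.

```python
def simulate(a=2.0, b=0.5, c=1.0, rho0=0.8, p0=0.1, T=200):
    """Simulate a linear cobweb model with error-dependent
    fading-memory expectations (illustrative functional forms)."""
    p_e = p0          # initial price expectation
    prices = []
    for _ in range(T):
        p = a - b * p_e                    # market-clearing price (reduced-form cobweb map)
        err = p - p_e                      # realised prediction error
        rho = rho0 / (1.0 + c * err ** 2)  # assumed update: large errors shrink the memory weight
        p_e = rho * p_e + (1.0 - rho) * p  # fading-memory recursion over past prices
        prices.append(p)
    return prices

# With |b| < 1 the path settles at the perfect foresight
# equilibrium p* = a / (1 + b); steeper supply (larger b) or other
# memory updates can instead sustain cycles or coexisting attractors.
```

In this sketch the memory parameter rho plays the role of the (now time-varying) fading-memory weight: constant rho recovers standard fading memory learning as a special case.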