A Neural Network Architecture for the Categorization of Temporal Information
In this paper we propose a neural network architecture capable of continuously learning multiple, possibly overlapping, arbitrary input sequences relatively quickly, autonomously, and online. The architecture has been constructed according to design principles derived from neuroscience and existing work on recurrent network models. The network uses sigmoid-pulse-generating spiking neurons together with a Hebbian learning rule with synaptic noise. Combined with coincidence detection and an internal feedback mechanism, this yields a learning process driven by dynamic adjustment of the learning rate. As a result, the network can not only correct incorrectly recalled parts of a sequence but also reinforce and stabilize the recall of previously acquired sequences. The performance of the network is tested on a set of overlapping sequences from an existing problem domain, and the relative contribution of each design principle is analyzed.
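To make the described learning mechanism concrete, the following is a minimal, hypothetical sketch of a Hebbian update with additive synaptic noise whose learning rate is modulated by a recall-error feedback signal. The function names, noise model, and rate-adjustment rule are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def hebbian_update(W, pre, post, lr, noise_std=0.01):
    """One Hebbian step with additive synaptic noise: dW = lr * post pre^T + noise.
    (Illustrative rule; the paper's exact update may differ.)"""
    dW = lr * np.outer(post, pre) + rng.normal(0.0, noise_std, W.shape)
    return W + dW

def adjust_lr(lr, recall_error, gain=0.5, lr_min=1e-4, lr_max=1.0):
    """Feedback-driven rate: increase when recall mismatches, decay otherwise."""
    if recall_error > 0:
        lr = lr * (1.0 + gain * recall_error)
    else:
        lr = lr * (1.0 - gain)
    return float(np.clip(lr, lr_min, lr_max))

# Toy run: associate input pattern a with target pattern b.
a = np.array([1.0, 0.0, 1.0])
b = np.array([0.0, 1.0, 0.0])
W = np.zeros((3, 3))
lr = 0.1
for _ in range(20):
    recall = W @ a
    error = float(np.mean((b - recall) ** 2))  # mismatch drives the rate
    lr = adjust_lr(lr, error)
    W = hebbian_update(W, a, b, lr)
```

After training, the weight matrix predominantly maps `a` onto `b`; the feedback loop mirrors, in a much simplified form, how recall errors could dynamically steer the learning rate.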