Hybrid learning schemes for fast training of feed-forward neural networks
Fast training of feed-forward neural networks has become increasingly important as the neural network field moves toward maturity. This paper begins with a review of various criteria proposed for training feed-forward neural networks, including the frequently used quadratic error criterion, the relative entropy criterion, and a generalized training criterion. Minimizing these criteria by gradient descent yields a variety of supervised learning algorithms. The performance of these algorithms on complex training tasks is strongly affected by the initial set of internal representations, which is usually formed from a randomly generated set of synaptic weights. The convergence of gradient-descent-based learning algorithms on complex training tasks can be significantly improved by initializing the internal representations with an unsupervised learning process based on linear or nonlinear generalized Hebbian learning rules. The efficiency of the resulting hybrid learning scheme is illustrated through experimental results.
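For orientation, the two named criteria are usually written in the following standard forms; the notation here (y_{p,k} for the network output and t_{p,k} for the target of output unit k on pattern p) is assumed for illustration and need not match the paper's, and the generalized criterion that subsumes both is not reproduced:

\[
E_{\text{quad}} = \frac{1}{2} \sum_{p} \sum_{k} \left( t_{p,k} - y_{p,k} \right)^{2},
\qquad
E_{\text{ent}} = \sum_{p} \sum_{k} \left[ t_{p,k} \ln \frac{t_{p,k}}{y_{p,k}} + \left( 1 - t_{p,k} \right) \ln \frac{1 - t_{p,k}}{1 - y_{p,k}} \right].
\]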
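The following is a minimal NumPy sketch of the general idea, not the paper's actual algorithms: hidden-layer weights are pre-set by Sanger's generalized Hebbian algorithm (one linear instance of the rules the abstract mentions) before gradient descent on the quadratic error. The network sizes, learning rates, and XOR task are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gha_init(X, n_hidden, lr=1e-2, epochs=50):
    # Sanger's generalized Hebbian algorithm: rows of W converge toward
    # the leading principal components of the (zero-mean) input data.
    W = rng.normal(scale=0.1, size=(n_hidden, X.shape[1]))
    for _ in range(epochs):
        for x in X:
            y = W @ x
            W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

def train(X, T, n_hidden=2, lr=0.5, epochs=2000, hebbian_init=True):
    # Hybrid scheme (illustrative): hidden weights start from GHA instead
    # of pure noise; ordinary gradient descent then minimizes the
    # quadratic error criterion.
    W1 = gha_init(X - X.mean(0), n_hidden) if hebbian_init \
         else rng.normal(scale=0.1, size=(n_hidden, X.shape[1]))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.1, size=(T.shape[1], n_hidden))
    b2 = np.zeros(T.shape[1])
    for _ in range(epochs):
        H = sigmoid(X @ W1.T + b1)          # hidden activations
        Y = sigmoid(H @ W2.T + b2)          # network outputs
        dY = (Y - T) * Y * (1 - Y)          # output deltas (quadratic error)
        dH = (dY @ W2) * H * (1 - H)        # backpropagated hidden deltas
        W2 -= lr * dY.T @ H / len(X); b2 -= lr * dY.mean(0)
        W1 -= lr * dH.T @ X / len(X); b1 -= lr * dH.mean(0)
    return 0.5 * np.sum((Y - T) ** 2)       # final quadratic error

# XOR demo: compare random against Hebbian-initialized starting weights.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])
print("random init:", train(X, T, hebbian_init=False))
print("GHA init   :", train(X, T, hebbian_init=True))

The design choice mirrors the abstract's argument: the unsupervised pass costs little, and it replaces an arbitrary random starting point with internal representations already aligned with the input statistics, which is what is claimed to speed up the subsequent supervised phase.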