On the overtraining phenomenon of backpropagation neural networks
An important step toward establishing neural networks as a mature tool is the study of their capabilities. In this paper, the relationships between network size, training set size, and generalization capability are examined. The phenomenon of overtraining in backpropagation networks is discussed, and an extension to an existing algorithm is described. The extended algorithm provides a new energy function, and its advantages, such as improved plasticity and performance, are explained along with its dynamic properties. The algorithm is applied to some common problems (XOR, numeric character recognition, and function approximation), and simulation results are presented and discussed.
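The overtraining phenomenon the abstract refers to can be illustrated with a minimal sketch (not the paper's extended algorithm): a one-hidden-layer backpropagation network fit to noisy samples of a function, with a held-out validation set used to detect the point where validation error stops improving even though training error keeps falling. All names, data, and hyperparameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy function-approximation data, split into train/validation.
X = rng.uniform(-np.pi, np.pi, size=(200, 1))
y = np.sin(X) + 0.1 * rng.standard_normal(X.shape)
X_tr, y_tr, X_va, y_va = X[:150], y[:150], X[150:], y[150:]

# One hidden layer of tanh units, linear output.
H = 20
W1 = rng.standard_normal((1, H)) * 0.5
b1 = np.zeros(H)
W2 = rng.standard_normal((H, 1)) * 0.5
b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

def mse(pred, y):
    return float(np.mean((pred - y) ** 2))

lr = 0.05
train_curve, val_curve = [], []
best_val, best_epoch = np.inf, 0
for epoch in range(2000):
    # Forward pass and mean-squared-error gradient.
    h, out = forward(X_tr)
    err = out - y_tr
    gW2 = h.T @ err / len(X_tr)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)   # backprop through tanh
    gW1 = X_tr.T @ dh / len(X_tr)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

    # Track both curves; their divergence signals overtraining.
    train_curve.append(mse(forward(X_tr)[1], y_tr))
    v = mse(forward(X_va)[1], y_va)
    val_curve.append(v)
    if v < best_val:
        best_val, best_epoch = v, epoch  # early-stopping bookkeeping

print(f"final train MSE {train_curve[-1]:.4f}, "
      f"best val MSE {best_val:.4f} at epoch {best_epoch}")
```

If `best_epoch` is well before the final epoch while `train_curve` is still decreasing, the network is being overtrained; stopping at `best_epoch` is the classic early-stopping remedy.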
Year of publication: 1996
Authors: Tzafestas, S.G.; Dalianis, P.J.; Anthopoulos, G.
Published in: Mathematics and Computers in Simulation (MATCOM). - Elsevier, ISSN 0378-4754. - Vol. 40 (1996), No. 5, p. 507-521
Publisher: Elsevier
Similar items by person
-
Multidimensional state-space models: A comparative overview
Tzafestas, S.G., (1984)
-
Borne, P., (1996)
-
Stability analysis of an adaptive fuzzy control system using Petri Nets and learning automata
Tzafestas, S.G., (2000)