A multi-interacting perceptron model with continuous outputs
We consider learning and generalization of real functions by a multi-interacting feed-forward network model with continuous outputs and invertible transfer functions. The expansion in different multi-interacting orders provides a classification of the functions to be learnt and suggests learning rules that reduce to the Hebb learning rule only for the second-order, linear perceptron. The over-sophistication problem is straightforwardly overcome by a natural cutoff in the multi-interacting synapses: the student is able to learn the architecture of the target rule, that is, the simpler the rule, the faster the multi-interacting perceptron can learn it. Simulation results are in excellent agreement with analytical calculations.
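The abstract gives no explicit equations, so the following Python sketch is only a rough, hypothetical illustration of the kind of model described: the output is an invertible transfer function applied to a weighted sum of products of input components up to a cutoff interaction order, and each coupling is updated by a Hebb-like rule proportional to the corresponding input product. The function names, the tanh transfer function, the learning rate, and the precise update rule are assumptions for illustration, not the authors' definitions.

import itertools
import numpy as np

def multi_interacting_output(xi, couplings, g=np.tanh):
    # Assumed output form: g applied to a sum over all retained
    # interaction orders of coupling * product of the selected inputs.
    field = 0.0
    for per_order in couplings.values():
        for idx, coeff in per_order.items():
            field += coeff * np.prod(xi[list(idx)])
    return g(field)

def hebb_like_update(xi, teacher_field, couplings, rate=0.01):
    # Assumed Hebb-like rule: each order-n coupling moves along the
    # product of its n input components, scaled by the teacher field.
    for per_order in couplings.values():
        for idx in per_order:
            per_order[idx] += rate * teacher_field * np.prod(xi[list(idx)])

# Toy setup: N binary inputs, couplings kept only up to a cutoff order.
N, max_order = 5, 3
couplings = {
    n: {idx: 0.0 for idx in itertools.combinations(range(N), n)}
    for n in range(1, max_order + 1)
}
rng = np.random.default_rng(0)
xi = rng.choice([-1.0, 1.0], size=N)
teacher_field = xi[0] * xi[1]   # a toy second-order target rule
hebb_like_update(xi, teacher_field, couplings)
print(multi_interacting_output(xi, couplings))

The cutoff at max_order in this toy setup stands in for the "natural cutoff in the multi-interacting synapses" mentioned in the abstract; how that cutoff is actually imposed in the paper is not specified here.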
Year of publication: 1997
Authors: de Almeida, R.M.C.; Botelho, E.
Published in: Physica A: Statistical Mechanics and its Applications. - Elsevier, ISSN 0378-4371. - Vol. 242 (1997), 1, p. 27-37
Publisher: Elsevier
Similar items by person
- Bursts and cavity formation in Hydra cells aggregates: experiments and simulations / Mombach, José C.M., (2001)
- Scaling properties of three-dimensional foams / de Almeida, R.M.C., (1997)
- Tsallis entropy production for diffusion on the diluted hypercube / Lemke, N., (2003)