A Study on Basin of Attraction of Associative Memory
- The Basins of Attraction of a new Hopfield Learning Rule (1999)
- Introduction to Neural Networks (1998)
- Associative Data Storage and Retrieval in Neural Networks (1995)
- On the Storage Capacity of Nonlinear Neural Networks (1995)
- Attractors in Recurrent Behavior Networks (1997)
- Increasing the Capacity of a Hopfield Network Without Sacrificing Functionality (1997)
- Basin of Attraction of Associative Memory as it is Evolved by a Genetic Algorithm (1996)
- The Capacity and Attractor Basins of Associative Memory Models (1999)
The delta rule
An important generalisation of the perceptron training algorithm, presented by Widrow and Hoff as the "least mean square" (LMS) learning procedure, extends this technique to continuous inputs and outputs. The LMS procedure, also known as the delta rule, has been applied most often with purely linear output units.
The LMS procedure finds the values of all the weights that minimise the error function by a method called gradient descent. The idea is to change each weight in proportion to the negative of the derivative of the error, as measured on the current pattern, with respect to that weight. [Ben J. A., P. Patrick]
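The gradient-descent update above can be sketched for a single linear output unit. This is an illustrative implementation, not code from the cited sources; the function name, learning rate `eta`, and the toy training data are all assumptions made for the example.

```python
import numpy as np

def delta_rule_train(X, t, eta=0.1, epochs=100):
    """Train one linear output unit with the LMS (delta) rule.

    For pattern p, the error is E_p = 0.5 * (t_p - y_p)^2 with
    y_p = w . x_p, so the negative gradient with respect to w gives
    the update  delta_w = eta * (t_p - y_p) * x_p.
    """
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, target in zip(X, t):
            y = w @ x                        # linear output unit
            w += eta * (target - y) * x      # gradient-descent step
    return w

# Illustrative data: targets generated by t = 2*x1 - x2,
# so the rule should recover weights close to [2, -1].
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
t = np.array([2.0, -1.0, 1.0, 3.0])
w = delta_rule_train(X, t)
```

Because the targets here are an exactly linear function of the inputs, repeated passes drive the per-pattern error toward zero and the weights toward the generating values.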
The delta rule is limited to networks with only two layers of processing units (and one layer of weights). The computational abilities of neural networks with nonlinear processing units can be extended by including layers that intervene between input and output. (Is this also true for multi-layered networks of linear processing units? Why or why not?) The problem with multi-layer networks is how to assign error to units in intermediate (hidden) layers, for which target values do not exist and indeed cannot be known a priori.
Applying the delta rule and the perceptron learning rule to the same training data will generally lead to different results.
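One way to see the difference is to apply both updates to the same pattern: the perceptron rule changes the weights only when the thresholded output is wrong, while the delta rule adjusts them whenever the linear output differs from the target. The helper names, learning rate, and toy data below are assumptions made for this sketch.

```python
import numpy as np

def perceptron_step(w, x, t, eta=0.1):
    # Perceptron rule: update only when the thresholded output is wrong.
    y = 1.0 if w @ x > 0 else -1.0
    if y != t:
        w = w + eta * t * x
    return w

def delta_step(w, x, t, eta=0.1):
    # Delta (LMS) rule: update in proportion to the linear-output error.
    y = w @ x
    return w + eta * (t - y) * x

X = np.array([[1.0, 2.0], [1.0, -1.0]])   # toy patterns
t = np.array([1.0, -1.0])

w_p = np.array([0.0, 1.0])   # already classifies both patterns correctly
w_d = w_p.copy()
for x, target in zip(X, t):
    w_p = perceptron_step(w_p, x, target)
    w_d = delta_step(w_d, x, target)

# The perceptron rule leaves w unchanged (no misclassifications),
# but the delta rule still moves w toward the least-squares solution.
```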
Delta Rule 1, 2
delta learning rule
NEURAL NETWORK MODELS
- Information Theory
- MIT Course Page
- Neural Network 1, 2