Next: Different Variants of the
Up: New Levenberg-Marquardt Training Algorithms
Previous: New Levenberg-Marquardt Training Algorithms
  Contents
  Index
Following Gelenbe's notation, let us write
\[ q_i = \frac{\lambda^{+}(i)}{r(i) + \lambda^{-}(i)} \qquad (10.10) \]
\[ \lambda^{+}(i) = \sum_{j} q_j\, w^{+}(j,i) + \Lambda(i) \qquad (10.11) \]
\[ \lambda^{-}(i) = \sum_{j} q_j\, w^{-}(j,i) + \lambda(i) \qquad (10.12) \]
where $i$ represents the neuron index, $w^{+}(j,i)$ and $w^{-}(j,i)$ the weights, $\Lambda(i)$ and $\lambda(i)$ the excitation and inhibition external signals for neuron $i$, and $q_i$ the output of neuron $i$. Define
\[ D(i) = r(i) + \lambda^{-}(i), \qquad (10.13) \]
recalling that
\[ r(i) = \sum_{j} \left[ w^{+}(i,j) + w^{-}(i,j) \right]. \qquad (10.14) \]
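As a concrete illustration (not part of the original derivation), the fixed point of Eqs. 10.10-10.12 can be computed by simple successive substitution; the function and variable names below are ours:

```python
import numpy as np

def rnn_outputs(w_plus, w_minus, Lambda, lam, tol=1e-10, max_iter=10000):
    """Solve Gelenbe's RNN fixed-point equations by successive substitution.

    w_plus[j, i], w_minus[j, i]: excitatory/inhibitory weights from j to i.
    Lambda[i], lam[i]: external excitation/inhibition signals of neuron i.
    Returns q, where q[i] is the output of neuron i.
    """
    r = w_plus.sum(axis=1) + w_minus.sum(axis=1)   # r(i), Eq. 10.14
    q = np.zeros(len(Lambda))
    for _ in range(max_iter):
        # lambda+(i) = sum_j q_j w+(j,i) + Lambda(i); similarly lambda-(i)
        q_new = (q @ w_plus + Lambda) / (r + q @ w_minus + lam)
        if np.max(np.abs(q_new - q)) < tol:
            return q_new
        q = q_new
    return q
```

Plain substitution converges in the usual stable regime ($q_i < 1$); a production implementation would report non-convergence instead of silently returning.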
The mathematical formulation of the LM method applied to the RNN is as follows:
- We define a generic vector $x$, of $n$ elements, containing the adjustable parameters $w^{+}(u,v)$ and $w^{-}(u,v)$; for a fully connected network of $N$ neurons, $n = 2N^{2}$; $x_k$ is the parameter vector at step $k$ of the training process.
- We also define $g$, the gradient vector, where $g_i = \partial E / \partial x_i$ for $i = 1, \ldots, n$, $E$ being the cost function to minimize. Denote by $g_k$ the gradient vector at point $x_k$.
- The weight update based on Newton's method is as follows:
\[ x_{k+1} = x_k + \Delta x_k, \qquad (10.15) \]
where $\Delta x_k$ is the Newton direction obtained by solving the system
\[ H_k \, \Delta x_k = -g_k, \qquad (10.16) \]
$H_k$ being the Hessian matrix at step $k$.
For LM, the Hessian matrix $H_k$ is approximated by:
\[ H_k \approx J_k^{T} J_k + \mu I. \qquad (10.17) \]
Here, $\mu > 0$ is a scalar damping parameter, $I$ is the identity matrix, and $J_k$ is the Jacobian matrix at step $k$, given by
\[ J_k = \begin{pmatrix} \partial e_1/\partial x_1 & \cdots & \partial e_1/\partial x_n \\ \vdots & & \vdots \\ \partial e_M/\partial x_1 & \cdots & \partial e_M/\partial x_n \end{pmatrix}, \qquad (10.18) \]
where $e_i = q_i - d_i$ is the prediction error of neuron $i$, $q_i$ is the output of neuron $i$ at the output layer, $d_i$ is the desired output of neuron $i$ at the output layer, and $M$ is the number of outputs multiplied by the number of training examples.
Each element of matrix $J$ is computed using the following equations, obtained by differentiating Eqs. 10.10-10.12 with respect to the weights:
\[ \frac{\partial q}{\partial w^{+}(u,v)} = \gamma^{+}(u,v)\, q_u \left[ I - W \right]^{-1} \qquad (10.19) \]
\[ \frac{\partial q}{\partial w^{-}(u,v)} = \gamma^{-}(u,v)\, q_u \left[ I - W \right]^{-1} \qquad (10.20) \]
where $W$ is the matrix with elements $W(i,j) = \left[ w^{+}(i,j) - w^{-}(i,j)\, q_j \right]/D(j)$, and the row vectors $\gamma^{+}(u,v)$ and $\gamma^{-}(u,v)$ have elements
\[ \gamma^{+}(u,v)_i = \begin{cases} -1/D(i) & \text{if } u = i,\ v \neq i, \\ +1/D(i) & \text{if } u \neq i,\ v = i, \\ 0 & \text{otherwise,} \end{cases} \qquad \gamma^{-}(u,v)_i = \begin{cases} -(1+q_i)/D(i) & \text{if } u = v = i, \\ -1/D(i) & \text{if } u = i,\ v \neq i, \\ -q_i/D(i) & \text{if } u \neq i,\ v = i, \\ 0 & \text{otherwise.} \end{cases} \]
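These derivatives can be sketched numerically. The helper below is an illustrative implementation, under our reconstruction of Gelenbe's $\gamma$ vectors and matrix $W$ (all names are ours); each call returns one row of output derivatives with respect to a single weight:

```python
import numpy as np

def dq_dw(q, w_plus, w_minus, lam, u, v, kind="+"):
    """Derivatives dq/dw+(u,v) (kind="+") or dq/dw-(u,v) (kind="-").

    Solves dq = q_u * gamma(u,v) [I - W]^{-1}, with
    W[j, i] = (w+(j,i) - w-(j,i) q_i) / D(i) and D(i) = r(i) + lambda-(i).
    """
    n = len(q)
    r = w_plus.sum(axis=1) + w_minus.sum(axis=1)
    D = r + q @ w_minus + lam                        # denominators D(i)
    W = (w_plus - w_minus * q[None, :]) / D[None, :]
    gamma = np.zeros(n)
    if kind == "+":                                  # gamma+(u,v)
        gamma[v] += 1.0 / D[v]
        gamma[u] -= 1.0 / D[u]
    else:                                            # gamma-(u,v)
        gamma[u] -= 1.0 / D[u]
        gamma[v] -= q[v] / D[v]
    # Solve the transposed linear system rather than forming the inverse.
    return q[u] * np.linalg.solve((np.eye(n) - W).T, gamma)
```

Stacking the output-neuron components of these rows over all weights and all training examples yields the Jacobian $J$.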
From Eq. 10.16, we obtain:
\[ \Delta x_k = -H_k^{-1}\, g_k. \qquad (10.21) \]
Equations 10.17 and 10.21 give:
\[ \Delta x_k = -\left[ J_k^{T} J_k + \mu I \right]^{-1} g_k. \qquad (10.22) \]
By grouping Equations 10.15 and 10.22 we obtain:
\[ x_{k+1} = x_k - \left[ J_k^{T} J_k + \mu I \right]^{-1} g_k. \qquad (10.23) \]
To compute the vector $g_k$, we use:
\[ g_k = J_k^{T}\, e_k, \qquad (10.24) \]
where $e_k = (e_1, e_2, \ldots, e_M)^{T}$.
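Putting Eqs. 10.23 and 10.24 together, one training iteration reduces to a single damped linear solve. The sketch below is illustrative only (names are ours, and the adaptation schedule for $\mu$ is not shown):

```python
import numpy as np

def lm_step(x, e, J, mu):
    """One Levenberg-Marquardt update on the parameter vector x.

    e  : error vector (e_1, ..., e_M)
    J  : M x n Jacobian of e with respect to x
    mu : damping parameter (mu > 0)
    """
    g = J.T @ e                         # gradient vector, Eq. 10.24
    H = J.T @ J + mu * np.eye(x.size)   # approximate Hessian, Eq. 10.17
    return x - np.linalg.solve(H, g)    # update, Eq. 10.23
```

In standard LM practice, $\mu$ is decreased after a step that reduces the error and increased otherwise, so the update interpolates between Gauss-Newton (small $\mu$) and gradient descent (large $\mu$).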
Samir Mohamed
2003-01-08