We have shown that the Random Neural Network
(RNN) [45,51,46] performs well for multimedia quality
evaluation. A possible problem with RNN in large
applications (a large number of neurons and large training
sets) is that training can take a significant amount of time. The training algorithm in the software available
for this tool is the gradient descent algorithm proposed by the inventor of
RNN, Erol Gelenbe [47]. This algorithm can be slow and may require a large number of iterations to reach the desired performance. In addition, it suffers from zigzag behavior: the error decreases toward a local minimum, then increases again, then decreases, and so on.
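This zigzag effect is easy to reproduce. The following minimal sketch (illustrative only, not the RNN training code; the quadratic error surface and step size are our own choices) runs fixed-step gradient descent on an ill-conditioned error function: the iterate oscillates across the steep direction while creeping along the shallow one.

    import numpy as np

    # Illustrative only: fixed-step gradient descent on an ill-conditioned
    # quadratic error E(w) = 0.5*(a*w1^2 + b*w2^2) with a >> b. The gradient
    # is steep along w1 and shallow along w2, so the iterate overshoots and
    # flips sign along w1 (the zigzag) while making slow progress along w2.
    a, b = 20.0, 1.0
    step = 0.09  # near the stability limit 2/a = 0.1, to make the zigzag visible
    w = np.array([1.0, 1.0])
    for i in range(15):
        grad = np.array([a * w[0], b * w[1]])  # dE/dw
        w = w - step * grad
        print(f"iter {i:2d}  w1 = {w[0]: .4f}  w2 = {w[1]: .4f}")
    # w1 alternates in sign from step to step while w2 shrinks only slowly.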
These drawbacks oriented our research toward finding new training
algorithms for RNN, inspired by the fact that for ANN many training methods with different characteristics are available. We present in this Chapter two new training algorithms for RNN. The first one is inspired by the
Levenberg-Marquardt (LM) training algorithm for ANN [55]. The second one is inspired by a recently proposed training algorithm for ANN referred to as LM with adaptive momentum [8], which aims to overcome some of the drawbacks of the traditional LM method for feedforward neural networks.
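For reference, the classical LM update that inspired our first algorithm replaces the plain gradient step with w <- w - (J^T J + mu*I)^(-1) J^T e, where J is the Jacobian of the error vector e with respect to the weights and mu is a damping factor. The sketch below is a generic ANN-style illustration under these standard definitions, not the RNN variant derived later in this Chapter; the linear fitting example is hypothetical.

    import numpy as np

    def lm_step(w, J, e, mu):
        """One classical Levenberg-Marquardt update:
        w <- w - (J^T J + mu*I)^(-1) J^T e,
        where J is the Jacobian of the error vector e with respect to the
        weights w, and mu is the damping factor (large mu behaves like
        gradient descent, small mu like Gauss-Newton)."""
        H = J.T @ J + mu * np.eye(w.size)  # damped approximate Hessian
        return w - np.linalg.solve(H, J.T @ e)

    # Hypothetical usage: fit y = w0 + w1*x by least squares.
    x = np.linspace(0.0, 1.0, 8)
    y = 2.0 + 3.0 * x
    w = np.zeros(2)
    for _ in range(5):
        J = np.column_stack([np.ones_like(x), x])  # d(residual)/dw for this model
        e = J @ w - y                              # residual vector
        w = lm_step(w, J, e, mu=1e-3)
    print(w)  # approximately [2.0, 3.0]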
We start by describing the gradient descent algorithm and then
present our training algorithms for RNN. We then evaluate them through a comparative study of their performance. Finally, we give some conclusions.