Batch gradient descent with momentum training
Modifies the neural network weights and biases according to the training set.
BATCHgdwm.MLPnet(net, P, T, n.epochs, n.threads = 0L)
net         Neural network to train.
P           Input data set.
T           Target output data set.
n.epochs    Number of epochs to train.
n.threads   Number of threads to spawn. If the value is less than 1, the function
            spawns as many threads as available processors minus one. If OpenMP is
            not available, this argument is ignored.
This function returns the neural network object with its weights and biases modified according to the training data.
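The sketch below, assuming the AMORE package with its newff and sim functions and an illustrative toy data set, shows how a network configured for the BATCHgdwm method could be trained directly with this function; in practice the higher-level train interface is typically used instead.

library(AMORE)

# Toy training set (illustrative): learn y = x1 + x2 from uniformly sampled inputs
P <- matrix(runif(200, min = -1, max = 1), ncol = 2)
T <- matrix(rowSums(P), ncol = 1)

# Build a 2-5-1 multilayer perceptron set up for batch gradient descent with momentum
net <- newff(n.neurons = c(2, 5, 1),
             learning.rate.global = 1e-2,
             momentum.global = 0.5,
             error.criterium = "LMS",
             hidden.layer = "tansig",
             output.layer = "purelin",
             method = "BATCHgdwm")

# Update the weights and biases over 100 epochs; n.threads behaves as documented above
net <- BATCHgdwm.MLPnet(net, P, T, n.epochs = 100, n.threads = 0L)

# Simulate the trained network on the inputs and inspect the fit
y.hat <- sim(net, P)
mean((T - y.hat)^2)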
Manuel Castejón Limas. manuel.castejon@gmail.com
Joaquin Ordieres Meré. j.ordieres@upm.es
Ana González Marcos. ana.gonzalez@unirioja.es
Alpha V. Pernía Espinoza. alpha.pernia@unirioja.es
Francisco Javier Martinez de Pisón. fjmartin@unirioja.es
Fernando Alba Elías. fernando.alba@unavarra.es
Simon Haykin. Neural Networks – a Comprehensive Foundation. Prentice Hall, New Jersey, 2nd edition, 1999. ISBN 0-13-273350-1.