traingdm
Purpose
Gradient descent with momentum backpropagation

Syntax
[net,tr] = traingdm(net,Pd,Tl,Ai,Q,TS,VV)
info = traingdm(code)
Description

traingdm is a network training function that updates weight and bias values according to gradient descent with momentum.
traingdm(net,Pd,Tl,Ai,Q,TS,VV) takes these inputs,
net - Neural network.
Pd - Delayed input vectors.
Tl - Layer target vectors.
Ai - Initial input delay conditions.
Q - Batch size.
TS - Time steps.
VV - Either empty matrix [] or structure of validation vectors.
and returns,
net - Trained network.
TR - Training record of various values over each epoch.

Training occurs according to traingdm's training parameters, shown here with their default values:
net.trainParam.epochs     10     Maximum number of epochs to train
net.trainParam.goal       0      Performance goal
net.trainParam.lr         0.01   Learning rate
net.trainParam.max_fail   5      Maximum validation failures
net.trainParam.mc         0.9    Momentum constant
net.trainParam.min_grad   1e-10  Minimum performance gradient
net.trainParam.show       25     Epochs between showing progress
net.trainParam.time       inf    Maximum time to train in seconds
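For example, here is a sketch of displaying and overriding these defaults; the input range and layer sizes below are made up:

net = newff([0 5],[3 1]);    % a hypothetical two-layer network
net.trainFcn = 'traingdm';   % resets net.trainParam to the defaults above
net.trainParam               % display the training parameters
net.trainParam.mc = 0.8;     % override the momentum constant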
Dimensions for these variables are:
Pd - Nl x Ni x TS cell array, each element Pd{i,j,ts} is a Dij x Q matrix.
Tl - Nl x TS cell array, each element Tl{i,ts} is a Vi x Q matrix.
Ai - Nl x LD cell array, each element Ai{i,k} is an Si x Q matrix.
where
Ni = net.numInputs
Nl = net.numLayers
LD = net.numLayerDelays
Ri = net.inputs{i}.size
Si = net.layers{i}.size
Vi = net.targets{i}.size
Dij = Ri * length(net.inputWeights{i,j}.delays)
If VV is not [], it must be a structure of validation vectors,
VV.PD - Validation delayed inputs.
VV.Tl - Validation layer targets.
VV.Ai - Validation initial input conditions.
VV.Q - Validation batch size.
VV.TS - Validation time steps.
which is used to stop training early if the network performance on the validation vectors fails to improve or remains the same for max_fail epochs in a row.
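Here, for illustration, is a sketch of early stopping with a validation set, assuming the toolbox's train(net,P,T,Pi,Ai,VV) calling convention; train builds the delayed-input fields above from VV.P and VV.T, and the data values are made up:

P = [0 1 2 3 4 5]; T = [0 0 0 1 1 1];   % made-up training data
VV.P = [6 7 8];    VV.T = [1 1 1];      % made-up validation data
net = newff([0 8],[3 1],{'tansig','logsig'},'traingdm');
net = train(net,P,T,[],[],VV);          % stops after max_fail validation failures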
traingdm(code) returns useful information for each code string:
'pnames' - Names of training parameters.
'pdefaults' - Default training parameters.
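For instance, assuming the standard 'pdefaults' code string shared by the toolbox's training functions:

info = traingdm('pdefaults')   % structure of default training parameters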
Network Use

You can create a standard network that uses traingdm with newff, newcf, or newelm.
To prepare a custom network to be trained with traingdm:
1. Set net.trainFcn to 'traingdm'. This will set net.trainParam to traingdm's default parameters.
2. Set net.trainParam properties to desired values.
In either case, calling train with the resulting network will train the network with traingdm.
See newff, newcf, and newelm for examples.
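For illustration, here is a minimal sketch in the spirit of those examples; the data, input range, and layer sizes are made up:

P = [0 1 2 3 4 5];   % made-up inputs
T = [0 0 0 1 1 1];   % made-up targets
net = newff([0 5],[3 1],{'tansig','logsig'},'traingdm');
net.trainParam.lr = 0.05;      % raise the learning rate
net.trainParam.epochs = 300;   % train past the default 10 epochs
net = train(net,P,T);          % train invokes traingdm
Y = sim(net,P);                % simulate the trained network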
Algorithm

traingdm can train any network as long as its weight, net input, and transfer functions have derivative functions.
Backpropagation is used to calculate derivatives of performance perf with respect to the weight and bias variables X. Each variable is adjusted according to gradient descent with momentum,
dX = mc*dXprev + lr*mc*dperf/dX
where dXprev is the previous change to the weight or bias.
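To make the update concrete, here is a self-contained sketch of the rule applied to a toy performance function perf = sum(X.^2), whose gradient dperf/dX = 2*X is known in closed form. This is an illustration, not the toolbox implementation; the gradient term is subtracted so that the step descends perf:

mc = 0.9;                  % momentum constant (net.trainParam.mc)
lr = 0.01;                 % learning rate (net.trainParam.lr)
X = [1; -2];               % toy weight/bias vector
dXprev = zeros(size(X));   % previous change to the weights and biases
for epoch = 1:100
    gX = 2*X;                    % dperf/dX for perf = sum(X.^2)
    dX = mc*dXprev - lr*mc*gX;   % momentum step, descending the gradient
    X = X + dX;                  % apply the update
    dXprev = dX;                 % remember the change for the next epoch
end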
Training stops when any of these conditions occur:
1. The maximum number of epochs (repetitions) is reached.
2. The maximum amount of time has been exceeded.
3. Performance has been minimized to the goal.
4. The performance gradient falls below min_grad.
5. Validation performance has increased more than max_fail times since the last time it decreased (when using validation).
See Also

newff, newcf, traingd, traingda, traingdx, trainlm